url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://csnotes.me/Year_1/Computer_Systems/Databases/DBMS
|
# Multi User DBMS Architectures
• So far we have seen this model of interaction between end users and the database
• One central DBMS
• Users interact with DBMS using an application program
• Several issues still need clarification:
• Is the DBMS on the end user’s computer
• How are the users connected to the DBMS?
• Do we have one or more DBMS?
• Which computations are performed where?
• Is the database stored in one or many places
• Teleprocessing Architecture:
• The traditional (and most basic) architecture
• One computer with a single CPU
• Many (end-user) terminals all cabled to the central computer
• The terminal sends messages to the central computer
• All data processing in the central computer
• This puts tremendous burden on the central computer leading to decreased performance
• Nowadays, the trend is towards downsizing
• Replace expensive mainframe computers with cost-effective networks of personal computers
• Achieve the same/better
# File-Server architecture
Processing is distributed around a computer network
• Typically through a LAN
• One central file-server
• Every workstation has its own DBMS and its own user application
• Workstations request files they need from the file server
• File server acts like a “shared hard disk” (it has no DBMS)
SELECT fName, IName
FROM Branch b, Staff s
WHERE b.branchNo=s.branchNo AND b.street='163 Main St.'
• The file-server has no knowledge of SQL - the user’s DBMS has to request the whole tables Branch, Staff
• Therefore:
• Very large amount of network traffic (the tables may be huge)
• A full copy of the DBMS required on each workstation
• Concurrency/recovery/integrity control is more difficult since multiple DBMSs access the same files simultaneously
• The solution to these problems is a client-server architecture
# Client-Server architecture
• Client - Requires some resource
• Server - provides the resource
• Client/server are not always in the same machine/place
• Two-tier architecture
• Tier 1 (client): responsible for the presentation of data to the user
• Tier 2 (server): responsible for supplying data services to the user
• Typical Procedure:
• User gives a request to the client
• Client generates SQL query and sends it to the server
• Server accepts, processes the query and sends the result to the client
• Client formats the result for the user
• Increased performance: many client CPUs
• Reduced Hardware Costs: only the server needs increased storage and computational power
• Reduced communication costs: less data traffic (not unnecessary are transmitted)
• Database is still centralized - not a distributed database
# Three-Tier Client-Server Arch
• In modern system: 100s/1000s of users - need for increased enterprise
• Main problem of the client that prevent scalability - a “fat client” (many users) requires extensive resources on disk space/RAM/CPU power
• A new variant on the client server architecture
• Three layers, potentially running on different platforms
• First Tier: UI later (on end user’s computer)
• Second Tier: application server (connects to many users)
• Third Tier: database server (contains DBMS, communicates with the application server)
• “Thin clients” - increased performance of user’s computer
• Best example for a client: internet browser
• Smaller hardware cost for “thin clients”
• Easier application maintenance (centralized in one tier)
• Easier to modify/replace one tier without affecting others
• Easier load balancing between the different tiers
• Maps naturally to web applications
• It can be extended
• Separation of tasks into $n$ intermediate tiers for increased flexibility and scalability
# Distributed DBMS
• So far we have seen centralized database systems
• single database, located at one site
• Controlled by one DBMS
• We can improve database performance:
• Using networks of computers (decentralized approach)
• It mirrors the organizational structure:
• Logically distributed into divisions, departments, projects…
• Physically distributed into offices, flats, units, factories…
• Main targets
• Make all data accessible to all units
• Store the data proximate to the location where it is most frequently used
• Full functionality and efficiency
• Distributed Database
• A logically interrelated collection of shared data physically distributed over a network
• Distributed DBMS (DDBMS)
• The software system that can manage the distributed database
• It makes the distribution transparent (invisible) to users
• In a DDBMS
• A single logical database, which is split into fragments
• Each fragment is stored on one (or more) computers, under the control of a separate DBMS
• All of these computers are connected by a communications network
• Sites have local autonomy: independent processing of local data (via local applications)
• Sites have access to global applications (to process data fragments stored on other computers)
• Not all sites have local applications/local data
• Data fragments may be replicated in more sites (data consistency must be considered)
# Distributed processing vs Distributed DBMS
Distributed processing:
• A centralized database that is accessed over a computer network
• For example: the client server architecture
• This is not the same as a distributed DBMS
# Design of a distributed DBMS
In addition to ER modelling, we have to consider also:
• Fragmentation:
• How to break a relation into fragments
• Fragments can be horizontal/vertical/mixed
• Allocation:
• How fragments are allocated at the several sites
• Aim is to reach an “optimal” distribution (efficient, reliable,…)
• Replication
• Which fragments are stored in multiple sites (and which sites)
Choices for Fragmentation and Allocation:
• Based on how the database is to be used
• Quantitative and Qualitative information is used
• Quantitative information (mainly for fragmentation)
• The frequency with which specific transactions are run
• The (usual) sites from which transactions are run
• Desired performance criteria for the transactions
• Qualitative information (mainly for allocation):
• The relations/attributes/tuples being accessed
• The type of access (read/write)
• Strategic objectives for the choices about the fragments
• Locality of reference
• Data to be stored close to where it is used
• If a fragment is used at several sites then replication is useful
• Reliability and availability
• Improved by replication
• If one site fails, there are other fragment copies available
Further strategic objectives
• Acceptable performance
• Bad allocation results in “bottleneck” effects (a site receives too many requests so has bad performance)
• Also: Bad allocation caused underutilized resources
• Cost of storage capacities
• Cheap mass storage to be used at sites, whenever possible
• This must be balanced against locality of reference
• Minimal communication costs
• Minimum retrieval costs when max locality of reference, or when each site has its own copy of data
• But when replicated data is updated
• All copies of this data must be updated
• Increased network traffic/communication costs
Four alternative strategies for the placement of data
• Centralized
• Single database and DBMS
• Stored at one site with users distributed across the network
• Not distributed
• Partitioned
• Database partitioned into disjoint fragments
• Each data item assigned to exactly one site (no replication)
• Complete replication
• Complete copy of the database at each site
• Selective replication
• Combination of partitioning, replication and centralization
## Balance of the strategic objectives
Locality of referenceReliability and availabilityPerformanceStorage costsCommunication costs
CentralisedLowestLowestUnsatisfactoryLowestHighest
FragmentedHighLow for item; high for systemSatisfactoryLowestLow
Selective ReplicationHighLow for item, high for systemSatisfactoryAverageLow
Three correctness rules for the partitioned placement
1. Completeness
• If relation R is decomposed into fragments $R_1,R_2,...R_n$ each data item in R must appear in at least one fragment $R_i$
2. Reconstruction
• It must be possible to define a relational algebra expression that can reconstruct R from its fragments
3. Disjointness
• If a data item appears in fragment $R_i$, it should not appear in any other fragment
• Exception for vertical fragmentation: primary key attributed must be repeated for the reconstruction
# Fragmentation
Three main types of fragmentation
1. Horizontal
• A subset of the tuples of the relation
2. Vertical
• A subset of the attributes of the relation
3. Mixed
• A vertical fragment that is then horizontally fragmented
• Or a horizontal fragment that is then vertically fragmented
## Horizontal fragmentation
• Assume there exist two property types: ‘Flat’ and ‘House’
• We have a relation R with all properties for rent
• The horizontal fragmentation of R (by property type) is
$$P_1=\sigma_{type='House'}(PropertyForRent)$$
$$P_2=\sigma_{type='Flat'}(PropertyForRent)$$
• This fragmentation may be useful e.g. if we have separate applications dealing with flats/houses
• And it is correct
• Completeness: Each tuple is in either $P_1$ or in $P_2$
• Reconstruction: $R$ can be constructed from the fragments $P_1,P_2$ $R=P_1\cup P_2$
• Disjointness: There is no property that is both ‘flat’ and ‘house’
## Vertical Fragmentation
• For every staff member in a company
• The payroll department requires: staffNo, position, sex, salary
• The personnel department requires: staffNO, fName, DOB, branchNo
• We have a relation Staff with all staff members
• For this example, the vertical fragmentation of staff is:
$$S_1=\Pi_{\text{staffNo, position, sex, salary}}(Staff)$$
$$S_2=\Pi_{\text{staffNo, Name, DOB, branchNo}}(Staff)$$
• Both fragments include the primary key staffNo to allow reconstruction of Staff from $S_1$ and $S_2$
• This fragmentation is useful
• The fragments are stored at the departments that are needed
• Performance for every department is improved (as the fragment is smaller than the original relation Staff)
• This fragmentation is correct
• Completeness
• The primary key staffNo belongs to both $S_1$ and $S_2$
• Each other attribute is either in $S_1$ or in $S_2$
• Reconstruction
• R can be constructed from the fragments $S_1,S_2$ using the natural join operation
$$Staff=S_1\bowtie S_2$$
• Disjointness
• The fragments are disjoint except the primary key (which is necessary for the reconstruction)
• Reflects organizational structure
• Improves share-ability and local autonomy
• Improved availability and reliability
• Improved performance
• Smaller hardware cost
• Scalability
|
2020-10-20 12:07:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36549636721611023, "perplexity": 8275.550930827249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107872686.18/warc/CC-MAIN-20201020105000-20201020135000-00292.warc.gz"}
|
https://ogdf.github.io/doc/ogdf/classogdf_1_1_p_q_internal_node.html
|
# OpenGraph DrawingFramework
v. 2022.02 (Dogwood)
ogdf::PQInternalNode< T, X, Y > Class Template Reference
The class template PQInternalNode is used to present P-nodes and Q-nodes in the PQ-Tree. More...
#include <ogdf/basic/pqtree/PQInternalNode.h>
Inheritance diagram for ogdf::PQInternalNode< T, X, Y >:
## Public Member Functions
PQInternalNode (int count, PQNodeRoot::PQNodeType typ, PQNodeRoot::PQNodeStatus stat)
PQInternalNode (int count, PQNodeRoot::PQNodeType typ, PQNodeRoot::PQNodeStatus stat, PQInternalKey< T, X, Y > *internalPtr)
PQInternalNode (int count, PQNodeRoot::PQNodeType typ, PQNodeRoot::PQNodeStatus stat, PQInternalKey< T, X, Y > *internalPtr, PQNodeKey< T, X, Y > *infoPtr)
PQInternalNode (int count, PQNodeRoot::PQNodeType typ, PQNodeRoot::PQNodeStatus stat, PQNodeKey< T, X, Y > *infoPtr)
~PQInternalNode ()
The destructor does not delete any accompanying information class as PQLeafKey, PQNodeKey and PQInternalKey. More...
virtual PQInternalKey< T, X, Y > * getInternal () const
Returns a pointer to the PQInternalKey information. More...
virtual PQLeafKey< T, X, Y > * getKey () const
Returns 0. An element of type PQInternalNode does not have a PQLeafKey. More...
virtual PQNodeRoot::PQNodeMark mark () const
Returns the variable m_mark. More...
virtual void mark (PQNodeRoot::PQNodeMark m)
Sets the variable m_mark. More...
virtual bool setInternal (PQInternalKey< T, X, Y > *pointerToInternal)
setInternal() sets the pointer variable m_pointerToInternal to the specified adress of pointerToInternal that is of type PQInternalKey. More...
virtual bool setKey (PQLeafKey< T, X, Y > *pointerToKey)
Accepts only pointers pointerToKey = 0. More...
virtual PQNodeRoot::PQNodeStatus status () const
Returns the variable m_status in the derived class PQInternalNode. More...
virtual void status (PQNodeRoot::PQNodeStatus s)
Sets the variable m_status in the derived class PQInternalNode. More...
virtual PQNodeRoot::PQNodeType type () const
Returns the variable m_type in the derived class PQInternalNode. More...
virtual void type (PQNodeRoot::PQNodeType t)
Sets the variable m_type in the derived class PQInternalNode. More...
Public Member Functions inherited from ogdf::PQNode< T, X, Y >
PQNode (int count)
The (second) constructor is called, if no information is available or neccessary. More...
PQNode (int count, PQNodeKey< T, X, Y > *infoPtr)
The (first) constructor combines the node with its information and will automatically set the PQBasicKey::m_nodePointer (see basicKey) of the element of type PQNodeKey. More...
virtual ~PQNode ()
The destructor does not delete any accompanying information class as PQLeafKey, PQNodeKey and PQInternalKey. More...
bool changeEndmost (PQNode< T, X, Y > *oldEnd, PQNode< T, X, Y > *newEnd)
The function changeEndmost() replaces the old endmost child oldEnd of the node by a new child newEnd. More...
bool changeSiblings (PQNode< T, X, Y > *oldSib, PQNode< T, X, Y > *newSib)
The function changeSiblings() replaces the old sibling oldSib of the node by a new sibling newSib. More...
int childCount () const
Returns the number of children of a node. More...
void childCount (int count)
Sets the number of children of a node. More...
bool endmostChild () const
The function endmostChild() checks if a node is endmost child of a Q-node. More...
PQNode< T, X, Y > * getEndmost (PQNode< T, X, Y > *other) const
Returns one of the endmost children of node, if node is a Q-node. More...
PQNode< T, X, Y > * getEndmost (SibDirection side) const
Returns one of the endmost children of node, if node is a Q-node. More...
PQNode< T, X, Y > * getNextSib (PQNode< T, X, Y > *other) const
The function getNextSib() returns one of the siblings of the node. More...
PQNodeKey< T, X, Y > * getNodeInfo () const
Returns the identification number of a node. More...
PQNode< T, X, Y > * getSib (SibDirection side) const
The function getSib() returns one of the siblings of the node. More...
int identificationNumber () const
Returns the identification number of a node. More...
virtual void mark (PQNodeMark)=0
mark() sets the variable PQLeaf::m_mark in the derived class PQLeaf and PQInternalNode. More...
PQNode< T, X, Y > * parent () const
The function parent() returns a pointer to the parent of a node. More...
PQNode< T, X, Y > * parent (PQNode< T, X, Y > *newParent)
Sets the parent pointer of a node. More...
PQNodeType parentType () const
Returns the type of the parent of a node. More...
void parentType (PQNodeType newParentType)
Sets the type of the parent of a node. More...
int pertChildCount () const
Returs the number of pertinent children of a node. More...
void pertChildCount (int count)
Sets the number of pertinent children of a node. More...
SibDirection putSibling (PQNode< T, X, Y > *newSib)
The default function putSibling() stores a new sibling at a free sibling pointer of the node. More...
SibDirection putSibling (PQNode< T, X, Y > *newSib, SibDirection preference)
The function putSibling() with preference stores a new sibling at a free sibling pointer of the node. More...
PQNode< T, X, Y > * referenceChild () const
Returns a pointer to the reference child if node is a P-node. More...
PQNode< T, X, Y > * referenceParent () const
Returns the pointer to the parent if node is a reference child. More...
bool setNodeInfo (PQNodeKey< T, X, Y > *pointerToInfo)
Sets the pointer m_pointerToInfo to the specified adress of pointerToInfo. More...
virtual void status (PQNodeStatus)=0
Sets the variable PQLeaf::m_status in the derived class PQLeaf and PQInternalNode. More...
virtual void type (PQNodeType)=0
Sets the variable PQInternalNode::m_type in the derived class PQLeaf and PQInternalNode. More...
## Private Attributes
PQNodeRoot::PQNodeMark m_mark
#m_mark is a variable, storing if a PQInternalNode is QUEUEUD, BLOCKED or UNBLOCKED (see PQNode) during the first phase of the procedure Bubble(). More...
PQInternalKey< T, X, Y > * m_pointerToInternal
m_pointerToInternal stores the adress of the corresponding internal information. More...
PQNodeRoot::PQNodeStatus m_status
m_status is a variable storing the status of a PQInternalNode. More...
PQNodeRoot::PQNodeType m_type
m_status is a variable storing the status of a PQInternalNode. More...
Protected Attributes inherited from ogdf::PQNode< T, X, Y >
List< PQNode< T, X, Y > * > * fullChildren
Stores all full children of a node during a reduction. More...
int m_childCount
int m_debugTreeNumber
Needed for debuging purposes. More...
PQNode< T, X, Y > * m_firstFull
Stores a pointer to the first full child of a Q-node. More...
int m_identificationNumber
Each node that has been introduced once into the tree gets a unique number. More...
PQNode< T, X, Y > * m_leftEndmost
PQNode< T, X, Y > * m_parent
Is a pointer to the parent. More...
PQNodeType m_parentType
Stores the type of the parent which can be either a P- or Q-node. More...
int m_pertChildCount
Stores the number of pertinent children of the node. More...
int m_pertLeafCount
Stores the number of pertinent leaves in the frontier of the node. More...
PQNodeKey< T, X, Y > * m_pointerToInfo
Stores a pointer to the corresponding information of the node. More...
PQNode< T, X, Y > * m_referenceChild
Stores a pointer to one child, the reference child of the doubly linked cirkular list of children of a P-node. More...
PQNode< T, X, Y > * m_referenceParent
Is a pointer to the parent, in case that the parent is a P-node and the node itself is its reference child. More...
PQNode< T, X, Y > * m_rightEndmost
Stores the right endmost child of a Q-node. More...
PQNode< T, X, Y > * m_sibLeft
Stores a pointer ot the left sibling of PQNode. More...
PQNode< T, X, Y > * m_sibRight
Stores a pointer ot the right sibling of PQNode. More...
List< PQNode< T, X, Y > * > * partialChildren
Stores all partial children of a node during a reduction. More...
## Detailed Description
### template<class T, class X, class Y> class ogdf::PQInternalNode< T, X, Y >
The class template PQInternalNode is used to present P-nodes and Q-nodes in the PQ-Tree.
This implementation does not provide different classes for both, P- and Q-nodes, although this might seem necessary in the first place. The reason why this is not done, is supported by the fact that the maintainance of both nodes in the tree is similar and using the same class for P- and Q-nodes makes the application of the templates by Booth and Lueker much easier.
The template class PQInternalNode offers the possibility of using four different kinds of constructors, depending on the usage of the different possible information classes PQInternalKey<T,X,Y> and PQNodeKey<T,X,Y>.
In all four cases the constructor expects an integer value count, setting the value of the variable m_identificationNumber in the base class, an integer value type setting the variable m_type of PQInternalNode and an integer value status setting the variable m_status of PQInternalNode.
Besides, the constructors accept additional information of type PQNodeKey and PQInternalKey. This information is not necessary when allocating an element of type PQInternalNode and results in the four constructors that handle all cases.
Using a constructor with the infoPtr storing the adress of an element of type PQNodeKey automatically sets the PQBasicKey::m_nodePointer (see basicKey) of this element of type PQNodeKey to the newly allocated PQInternalNode. See also PQNode since this is done in the base class.
Using a constructor with the PQInternalKeyPtr storing the adress of an element of type PQInternalKey automatically sets the PQBasicKey::m_nodePointer (see basicKey) of this element of type PQInternalKey to the newly allocated PQInternalNode.
Definition at line 76 of file PQInternalNode.h.
## ◆ PQInternalNode() [1/4]
template<class T , class X , class Y >
ogdf::PQInternalNode< T, X, Y >::PQInternalNode ( int count, PQNodeRoot::PQNodeType typ, PQNodeRoot::PQNodeStatus stat, PQInternalKey< T, X, Y > * internalPtr, PQNodeKey< T, X, Y > * infoPtr )
inline
Definition at line 79 of file PQInternalNode.h.
## ◆ PQInternalNode() [2/4]
template<class T , class X , class Y >
ogdf::PQInternalNode< T, X, Y >::PQInternalNode ( int count, PQNodeRoot::PQNodeType typ, PQNodeRoot::PQNodeStatus stat, PQInternalKey< T, X, Y > * internalPtr )
inline
Definition at line 95 of file PQInternalNode.h.
## ◆ PQInternalNode() [3/4]
template<class T , class X , class Y >
ogdf::PQInternalNode< T, X, Y >::PQInternalNode ( int count, PQNodeRoot::PQNodeType typ, PQNodeRoot::PQNodeStatus stat, PQNodeKey< T, X, Y > * infoPtr )
inline
Definition at line 109 of file PQInternalNode.h.
## ◆ PQInternalNode() [4/4]
template<class T , class X , class Y >
ogdf::PQInternalNode< T, X, Y >::PQInternalNode ( int count, PQNodeRoot::PQNodeType typ, PQNodeRoot::PQNodeStatus stat )
inline
Definition at line 122 of file PQInternalNode.h.
## ◆ ~PQInternalNode()
template<class T , class X , class Y >
ogdf::PQInternalNode< T, X, Y >::~PQInternalNode ( )
inline
The destructor does not delete any accompanying information class as PQLeafKey, PQNodeKey and PQInternalKey.
This has been avoided, since applications may need the existence of these information classes after the corresponding node has been deleted. If the deletion of an accompanying information class should be performed with the deletion of a node, either derive a new class with an appropriate destructor, or make use of the function CleanNode() of the class template PQTree.
Definition at line 146 of file PQInternalNode.h.
## ◆ getInternal()
template<class T , class X , class Y >
virtual PQInternalKey* ogdf::PQInternalNode< T, X, Y >::getInternal ( ) const
inlinevirtual
Returns a pointer to the PQInternalKey information.
Implements ogdf::PQNode< T, X, Y >.
Definition at line 170 of file PQInternalNode.h.
## ◆ getKey()
template<class T , class X , class Y >
virtual PQLeafKey* ogdf::PQInternalNode< T, X, Y >::getKey ( ) const
inlinevirtual
Returns 0. An element of type PQInternalNode does not have a PQLeafKey.
Implements ogdf::PQNode< T, X, Y >.
Definition at line 150 of file PQInternalNode.h.
## ◆ mark() [1/2]
template<class T , class X , class Y >
virtual PQNodeRoot::PQNodeMark ogdf::PQInternalNode< T, X, Y >::mark ( ) const
inlinevirtual
Returns the variable m_mark.
The variable m_mark describes the designation used in the first pass of Booth and Luekers algorithm called Bubble(). A P- or Q-node is either marked BLOCKED, UNBLOCKED or QUEUED (see PQNode).
Implements ogdf::PQNode< T, X, Y >.
Definition at line 202 of file PQInternalNode.h.
## ◆ mark() [2/2]
template<class T , class X , class Y >
virtual void ogdf::PQInternalNode< T, X, Y >::mark ( PQNodeRoot::PQNodeMark m )
inlinevirtual
Sets the variable m_mark.
Definition at line 205 of file PQInternalNode.h.
## ◆ setInternal()
template<class T , class X , class Y >
virtual bool ogdf::PQInternalNode< T, X, Y >::setInternal ( PQInternalKey< T, X, Y > * pointerToInternal )
inlinevirtual
setInternal() sets the pointer variable m_pointerToInternal to the specified adress of pointerToInternal that is of type PQInternalKey.
Observe that pointerToInternal has to be instantiated by the client. The function setInternal() does not instantiate the corresponding variable in the derived class. Nevertheless, using this function will automatically set the PQBasicKey::m_nodePointer of the element of type PQInternalKey to this PQInternalNode. The return value is always 1 unless pointerInternal was equal to 0.
Implements ogdf::PQNode< T, X, Y >.
Definition at line 183 of file PQInternalNode.h.
## ◆ setKey()
template<class T , class X , class Y >
virtual bool ogdf::PQInternalNode< T, X, Y >::setKey ( PQLeafKey< T, X, Y > * pointerToKey )
inlinevirtual
Accepts only pointers pointerToKey = 0.
The function setKey() is designed to set a specified pointer variable in a derived class of PQNode to the adress stored in pointerToKey that is of type PQLeafKey. The class template PQInternalNode does not store informations of type PQLeafKey.
setKey() ignores the informations as long as pointerToKey = 0. The return value then is 1. In case that pointerToKey != 0, the return value is 0.
Implements ogdf::PQNode< T, X, Y >.
Definition at line 164 of file PQInternalNode.h.
## ◆ status() [1/2]
template<class T , class X , class Y >
virtual PQNodeRoot::PQNodeStatus ogdf::PQInternalNode< T, X, Y >::status ( ) const
inlinevirtual
Returns the variable m_status in the derived class PQInternalNode.
The functions manage the status of a node in the PQ-tree. A status is any kind of information of the current situation in the frontier of a node (the frontier of a node are all descendant leaves of the node). A status can be anything such as EMPTY, FULL or PARTIAL (see PQNode). Since there might be more than those three possibilities, (e.g. in computing planar subgraphs) this function may be overloaded by the client.
Implements ogdf::PQNode< T, X, Y >.
Definition at line 217 of file PQInternalNode.h.
## ◆ status() [2/2]
template<class T , class X , class Y >
virtual void ogdf::PQInternalNode< T, X, Y >::status ( PQNodeRoot::PQNodeStatus s )
inlinevirtual
Sets the variable m_status in the derived class PQInternalNode.
Definition at line 220 of file PQInternalNode.h.
## ◆ type() [1/2]
template<class T , class X , class Y >
virtual PQNodeRoot::PQNodeType ogdf::PQInternalNode< T, X, Y >::type ( ) const
inlinevirtual
Returns the variable m_type in the derived class PQInternalNode.
The type of a PQInternalNode is either PNode or QNode (see PQNodeRoot).
Implements ogdf::PQNode< T, X, Y >.
Definition at line 227 of file PQInternalNode.h.
## ◆ type() [2/2]
template<class T , class X , class Y >
virtual void ogdf::PQInternalNode< T, X, Y >::type ( PQNodeRoot::PQNodeType t )
inlinevirtual
Sets the variable m_type in the derived class PQInternalNode.
Definition at line 230 of file PQInternalNode.h.
## ◆ m_mark
template<class T , class X , class Y >
PQNodeRoot::PQNodeMark ogdf::PQInternalNode< T, X, Y >::m_mark
private
#m_mark is a variable, storing if a PQInternalNode is QUEUEUD, BLOCKED or UNBLOCKED (see PQNode) during the first phase of the procedure Bubble().
Definition at line 239 of file PQInternalNode.h.
## ◆ m_pointerToInternal
template<class T , class X , class Y >
PQInternalKey* ogdf::PQInternalNode< T, X, Y >::m_pointerToInternal
private
m_pointerToInternal stores the adress of the corresponding internal information.
That is information not supposed to be available for leaves of the PQ-tree. The internal information must be of type PQInternalKey. The PQInternalKey information can be overloaded by the client in order to present different information classes, needed in the different applications of PQ-trees.
Definition at line 251 of file PQInternalNode.h.
## ◆ m_status
template<class T , class X , class Y >
PQNodeRoot::PQNodeStatus ogdf::PQInternalNode< T, X, Y >::m_status
private
m_status is a variable storing the status of a PQInternalNode.
A P- or Q-node can be either FULL, PARTIAL or EMPTY (see PQNode).
Definition at line 258 of file PQInternalNode.h.
## ◆ m_type
template<class T , class X , class Y >
PQNodeRoot::PQNodeType ogdf::PQInternalNode< T, X, Y >::m_type
private
m_status is a variable storing the status of a PQInternalNode.
A P- or Q-node can be either FULL, PARTIAL or EMPTY (see PQNode).
Definition at line 265 of file PQInternalNode.h.
The documentation for this class was generated from the following file:
|
2022-06-25 22:33:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.293128103017807, "perplexity": 7327.519204029843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036176.7/warc/CC-MAIN-20220625220543-20220626010543-00055.warc.gz"}
|
https://www.mechamath.com/calculus/product-rule-formula-proof-and-examples/
|
# Product Rule – Formula, Proof and Examples
The Product Rule is one of the main principles applied in Differential Calculus (or Calculus I). It is commonly used in deriving a function that involves the multiplication operation. The product rule was proven and developed using the backbone of Calculus, which is the limits. Other initial derivative methods like the chain rule have also been used to prove the principles of the product rule.
In this article, we will discuss everything about the product rule. We will cover its definition, formula, proofs, and application usage. We will also look at some examples and practice problems to apply the principles of the product rule.
##### CALCULUS
Relevant for
Learning about the product rule with examples.
See formula
##### CALCULUS
Relevant for
Learning about the product rule with examples.
See formula
## The Product Rule and its Formula
### What is the Product Rule?
The Product Rule is a rule which states that a product of at least two functions can be derived by getting the sum of the (a) first function in original form multiplied by the derivative of the second function and (b) second function in original form multiplied by the derivative of the first function. The first function is the first multiplicand of the given problem to be derived whilst the second function is the second multiplicand.
In another statement of the rule, it is when a product of the functions being derived is equal to the first function (or first multiplicand) in its original form multiplied by the derivative of the second function (or the second multiplicand) and then added to the original form of the second function (or the second multiplicand) in its original form multiplied by the derivative of the first function (or the first multiplicand).
### The Product Rule Formula
The product rule formula is:
where
• $latex u =$ first function $latex f(x)$ or the first multiplicand
• $latex v =$ first function $latex g(x)$ or the second multiplicand
Or in other forms, it can be:
$$\frac{d}{dx}(F(x)) = f(x) \cdot \frac{d}{dx}(g'(x)) + g(x) \cdot \frac{d}{dx}(f(x))$$
or
which is the most commonly used form of the product rule formula where
$latex u = f(x)$
$latex v = g(x)$
and $latex \frac{d}{dx}(uv)$ can also be $latex y’$, $latex F'(x)$, $latex{\Upsilon}’$ or other letters used to denote functions with the apostrophe symbol.
## Proofs of the Product Rule
There are three main methods that we can use to prove the product rule, by using limits, by using the chain rule, and by using logarithmic differentiation.
### Proof of the Product Rule by using limits
This proof uses the limit definition of a derivative:
$latex \frac{d}{dx} f(x) = \lim \limits_{h \to 0} {\frac{f(x+h)-f(x)}{h}}$
By using a substitution for $latex f(x)\cdot g(x)$, we can write the derivative of a product in terms of limits. Then, we can manipulate the numerator algebraically. If we use the laws of limits, we can solve the limits until we get the product rule.
Take a look at our article about the proofs of the product rule to learn how to prove the product rule step by step. In addition to the proof by using limits, you can also learn about the proofs by using the chain rule and by using logarithmic differentiation.
## When to use the Product Rule to find derivatives
The product rule formula is an efficient tool to use in order to derive given functions like the following:
a. $latex F(x) = f(x) \cdot g(x)$
where f(x) and g(x) are two multiplicands of the given function F(x).
b. $latex fg(x) = f(x) \cdot g(x)$
where f(x) and g(x) are two multiplicands and fg(x) is an operation of function.
c. $latex f(x) = u \cdot v$
where $latex u$ and $latex v$ are two multiplicands of the given function f(x).
d. $latex f(x) = x_1 \cdot x_2$
where $latex x_1$ and $latex x_2$ are two multiplicands of the given function f(x).
e. $latex G(x) = f(x) \cdot g(x) \cdot h(x)$
where f(x), g(x), and h(x) are three multiplicands of the given function G(x)
f. $latex H(x) = f(x) \cdot g(x) \cdot h(x) \cdot n(x)…$
where f(x), g(x), h(x), and n(x)… are various multiplicands of the given function H(x)
These are the most common examples of functions that use the product rule for derivation problems. Although you may argue that a function can be algebraically multiplied before it can be derived using the initial and simpler derivative methods (or even a function specific derivative formula), but that is not always the case, even in some basic multiplication problems that we may encounter.
Thus, we have the product rule to make it still possible to derive product of functions that are either very difficult to algebraically multiply or even impossible to be multiplied and simplified algebraically.
## How to use the Product Rule, a step by step tutorial
$latex f(x) = x^2 \sin{(x)}$
As you can observe, this given function has two multiplicands but they cannot be algebraically multiplied nor simplified anymore. But in order to derive this problem, we can use the product rule as shown by the following steps:
Step 1: It is always recommended to list the formula first if you are still a beginner.
$latex \frac{d}{dx}(uv) = uv’ + vu’$
Please take note that you may use any form of the product rule formula as long as you find it more efficient based on your preference or on the given problem.
Step 2: Identify how many multiplicands you have in the given problem. In this demonstration, our problem has two. The first one is $latex x^2$ and the second one is $latex \sin{(x)}$.
Step 3: In the chosen form of the product rule formula, we will identify the first multiplicand as $latex u$ and the second multiplicand as $latex v$.
Thus, we have
$latex u = x^2$
$latex v = \sin{(x)}$
Step 4: If you are a beginner, it is highly recommended that you individually derive each $latex u$ and $latex v$ first before you proceed in applying the product rule formula.
Thus, we have
$latex u’ = 2x$
$latex u’$ used the power formula.
$latex v’ = \cos{(x)}$
$latex v’$ used the derivative formula for trigonometric functions.
Step 5: Apply the product rule formula now by substituting the $latex u$, $latex u’$, $latex v$, and $latex v’$ in the product rule formula.
$latex \frac{d}{dx}(uv) = uv’ + vu’$
$$\frac{d}{dx}(uv) = (x^2) \cdot (\cos{(x)}) + (\sin{(x)}) \cdot (2x)$$
Step 6: Simplify algebraically and apply the necessary trigonometric identities as well as other operations in the new present functions of the derived equation whenever applicable.
$latex \frac{d}{dx}(uv) = x^2 \cos{(x)} + 2x \sin{(x)}$
Step 7: If you think the derived equation cannot be simplified any longer, declare it as your final derivative answer:
$latex f'(x) = x^2 \cos{(x)} + 2x \sin{(x)}$
For formality purposes, it is recommended that you use either $latex f'(x), y’,$ or $latex \frac{d}{dx}(f(x))$ as your derivative symbol on the left side of the derived final answer instead of $latex (uv)’$ or $latex \frac{d}{dx}(uv)$.
## Product Rule – Examples with answers
Using the formula detailed above, we can derive various functions that are expressed as products. Each of the following examples has its respective detailed solution. It is recommended for you to try to solve the sample problems yourself before looking at the solution so that you can practice and fully master this topic.
### EXAMPLE 1
Derive: $latex f(x) = x^3 (x-5)$
Step 1: List the product rule formula for reference:
$latex \frac{d}{dx}(uv) = uv’ + vu’$
Please take note that you may use any form of the product rule formula as long as you find it more efficient based on your preference or on the given problem.
Step 2: Identify how many multiplicands you have in the given problem. In this problem, we have two. The first one is $latex x^3$ and the second one is $latex (x-5)$.
Step 3: In the chosen form of the product rule formula of this demonstration, we will mark the first multiplicand as $latex u$ and the second multiplicand as $latex v$.
Thus, we have
$latex u = x^3$
$latex v = (x-5)$
Step 4: Derive $latex u$ and $latex v$ individually:
$latex u’ = 3x$
$latex u’$ used the power formula.
$latex v’ = (1-0)$
$latex v’$ used the power formula, the sum/difference of derivatives, and the derivatives of constants.
Step 5: Apply the product rule formula now by substituting the $latex u$, $latex u’$, $latex v$, and $latex v’$ in the product rule formula.
$latex \frac{d}{dx}(uv) = uv’ + vu’$
$$\frac{d}{dx}(uv) = (x^3) \cdot (1-0) + (x-5) \cdot (3x)$$
Step 6: Simplify algebraically:
$latex \frac{d}{dx}(uv) = x^3 + 3x (x-5)$
$latex \frac{d}{dx}(uv) = x^3 + 3x^2 – 15x$
Step 7: If you think the derived equation cannot be simplified any longer, declare it as your final derivative answer. The final answer is:
$latex f'(x) = x^3 + 3x^2 – 15x$
### EXAMPLE 2
Find the derivative of $latex f(x) = \sin{(x)} \tan{(x)}$.
Step 1: List the product rule formula for reference:
$latex \frac{d}{dx}(uv) = uv’ + vu’$
Please take note that you may use any form of the product rule formula as long as you find it more efficient based on your preference or on the given problem.
Step 2: Identify how many multiplicands you have in the given problem. In this problem, we have two. The first one is $latex \sin{(x)}$ and the second one is $latex \tan{(x)}$.
Step 3: In the chosen form of the product rule formula of this demonstration, we will mark the first multiplicand as $latex u$ and the second multiplicand as $latex v$.
Thus, we have
$latex u = \sin{(x)}$
$latex v = \tan{(x)}$
Step 4: Derive $latex u$ and $latex v$ individually:
$latex u’ = \cos{(x)}$
$latex u’$ used the derivative formula for trigonometric functions.
$latex v’ = \sec^{2}{(x)}$
$latex v’$ used the derivative formula for trigonometric functions.
Step 5: Apply the product rule formula now by substituting the $latex u$, $latex u’$, $latex v$, and $latex v’$ in the product rule formula.
$latex \frac{d}{dx}(uv) = uv’ + vu’$
$$\frac{d}{dx}(uv) = (\sin{(x)}) \cdot (\sec^{2}{(x)}) latex + (\tan{(x)}) \cdot (\cos{(x)})$$
Step 6: Simplify algebraically and since we have a trigonometric function in our derivative, we can also apply some trigonometric identities applicable in our solution:
$$\frac{d}{dx}(uv) = \sin{(x)} \sec^{2}{(x)} + \tan{(x)} \cos{(x)}$$
$$\frac{d}{dx}(uv) = (\sin{(x)}) (\frac{1}{\cos{(x)}})^2 + (\frac{\sin{(x)}}{\cos{(x)}}) (\cos{(x)})$$
$$\frac{d}{dx}(uv) = (\sin{(x)}) (\frac{1^{2}}{\cos^{2}{(x)}}) + (\frac{\sin{(x)}}{\cos{(x)}}) (\cos{(x)})$$
$$\frac{d}{dx}(uv) = (\frac{\sin{(x)}}{\cos{(x)}}) (\frac{1}{\cos{(x)}}) + (\frac{\sin{(x)}}{\cos{(x)}}) (\cos{(x)})$$
$$\frac{d}{dx}(uv) = \sec{(x)} \tan{(x)} + \sin{(x)}$$
Step 7: If you think the derived equation cannot be simplified any longer, declare it as your final derivative answer. The final answer is:
$latex f'(x) = \sec{(x)} \tan{(x)} + \sin{(x)}$
### EXAMPLE 3
Derive: $latex x^{2} \sin^{2}{(x)}$
Step 1: List the product rule formula for reference:
$latex \frac{d}{dx}(uv) = uv’ + vu’$
Step 2: Identify how many multiplicands you have in the given problem. In this problem, we have two. The first one is $latex x^{2}$ and the second one is $latex \sin^{2}{(x)}$.
Step 3: In the chosen form of the product rule formula of this demonstration, we have the first multiplicand as $latex u$ and the second multiplicand as $latex v$.
Thus, we have
$latex u = x^{2}$
$latex v = \sin^{2}{(x)}$
Step 4: Derive $latex u$ and $latex v$ individually:
$latex u’ = 2x$
$latex u’$ used the power formula.
$latex v’ = 2 \sin{(x)} \cos{(x)}$
$latex v’$ used the the chain rule formula followed by the derivative formula for trigonometric functions.
Step 5: Apply the product rule formula now by substituting the $latex u$, $latex u’$, $latex v$, and $latex v’$ in the product rule formula.
$latex \frac{d}{dx}(uv) = uv’ + vu’$
$$\frac{d}{dx}(uv) = x^{2} \cdot (2 \sin{(x)} \cos{(x)}) + (\sin^{2}{(x)}) \cdot (2x)$$
Step 6: Simplify algebraically and since we have a trigonometric function in our derivative, we can also apply some trigonometric identities applicable in our solution:
$$\frac{d}{dx}(uv) = x^{2} \cdot (2 \sin{(x)} \cos{(x)})+ (\sin^{2}{(x)}) \cdot (2x)$$
$$\frac{d}{dx}(uv) = 2x^{2} \sin{(x)} \cos{(x)} + 2x \sin^{2}{(x)}$$
$$\frac{d}{dx}(uv) = x^{2} (2 \sin{(x)} \cos{(x)}) + 2x \sin^{2}{(x)}$$
$$\frac{d}{dx}(uv) = x^{2} \sin{(2x)} + 2x \sin^{2}{(x)}$$
Step 7: If you think the derived equation cannot be simplified any longer, declare it as your final derivative answer. The final answer is:
$$\frac{d}{dx}(uv) = x^{2} \sin{(2x)} + 2x \sin^{2}{(x)}$$
### EXAMPLE 4
What is the derivative of $latex f(x) = 5x^7 \cot{(x^7)}$?
We start by listing the product rule for reference:
$latex \frac{d}{dx}(uv) = uv’ + vu’$
In this problem, we have two multiplicands in the function f(x). The first multiplicand is $latex 5x^7$ and the other one is $latex \cot{(x^7)}$.
We can set $latex u$ as the first multiplicand and $latex v$ as the second multiplicand.
Therefore, we have
$latex u = 5x^7$
$latex v = \cot{(x^7)}$
$latex f(x) = uv$
Now, we can use the product rule formula:
$latex f'(x) = uv’ + vu’$
$$\frac{d}{dx}f(x) = u \cdot \frac{d}{dx}(v) + v \cdot \frac{d}{dx}(u)$$
$$\frac{d}{dx}f(x) = 5x^7 \cdot \frac{d}{dx}(\cot{(x^7)}) + \cot{(x^7)} \cdot \frac{d}{dx}(5x^7)$$
Note: The derivative of $latex u$ is found using the power rule formula and the derivative of $latex v$ is found using the chain rule formula and the derivative formula for the trigonometric function.
By applying the product rule formula along with the other derivative formulas to be used for $latex u’$ and $latex v’$, we have:
$$\frac{d}{dx}f(x) = 5x^7 \cdot (-7x^6 \csc^{2}{(x^7)}) + \cot{(x^7)} \cdot (35x^6)$$
Simplifying algebraically, we get
$$\frac{d}{dx}f(x) = -35x^{13} \csc^{2}{(x^7)} + 35x^6 \cot{(x^7)}$$
$latex f'(x) = 35x^6 \cot{(x^7)} – 35x^{13} \csc^{2}{(x^7)}$
### EXAMPLE 5
What is the derivative of $latex f(x) = x^7 \sin{(\sin^{-1}{(x)})}$?
We list down the product rule formula for our reference:
$latex \frac{d}{dx}(uv) = uv’ + vu’$
Here, the first multiplicand is $latex x^7$ and the second multiplicand is $latex \sin{(\sin^{-1}{(x)})}$.
We will use $latex u$ for the first multiplicand and $latex v$ for the second multiplicand.
Therefore, we have
$latex u = x^7$
$latex v = \sin{(\sin^{-1}{(x)})}$
$latex f(x) = uv$
Now, we use the product rule formula to derive our given problem:
$latex f'(x) = uv’ + vu’$
$$\frac{d}{dx}f(x) = u \cdot \frac{d}{dx}(v) + v \cdot \frac{d}{dx}(u)$$
$$\frac{d}{dx}f(x) = x^7 \cdot \frac{d}{dx}(\sin{(\sin^{-1}{(x)})}) + \sin{(\sin^{-1}{(x)})} \cdot \frac{d}{dx}(x^7)$$
Note: In this problem, we derivative $latex u$ using the power rule formula and derive $latex v$ using the derivative formulas for trigonometric function and inverse trigonometric function.
By applying the product rule formula along with the other derivative formulas to be used for $latex u’$ and $latex v’$, we have:
$$\frac{d}{dx}f(x) = x^7 \cdot (\cos{(\sin^{-1}{(x)})} (\frac{1}{\sqrt{1-x^2}})) + \sin{(\sin^{-1}{(x)})} \cdot (7x^6)$$
Simplifying algebraically and applying trigonometric and inverse trigonometric operations and identities, we get
$$\frac{d}{dx}f(x) = x^7 \cdot (1) + 7x^6 \cdot (x)$$
$latex f'(x) = 8x^7$
## Product Rule – Practice problems
Solve the following differentiation problems and test your knowledge on this topic. Use the product rule formula detailed above to solve the exercises. If you have problems with these exercises, you can study the examples solved above.
|
2022-09-29 10:16:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9278916120529175, "perplexity": 311.3764231221489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00123.warc.gz"}
|
http://zqjr.jf-huenstetten.de/latex-document-example-pdf.html
|
# Latex Document Example Pdf
\title{This is a specimen title\tnoteref{t1,t2}} \tnotetext[t1]{This document is the results of the research project funded by the National Science Foundation. tex, like paper. Adds a couple new features to the commands you've been working with. · Then, you run a LaTeX compiler (we'll be using MiKTeX) to turn the file foo. Basic example¶. The body contains the main text of the document together with the so called floats (tables and figures). edu Abstract As mobile devices become pervasive, it is increasingly important to build mobile systems and applications that adequately address issues such as energy man-agement and disconnected operation. A useful application of pdfjam is for producing a handout from a file of presentation slides. It will embed figures saved during the logged session into the PDF document, as long as the graphs have been saved in. Inserting pages from an external PDF document within a LaTeX document 14 Dec 2013 on latex | pdf. Answer:According to your description, VeryPDF has two solutions for you: you can either use PDF Editor to overlay one PDF to another by software interface or you can use PDF Toolbox Command Line to overlay PDF to another programmatically by command line. Beyond that, it has many other capabilities due to a large amount of packages, such as Forest, which I used for laying out sentence trees in a college Linguistics class. It's on DICE machines, with custom document classes for the School of Informatics. tex' translates this into the LATEX document shown in Figures2and3. tex template, explains how to both compile it and generate the final Postscript and PDF document. Tips for Formatting Resumes Using Microsoft Word 2010 DON’T USE A TEMPLATE DON’T USE A TEMPLATE DON’T USE A TEMPLATE Bullets - How to create, move and format To create a bullet point, click on the “bullets” button at the top of document in the home menu. Here's an online LaTeX Wiki Book. If you have any questions, please contact your Graduate College academic counselor or call us at (405) 325-3811. Academic writing service in nigeria. Note that, like all bibliography references, quite a few compilations must be made for the document references to be displayed properly. Start your favorite text editor and type in the lines below exactly as shown \documentclass{article} \begin{document} This is my \emph{first} document prepared in \LaTeX. The package pdfpages let's you include a complete PDF or any combination of pages into a LaTeX document. The gnuplot input file is given below, and the output appears as Figure 1. Click the text or image you wish to edit. pdf and XeTeX-reference. Even that nowadays there are some more modern formats for example those related to ebooks (like. The Comprehensive TeX Archive Network (CTAN) is an ftp site which houses an enormous collection of TeX packages, styles, macros and sample files. zip to the files from typeinst. Learn more about document templates here. It takes a computer le, prepared according to the rules of LATEX and converts it to a form that may be printed on a high-quality printer, such as a laser writer, to produce a printed document of a quality comparable with good quality books and journals. tex le using any text editor and save it in the MiniPaper folder % this is hello. Sample Latex Files: I am distributing sample Latex files for the Report, Slides and Paper in one column and two column format. Templates help with placement of specific elements such as the author list in addition to providing guidance on stylistic elements such as abbreviations and acronyms. The basic layout of a LaTeX file. The TeX and LaTeX FAQ (list of Frequently Asked Questions) The TeX and LaTeX FAQ (list of Frequently Asked Questions) CTAN: The Comprehensive TeX Archive Network; This is the place to find everything related to TeX, LaTeX, and AMS-TeX. 3 Single Equations that are Too Long: multline If an equation is too long, we have to wrap it somehow. Skip navigation Sign in. Again, I’ve done the hard work of getting this Markdown code to produce the correct output. This sample demonstrates how to work with Portable Document Format (PDF) files programmatically. LaTeX Math Wizard Bibliography User View Options Help 'Quick Build PDF V part label examp e tiny n examplel. 2 This Document This document serves a couple of functions. On a PDF forms document displayed on a viewer application, we can enter text in more than one line or in multiple paragraph by pressing the Enter key at the end of each line or paragraph. BiBTeX files IEEEtranBST. In case you are with a similar problem but do not know which are he actual offsets, you can create two new lengths, store the original values, and restore them again:. \end{document} After typing in the commands to LaTeX (which are the instructions preceded by the backslash character) and the text of a sample paper, save them in a file with a name ending in. The latter can also be created directly from within R using > tools::texi2pdf("example-1. A number of examples are given below. It's based on the 'WYSIWYM' (what you see is what you mean) idea, meaning you only have to focus on the contents of your document and the computer will take care of the formatting. This class docstring shows how to use sphinx and rst syntax. zip to the files from typeinst. Making Portable Document Format (PDF) files from LaTeX source is a little tricky, because the PDF file must incorporate not only the images for any figures, but also the font glyphs (or at least, partial fonts) for anything outside the standard handful of fonts in the basic PostScript set. Please note that using this template is not a substitute for learning the fundamentals of LaTeX - both typesetting your material and compiling it to generate fully functional PDF. When the compiling is complete TeXworks' PDF viewer will open and display your document. 3 - 15 January 2019) can also be downloaded from CTAN packages and the JACoW repository, hosted by Github. It's on DICE machines, with custom document classes for the School of Informatics. NET supports popular file formats such as PDF, XFA, TXT, HTML, PCL, XML, XPS, EPUB, TEX and image formats as well as allows to create PDF documents directly through the API or via XML templates and can create forms or manage form fields embedded in the PDF documents. o The appendix (appendices) appears after the document text, but before the References. Case study of iep. Package is released under the free LaTeX Project Public License. ) viewer, editor for your browser. Quick and Dirty Instructions for the New ACM Typesetting This paper provides a sample of a LATEX document which the author's guide is available as acmart. Publication generation workflow in R. Lines and paragraphs reflow automatically, or you can click and drag to resize elements. LaTeX uses document classes, to influence the overall layout of your document. Convert the files with Pandoc to generate HTML (can be self-contained), PDF, Word documents In RStudio the knit button combines steps 3+4 behind the scene to compile the documents from the RMarkdown file. 50 Updated: 8/14 1. \bibliographystyle{plain} \bibliography{bibfile} BibTEX example. There is a definite advantage to this if you have, say, a large number of images in your results chapter, which you don't need when you're working on, say, the technical introduction. homework_tex_example. Abbreviate the month of. This is a sample PDF document. And at the end of the document, you can see the full reference. After that, you can use targets to generate pdf: make pdf: This will generate the Latex file using several passes and running all the necessary commands. bib, which are plain text (ASCII) files. Only basic features are covered, and a vast amount of detail has been omitted. docx to latex, doc to latex, pdf to latex, rtf to latex, odt to latex, ott to latex, bib to latex, pdb to latex, psw to latex, latex to latex, sdw to latex, stw to latex, sxw to latex, vor to latex, Related Tools:. Click the Add Text button at the top of the PDF page. Make a Document in LaTeX - Beginners Guide: Ok, so you've decided to take the plunge and learn Latex (the typesetting language, not the plasticy stuff) but how do you create a document that you can add to and publish pdf's until your heart is content?. Differences in Templates for Exporting to LaTeX and PDF. Result, after compiling the below source: mythesis. To sum up, if you are a Windows user, TeXnicCenter is one of the best LaTeX editors and you don’t have to look any further. Whether you're a novice to LaTeX who needs help modifying a template you like, or an advanced LaTeX user with no time to create a suite of document types for your business, you're covered. This has however the disadvantage that the frame size and DIN A 4 format do not agree, i. \documentclass[]{report} % list options between brackets \usepackage{} % list packages between braces % type user-defined commands here \begin{document}. I’ve sort of been cludging around using \hspace‘s and \textcolor but I’ve always meant to figure out the right way to do things so this seemed like a good chance to figure out how to do it right. Its purpose is to create familiarity with current thinking and research on. ò Click on the Save button. edu Samuel Lembo, Esq. Now, without entering into a detailed technical explanation we’ll provide the following example which should make it clear what LaTeX is about. bst, which produces citations in "author (year)" format. To create a PDF document from FrameMaker it is necessary to create a postscript intermediate because the Distiller job options cannot be manipulated from within FrameMaker itself. Screenshot 3: Overlay page placed on template. ternary plot. before the \begin{document} command. Even that nowadays there are some more modern formats for example those related to ebooks (like. Introduction Your next homework assignment is to write a short paper about our l − V r diagram project, essentially explaining the project goals, observations and data processing, and re-porting on our preliminary results. That's useful especially for tables and some of the other tedious bits of LaTeX. To produce (horizontal) blank space within a paragraph, use \hspace, followed by the length of the blank space enclosed within braces. This document is an unofficial reference manual for LATEX, a document preparation system, version of March 2018. electrumadf Electrum ADF fonts collection. It is found in nature, but synthetic latexes can be made by polymerizing a monomer such as styrene that has been emulsified with surfactants. The PDF also contains examples how to execute R within the Markdown document. Hook on an essay examples. LaTeX Community is an excellent resource for answering any questions you have about LaTeX. Compile LaTeX Documents. Skip navigation Sign in. This class docstring shows how to use sphinx and rst syntax. Some of these commands are LATEX macros, while oth- ers belong to plain TEX; no attempt to di eren- tiate them is made. For now, remember that you will generally go through the DVI file, and you can print the document from the YAP previewer. Creating your first document. Update changes em dash to hyphen in equation numbers. In this article, I will introduce you to GNU Emacs and describe how to use it to create LaTeX documents. This is achieved by the use of two operating modes, paragraph and math mode. download the Microsoft Word template and use this as a guide when creating your document in LaTex. Robin's Blog My LaTeX preamble April 8, 2011. Contents 1 Introduction 1 2 Example 2 3 Citation Styles 3 4 Making a. Hope this will useful. eps (the figure file) Put these into a directory (right click on the link and select Save Target As). tex ndocumentclassfarticleg nbeginfdocumentg Hello, nLaTeX! nendfdocumentg Compile using the RSI Make le $cd ˘/RSI/MiniPaper/$ make hello. Common BibTEX style files abbrv Standard abstract alphawith abstract alpha Standard apa APA plain Standard unsrt Unsorted The LATEX document should have the following two lines just before \end{document}, where bibfile. Example of how to set up an un-numbered footnote for previously published chapters. Try pandoc online PDF with numbered sections and a custom LaTeX header: pandoc -N --template=template. What is LATEX? Basic Mark-up Embedded Graphics Examples Resources Getting Started with LATEX for a Technical Document or Thesis Stephen Carr, IST Client Services University of Waterloo. Schedule 2: Appendix 21 – Statutory Declaration Template – Redacted Version SCHEDULE 2 – Appendix 21. It is most often used for medium-to-large technical or scientific documents but it can be used for almost any form of publishing. tar for LaTeX, etc. It takes a computer le, prepared according to the rules of LATEX and converts it to a form that may be printed on a high-quality printer, such as a laser writer, to produce a printed document of a quality comparable with good quality books and journals. Note that relevant interests and skills can be demonstrated through campus and volunteer. When Doxygen is finished processing, in the latex directory there's a file called 'refman. This free online PDF converter allows you to save a PDF document as a set of separate PNG images, ensuring better image quality and size than any other PDF to image converters. The freeware download presented here is called LaTeX. This is true. Notebooks may be exported to a range of static formats, including HTML (for example, for blog posts), reStructured-Text, LaTeX, PDF, and slide shows, via thenbconvertcommand. PDF Mod is a simple tool for modifying PDF documents. cls sets the page layout so that there are one inch margins all around (no matter what size paper you're using) and provides commands that make it easy to format questions, create. Of course a PostScript version can be made by using latex. Just let us know if anything is confusing, and if you feel like you need to find an attorney to help with something specific, we're always happy to help with that too. In the following image, you can see an example LaTeX file (. It can rotate, extract, remove and reorder pages via drag and drop. As you probably guessed, this presentation was made using the Beamer class. This template is used for the general user, who already knows the handling and use of a software. LaTeX Community is an excellent resource for answering any questions you have about LaTeX. Your working document consists of one or more text files containing your thesis content and mark-up tags, analogous to HTML. Past releases can be downloaded here. cls, print two-sided by default: the even pages and the odd pages have different layouts; other document classes use the twoside option to print two-sided. 3 Single Equations that are Too Long: multline If an equation is too long, we have to wrap it somehow. That's all you need to know about the syntax now. The package is loaded into LaTeX by the command \usepackage{amsmath}, which goes in the preamble of the LaTeX source file (that is, between the documentclass line and the begin{document} line). texto document. It allows you to create and manage LaTeX file directly on your browser and generate a PDF. The main file is " mythesis. No installation, real-time collaboration, version control, hundreds of LaTeX templates, and more. Highcharts export to pdf example. It takes a computer le, prepared according to the rules of LATEX and converts it to a form that may be printed on a high-quality printer, such as a laser writer, to produce a printed document of a quality comparable with good quality books and journals. (Note: Windows users will need to indicate “Save as Type>Document Template (*. We convert your PDF document to LaTeX. We suggest that you read this document carefully before you begin preparing your manuscript. This information is provided for educational purpose only. Following is a list of the documents that describe the memory services:. Compile LaTeX Documents. \end{document} After typing in the commands to LaTeX (which are the instructions preceded by the backslash character) and the text of a sample paper, save them in a file with a name ending in. o All appendices must be listed in your Table of Contents and have the same title format in the text and in the Table of Contents as your other chapter/section headings. paper for an IEEE Power & Energy Society Conference are presented. As the commercial arm of LaTeXTemplates. 84-89 of The LaTeX Companion. This listing contains short descriptions of the control sequences that are likely to be handy for users of LAT EX v2. Therefore, create a pdf document using the tool pdflatex which is generally included with the development suite and then use Adobe Acrobat Pro to convert to PDF/A. The information here is sorted by application area, so that it is grouped by the scientific communities that use similar notation and LaTeX constructs. The key to getting started with LaTeX, as with most things, is to start small; do something that you can throw away. Notebooks may be exported to a range of static formats, including HTML (for example, for blog posts), reStructured-Text, LaTeX, PDF, and slide shows, via thenbconvertcommand. Acrobat shows you the tools you’ll need. template thesis. Your LaTeX document can be easily modified to make a poster or a screen presentation similar to (and better than) PowerPoint. tex The bare bones template -- just add your paper. Title of website, webpage, or document [Online]. TEX file Creation Latex is a computer language - one creates and compiles files having names with a. However, the Create LaTeX/PDF Report command does not directly print but instead creates a TeX document in LaTeX2e. 53, is available [give location]. dvi References [1] Sample Latex Document, Paul A. This is a sample American Meteorological Society (AMS) LATEX template. Here’s a simple example > 2 + 2 [1] 4 What I actually typed in foo. \title{This is a specimen title\tnoteref{t1,t2}} \tnotetext[t1]{This document is the results of the research project funded by the National Science Foundation. ShareLaTeX is so easy to get started with that you'll be able to invite your non-LaTeX colleagues to contribute directly to your LaTeX documents. Sample LaTeX document Carl Schwarz August 24, 2015 1 Including entire pages of *. Preface LATEX [1] is a typesetting system that is very suitable for producing scien- tific and mathematical documents of high typographical quality. \end{document}. Luckily, RStudio has a “compile PDF” button when you are editing RNW files, make that RNW->TEX->PDF conversion a one-click process (assuming you have no debugging to do). 1 April 4, 2003 Abstract This document describes how to (i) modify citation styles in your body text, (ii) make your own bibliography style (. Current releases can be found here. EXAMPLE OF A LATEX DOCUMENT SUE MCGLASHAN AND YAEL KARSHON 1. Creating PDF files using LaTeX LaTeX is a "document preparation system", not a word processor. Contents 1 Paragraphs 2. More on that later. After that, you can use targets to generate pdf: make pdf: This will generate the Latex file using several passes and running all the necessary commands. Using LaTeX to make PDF documents with Japanese characters Posted by ciaran on Tuesday, July 15th, 2008 4 Even if you know nothing about LaTeX, you can make your first Japanese PDF document by taking a copy of this example file JIS. texdoc is especially convenient to create LATEX documents that contain Stata output. To make a figure in LaTeX is simpler than it looks and just requires a few commands. As you probably guessed, this presentation was made using the Beamer class. A Simple LaTeX Template. If you look at the PDF version, then after the first instance of 'LaTeX' in the introduction, you should notice a numbered reference. Robin's Blog My LaTeX preamble April 8, 2011. And at the end of the document, you can see the full reference. It can rotate, extract, remove and reorder pages via drag and drop. Quick and Dirty Instructions for the New ACM Typesetting This paper provides a sample of a LATEX document which the author's guide is available as acmart. eledform Define textual variants eledmac Typeset scholarly editions elegantbook An Elegant LaTeX Template for Books elegantnote Elegant LaTeX Template for Notes elegantpaper An Elegant LaTeX Template for Working Papers elements. Make a Document in LaTeX - Beginners Guide: Ok, so you've decided to take the plunge and learn Latex (the typesetting language, not the plasticy stuff) but how do you create a document that you can add to and publish pdf's until your heart is content?. It is also suitable for producing all sorts of other documents, from simple letters to. That's all you need to know about the syntax now. June18,2016 Onthe28thofApril2012thecontentsoftheEnglishaswellasGermanWikibooksandWikipedia projectswerelicensedunderCreativeCommonsAttribution-ShareAlike3. [email protected] Although my choice of colors may leave something to be desired, this example requirements specification was written with LaTeX, and converted to HTML using the latex2html conversion program. PDF is the Portable Document Format from Adobe that is widely adopted as a way to share documents for printing purposes. In the olden days, say 2014, we could write R markdown documents that did not require the. We will show raw LATEX code using a Verbatim command. Benefits of Understanding the Family Tree. Conversion to HTML is straightforward. As an example, a mathematics examination can maintain a correct correspondence between questions and answers by using sagetex to have Sage compute one from the other. Skip navigation Sign in. tex, the template file, and the md5sum for all of those files. TeX is a powerful text processing language and is the required format for some periodicals now. Result, after compiling the below source: mythesis. LaTeX is the de facto standard for the communication and publication of scientific documents. \end{document} After typing in the commands to LaTeX (which are the instructions preceded by the backslash character) and the text of a sample paper, save them in a file with a name ending in. Look at removing Markdown cells with a LaTeX child template:. However, naively converting the Postscript output to Adobe's Portable Document Format (pdf) using Acrobat Distiller or ps2pdf yields a document that displays poorly electronically, though it will look fine when printed. Lewlnskv}}. This document is for people who have never used. An Example LaTeX Document Evan Chen February 10, 2019 Abstract This is an example of a LATEX document, complete with theorems and head- ers and the like. moderncv -- a modern curriculum vitae class Moderncv provides a documentclass for typesetting curricula vitae in various styles. Depending on how you work in LaTeX, cropping may not be necessary. Also, be sure that your source file and PDF are identical. It only takes about a minute, and again — it's FREE. The current Ghostscript release 9. Compile this LaTeX code to a PDF Start Collaboration Mode It offers programmable desktop publishing features and extensive facilities for automating most aspects of typesetting and desktop publishing, including numbering and cross-referencing, tables and figures, page layout, bibliographies, and much more. An interpreter for the PostScript language and for PDF. A comment line begins with a percent symbol (%) and continues to the end of the line. What is LATEX? Basic Mark-up Embedded Graphics Examples Resources Getting Started with LATEX for a Technical Document or Thesis Stephen Carr, IST Client Services University of Waterloo. Differences in Templates for Exporting to LaTeX and PDF. There are several documents included with TeX Live (and probably with other distributions that ship XeTeX); in particular, look for XeTeX-notes. Simply upload your document and we will do the rest. tex, wrapped. Note that, like all bibliography references, quite a few compilations must be made for the document references to be displayed properly. See example below. Convert the files with Pandoc to generate HTML (can be self-contained), PDF, Word documents In RStudio the knit button combines steps 3+4 behind the scene to compile the documents from the RMarkdown file. Clarivate Analytics | ScholarOne Manuscripts™ Author LaTex File Upload Guide Page 2. In this case, submit your manuscript in PDF format only and supply the source files when requested. The PDF also contains examples how to execute R within the Markdown document. Lines which start with % are viewed as having been commented out. Conversion to HTML is straightforward. healthyeating. We append the source of this file should you want to study it. for instance a 13cm on 10cm has a frame in Beamer as format. Currently, the best choice is TeX/LaTeX, because this open format does not hide information. LaTeX uses document classes, to influence the overall layout of your document. dtx with pdflatex. tplx, which correspond to LaTeX document classes. Launch Emacs by typing:. 3 of this license or (at your option) any later version. I duplicated this in Sublime Text. Users of word processors such as Microsoft Word should save their documents as PDF and submit that. 100% Compatible WRITER supports DOC, DOCX, TXT, HTM, DOT, DOTX and is fully compatible with Microsoft Word ®. OSHA recommends the use of engineering or work practice controls to manage or eliminate hazards to the greatest extent possible. Postscript PDF SVG Veusz document. Getting Started. conditions of the LaTeX Project Public License, either version 1. Second, the source code of this document is meant to be an exemplar of “best practice” LATEX coding. Transparency in PDF files refers to objects on a page, such as images or text, which are transparent or ‘show through’. each page of the PDF is converted into a TIFF (for example, a PDF with 50 pages would result in 50 TIFFs). Most mathematical input is entered in math mode. To obtain an evaluation copy of Microsoft Visual Studio 2013,. By default, this TeX file is removed, however if you want to keep it (e. It takes a computer le, prepared according to the rules of LATEX and converts it to a form that may be printed on a high-quality printer, such as a laser writer, to produce a printed document of a quality comparable with good quality books and journals. [email protected] That is a faq. Then click the "Build and view current file" button. tex you would need to type: pdflatex filename. zip contains the files that are used to “style” your $$\LaTeX$$ document typeinst. make fast: This will generate a pdf in only one pass. Output files Device independent output: simple. Regardless of what citation style you want, you need to have your references formatted in BibTeX format. Index Terms—Class, IEEEtran, LATEX, paper, style, template, typesetting. Moderncv aims to be both straightforward to use and customizable, providing four ready-made styles (classic, casual, banking and oldstyle) and allowing one to define his own by modifying colors, fonts, icons, etc. Almost any PDF document can easily be converted to PDF/A-1b, using automated software tools such as the "Convert to PDF/A-1b" option of the Preflight tool of Acrobat Pro. Abbreviate the month of. User Story Template and (DocX version) This template is provided in three parts. Speci cally, exam. Equation 12-pt Times New Roman, indented, flush left, line return above and below. Un éditeur LaTeX en ligne facile à utiliser. Online document converter This free online RTF document converter allows you to convert your documents and ebooks to the RTF format. tex (note: "foo" is standing in for your file name). Pas d’installation, collaboration en temps réel, gestion des versions, des centaines de modèles de documents LaTeX, et plus encore. For example:---. While you are at it, you should also activates the SyncTeX feature by selecting you editor right below in the "PDF-TeX Sync support" section. Hence, this: \documentclass{article} \begin{document} This is a sample document. References can be "cited" during editing the LaTeX document using, for example, \cite {key} command, and later at the document compilation step LaTeX input files must be processed with LaTeX and BibTeX. bst, which produces citations in "author (year)" format. pdf files into LaTeX At the coarsest level, the entire output from a procedure (or several procedures) can be sent to a pdf file and then this pdf file can included into your document directly. In our Latex template file we can use \keyname to place the values inside of our document. It was one of the early adopters of TEX for its typese‰ing. 1 Document structure Point size Latex cmd User-de ned * Sample. With XeLaTeX and LuaLaTeX, Sphinx configures the LaTeX document to use polyglossia, but one should be aware that current babel has improved its support for Unicode engines in recent years and for some languages it may make sense to prefer babel over polyglossia. If you have any questions, please contact your Graduate College academic counselor or call us at (405) 325-3811. 0 or higher). For this, please compare the. Making Portable Document Format (PDF) files from LaTeX source is a little tricky, because the PDF file must incorporate not only the images for any figures, but also the font glyphs (or at least, partial fonts) for anything outside the standard handful of fonts in the basic PostScript set. Follow these links to see how to resolve typical reasons for. PdfPageRenderOptions PdfPageRenderOptions PdfPageRenderOptions PdfPageRenderOptions PdfPageRenderOptions: Represents display settings for a single page of a Portable Document Format (PDF) document, such as the page's background color and its encoding type. Please note: if you deliver a work consisting of lot of spelling or grammar mistakes or which. Releases and Release History. The body contains all the text, gures, tables, etc. It was one of the early adopters of TEX for its typese‰ing. 0 or higher). The following python lines produce Latex code which creates Latex variables out of our key-value pairs. Introduction to Beamer Beamer is a LaTeX class for creating slides for presentations PDF/LaTeX the document one each of PDF, PNG, and JPG types. Perhaps the reader will find some ancient history to be helpful. I personally prefer the amsart documentstyle (see rst line of this document). Formulas and math symbols. Inserting a pdf file in latex. The template holds dummy text with examples for creating tables, figures, an index and glossary. The body of the document is sandwiched between \begin{document} and \end{document}. Adobe Export PDF makes it easy to convert PDFs to Microsoft Word or Excel for editing. The complete LaTeX file (and the pdf output) can be found in my repository, latex-homework-template. You can change any of these later but you might as well set it up correctly now. 11, NOVEMBER 2002 1 How to Use the IEEEtran LATEX Class Michael Shell, Member, IEEE (Invited Paper) Abstract—This article describes how to use the IEEEtran. Section 3 of the SRS document contains the heart of the specification of exactly what the system should do and how. SignNow’s Smart Field technology enables you to generate customized documents on the flyand integrate eSigning into your workflows without using the API. Welcome to the PGF and TikZ examples gallery. \lipsum[1-7] \end{document} Save that le and in the terminal window run pdflatex latex-second. Sample of LaTeX thesis source files. 0 offers a very nice, smart, and easy-to-use alternative to non-LaTeX users, in particular, the ability to import editable tables into a Word document. o The appendix (appendices) appears after the document text, but before the References. In the olden days, say 2014, we could write R markdown documents that did not require the. It is also suitable for producing all sorts of other documents, from simple letters to. homework_tex_example. For this, please compare the. Multiple documents may be combined via drag and drop. The package introduces some new commands for the frontmatter part. tex , going to a shell command line and typing " pdflatex JIS. LaTeX Source of Example 1. In case you want to process myarticle. While printable documents are thought of as static, PDF can be dynamic and interactive, see for example these interactivity demos. This way, the document will not be represented exactly as the designer wanted it to, but at least the text won’t reflow. can be included in the final pdf document using Latex. Consider the following example, which. LaTeX Community is an excellent resource for answering any questions you have about LaTeX. vi Short contents IV Customizing 265 9 Customizing LATEX 267 V Long bibliographies and indexes 309 10 BIBTEX 311 11 MakeIndex 332 A Math symbol tables 345 B Text symbol tables 356 C The AMS-LATEX sample article 360. tex) and then I have generated the corresponding PDF for visualization (Example. In this example document, I have included one. Each column is the sub-mode. tex as an example) for this is: > latex aipsamp > bibtex aipsamp > latex aipsamp > latex aipsamp Here, the rst invocation of latex has the e ect of rewriting the aipsamp. Eramian Beamer 1/9. However, the Create LaTeX/PDF Report command does not directly print but instead creates a TeX document in LaTeX2e. It bypasses the requirement to generate an interim format such as DocBook, Apache FO, or LaTeX. Long Term Storage of Diesel STORAGE LIFE. each page of the PDF is converted into a TIFF (for example, a PDF with 50 pages would result in 50 TIFFs). ‚e Association for Computing Machinery1 is the world’s largest educational and sci-enti•c computing society, which delivers resources that advance computing as a science and a profession.
|
2019-11-12 19:46:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.847721517086029, "perplexity": 2972.3273180708306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665726.39/warc/CC-MAIN-20191112175604-20191112203604-00301.warc.gz"}
|
http://eprint.iacr.org/2012/150
|
## Cryptology ePrint Archive: Report 2012/150
Circular chosen-ciphertext security with compact ciphertexts
Dennis Hofheinz
Abstract: A key-dependent message (KDM) secure encryption scheme is secure even if an adversary obtains encryptions of messages that depend on the secret key. Such key-dependent encryptions naturally occur in scenarios such as harddisk encryption, formal cryptography, or in specific protocols. However, there are not many provably secure constructions of KDM-secure encryption schemes. Moreover, only one construction, due to Camenisch, Chandran, and Shoup (Eurocrypt 2009) is known to be secure against active (i.e., CCA) attacks.
In this work, we construct the first public-key encryption scheme that is KDM-secure against active adversaries and has compact ciphertexts. As usual, we allow only circular key dependencies, meaning that encryptions of arbitrary *entire* secret keys under arbitrary public keys are considered in a multi-user setting.
Technically, we follow the approach of Boneh, Halevi, Hamburg, and Ostrovsky (Crypto 2008) to KDM security, which however only achieves security against passive adversaries. We explain an inherent problem in adapting their techniques to active security, and resolve this problem using a new technical tool called lossy algebraic filters'' (LAFs). We stress that we significantly deviate from the approach of Camenisch, Chandran, and Shoup to obtain KDM security against active adversaries. This allows us to develop a scheme with compact ciphertexts that consist only of a constant number of group elements.
Category / Keywords: key-dependent messages, chosen-ciphertext security, public-key encryption
Publication Info: Full version of Eurocrypt 2013 paper
Date: received 22 Mar 2012, last revised 19 Jan 2013
Contact author: Dennis Hofheinz at kit edu
Available format(s): PDF | BibTeX Citation
Note: Additional intuition for the main scheme.
Short URL: ia.cr/2012/150
[ Cryptology ePrint archive ]
|
2017-04-26 17:21:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5849325656890869, "perplexity": 7007.1281622061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121453.27/warc/CC-MAIN-20170423031201-00060-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://vlab.stern.nyu.edu/docs/lrrisk
|
V-Lab
## Motivation
Prior to the financial crisis of 2007 and 2008, standard risk management focused primarily on short-run risks such as 1-day value at risk ($\mathrm{VaR}$) or 10-day $\mathrm{VaR}$. Engle (2009) points out that most assets in portfolios are held longer than these horizons, and as such longer term risk measures are important from a risk management perspective. For example, in periods of low volatility excess leverage likely carries little short-run risk. However, volatility is mean reverting and will rise in the long run; hence, any short-term risk measure will understate the risk of a leveraged portfolio. The Long Run $\mathrm{VaR}$ section of V-Lab is designed to account for mean-reversion of volatility and build $\mathrm{VaR}$ models that account for “the risk that risk will change”.
## Definition
Consider an asset’s log-return series ${r}_{t}=\mu +{\epsilon }_{t}$, where $\mu$ is the expected return and ${\epsilon }_{t}$ is a zero-mean white noise. The total log-return between date $t$ and date $t+k$ is then naturally defined as:$rt,t+k=∑i=1krt+k$The standard definition of the $k$-day ahead $\mathrm{VaR}$ of a position in this asset is the 1% or 5% quantile of the return distribution for ${r}_{t,t+k}$. V-Lab’s long run risk measures use simulation based methods to calculate the $\mathrm{VaR}$ at horizons of $k=30$ and $k=365$.
## Esimation
One way to calculate $\mathrm{VaR}$ is to simulate future realizations of the return process and use the resulting simulations to calculate $\mathrm{VaR}$. For both of the models prescribed in the models section of the documentation, a volatility model is fit to historical data on each day. The resulting model is then simulated ahead 10,000 times, for a horizon of 1-year in advance. All simulations are bootstrapped - that is historical shocks to the return process are drawn at random to simulate each path. $\mathrm{VaR}$ is then calculated using both the 1% and 5% quantile of the 10,000 simulated return paths. Finally, logarithmic returns are converted back to arithmetic returns.
## References
Engle, Robert F., The Risk that Risk Will Change. Journal Of Investment Management (JOIM), Fourth Quarter 2009. https://www.joim.com/article/the-risk-that-risk-will-change/
Engle, R. F. and J. G. Rangel, 2008. The Spline-GARCH Model for Low-Frequency Volatility and Its Global Macroeconomic Causes. Review of Financial Studies 21(3): 1187-1222. https://www.jstor.org/stable/40056848
Glosten, L. R., R. Jagannathan, and D. E. Runkle, 1993. On The Relation between The Expected Value and The Volatility of Nominal Excess Return on stocks. Journal of Finance 48: 1779-1801. https://doi.org/10.1111/j.1540-6261.1993.tb05128.x
Zakoian, J. M., 1994. Threshold Heteroscedastic Models. Journal of Economic Dynamics and Control 18: 931-955. https://doi.org/10.1016/0165-1889(94)90039-6
|
2019-12-10 12:58:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 22, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7336710691452026, "perplexity": 1898.0037275282425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540527620.19/warc/CC-MAIN-20191210123634-20191210151634-00215.warc.gz"}
|
http://hitchhikersgui.de/Portal:Algebra
|
# Portal:Algebra
## Algebra
Algebra (from Arabic "al-jabr", literally meaning "reunion of broken parts") is one of the broad parts of mathematics, together with number theory, geometry and analysis. In its most general form, algebra is the study of mathematical symbols and the rules for manipulating these symbols; it is a unifying thread of almost all of mathematics. As such, it includes everything from elementary equation solving to the study of abstractions such as groups, rings, and fields. The more basic parts of algebra are called elementary algebra; the more abstract parts are called abstract algebra or modern algebra. Elementary algebra is generally considered to be essential for any study of mathematics, science, or engineering, as well as such applications as medicine and economics. Abstract algebra is a major area in advanced mathematics, studied primarily by professional mathematicians.
Elementary algebra differs from arithmetic in the use of abstractions, such as using letters to stand for numbers that are either unknown or allowed to take on many values. For example, in ${\displaystyle x+2=5}$ the letter ${\displaystyle x}$ is unknown, but the law of inverses can be used to discover its value: ${\displaystyle x=3}$. In E = mc2, the letters ${\displaystyle E}$ and ${\displaystyle m}$ are variables, and the letter ${\displaystyle c}$ is a constant, the speed of light in a vacuum. Algebra gives methods for writing formulas and solving equations that are much clearer and easier than the older method of writing everything out in words.
The word algebra is also used in certain specialized ways. A special kind of mathematical object in abstract algebra is called an "algebra", and the word is used, for example, in the phrases linear algebra and algebraic topology.
A mathematician who does research in algebra is called an algebraist.
## Selected article
3D illustration of a stereographic projection from the north pole onto a plane below the sphere.
The stereographic projection is a particular mapping (function) that projects a sphere onto a plane. The projection is defined on the entire sphere, except at one point — the projection point. Where it is defined, the mapping is smooth and bijective. It is also conformal, meaning that it preserves angles. On the other hand, it does not preserve area, especially near the projection point.
Intuitively, then, the stereographic projection is a way of picturing the sphere as the plane, with some inevitable compromises. Because the sphere and the plane appear in many areas of mathematics and its applications, so does the stereographic projection; it finds use in diverse fields including differential geometry, complex analysis, cartography, geology, and crystallography.
...Archive Image credit: Mark Howison Read more...
## WikiProjects
The Mathematics WikiProject is the center for mathematics-related editing on Wikipedia. Join the discussion on the project's talk page.
Project pages
Essays
Subprojects
Related projects
## Selected picture
These are all the connected Dynkin diagrams, which classify the irreducible root systems, which themselves classify simple complex Lie algebras and simple complex Lie groups. These diagrams are therefore fundamental throughout Lie group theory.
## Wikimedia
Algebra on Wikinews Algebra on Wikiquote Algebra on Wikibooks Algebra on Wikisource Algebra on Wiktionary Algebra on Wikimedia Commons News Quotations Manuals & Texts Texts Definitions Images & Media
Purge server cache
|
2018-04-26 13:40:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.674301266670227, "perplexity": 600.1789163009277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948214.37/warc/CC-MAIN-20180426125104-20180426145104-00274.warc.gz"}
|
http://mathhelpforum.com/algebra/148696-very-large-exponent.html
|
Math Help - Very large exponent
1. Very large exponent
I need help defining the following very large exponent:
10,000 ^ 13,766,400
No doubt, this number is astronomical...but surely (hopefully) it can be expressed with scientific notation. The calculation has maxed out my TI-83 as well as any online calculator designed to handle large calculations. I do not consider myself to be mathematically inclined, so I have no way of knowing whether or not I am asking for something absurd. I appreciate any help in this matter.
2. Originally Posted by cerpintaxt33
I need help defining the following very large exponent:
10,000 ^ 13,766,400
Here is an example: $(10,000)^{321}=(10^4)^{321}=(10)^{1284}$
3. Originally Posted by cerpintaxt33
I need help defining the following very large exponent:
10,000 ^ 13,766,400
No doubt, this number is astronomical...but surely (hopefully) it can be expressed with scientific notation. The calculation has maxed out my TI-83 as well as any online calculator designed to handle large calculations. I do not consider myself to be mathematically inclined, so I have no way of knowing whether or not I am asking for something absurd. I appreciate any help in this matter.
In scientific notation it is: $1 \times 10^{13,766,404}$
You should not need a calculator for this just the definition of scientific notation and the laws of exponents.
(most calculators and computer languages are limited to decimal exponents of 3 or less digits, some not being able to handle even 3 digits)
CB
4. Originally Posted by cerpintaxt33
I need help defining the following very large exponent:
10,000 ^ 13,766,400
No doubt, this number is astronomical...but surely (hopefully) it can be expressed with scientific notation. The calculation has maxed out my TI-83 as well as any online calculator designed to handle large calculations. I do not consider myself to be mathematically inclined, so I have no way of knowing whether or not I am asking for something absurd. I appreciate any help in this matter.
Note that $10^4 = 10,000$
So, since sicientfic notation is expressed in base 10, and the number in front of the 10 in this case is 1 we have:
$(10,000)^{13,766,400} = (10^4)^{13,766,400}$
And an exponant to an exponant is simply the exponants multiplied together so we have:
$(10,000)^{13,766,400} = 10^{4*13,766,400} = 10^{55,065,600}$
And since its multiplied by 1 we have:
$1$X $10^{55,065,600}$
5. Thanks, I appreciate it. Any idea how many digits this number would be?
6. Originally Posted by cerpintaxt33
Thanks, I appreciate it. Any idea how many digits this number would be?
Well since $10^n$ adds $n-1$ zeros to $10$ (and thus $n-1$ digits ), and becuause 10 already adds two digits, I beleive that this number would have $55,065,601$ digits
|
2014-07-28 16:58:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.804413914680481, "perplexity": 591.59915212979}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261249.37/warc/CC-MAIN-20140728011741-00416-ip-10-146-231-18.ec2.internal.warc.gz"}
|
http://www.ijfis.org/journal/view.html?uid=893&vmd=Full
|
Title Author Keyword ::: Volume ::: Vol. 19Vol. 18Vol. 17Vol. 16Vol. 15Vol. 14Vol. 13Vol. 12Vol. 11Vol. 10Vol. 9Vol. 8Vol. 7Vol. 6Vol. 5Vol. 4Vol. 3Vol. 2Vol. 1 ::: Issue ::: No. 4No. 3No. 2No. 1
PSR: PSO-Based Signomial Regression Model
SeJoon Park1, NagYoon Song2, Wooyeon Yu2, and Dohyun Kim2
1Division of Energy Resources Engineering and Industrial Engineering, Kangwon National University, Chuncheon, Korea
2Department of Industrial and Management Engineering, Myongji University, Yongin, Korea
Correspondence to: Dohyun Kim (ftgog@mju.ac.kr)
Received November 11, 2019; Revised December 10, 2019; Accepted December 12, 2019.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Regression analysis can be used for predictive and descriptive purposes in a variety of business applications. However, the successive existing regression methods such as support vector regression (SVR) have the drawback that it is not easy to derive an explicit function description that expresses the nonlinear relationship between an output variable and input variables. To resolve this issue, developed in this article is a nonlinear regression algorithm using particle swarm optimization (PSO) which is PSR. The output variables of PSR allow to obtain the explicit function description of the output variable using input variables. Three PSRs are proposed based on infeasible-particle update rules. Their experimental results show that the proposed approach performs similarly to and slightly better than the existing methods regardless of the data sets, implying that it can be utilized as a useful alternative when obtaining the explicit function description of the output variable using input variables and interpreting which of the original input variables are more important than others in the obtained regression model.
Keywords : Signomial regression, Particle swarm optimization, Meta-heuristic optimization
1. Introduction
Supervised learning is one of the representative machine learning methods that attempt to reveal hidden relationships between input data and output data. The main problems of supervised learning are largely classification and regression. Especially, regression analysis can be used for predictive and descriptive purposes in a variety of business applications. For this reason, many regression methods have been developed, including ordinary least squares (OLS) and support vector regression (SVR).
OLS aims to estimate the unknown parameters in a linear regression model in order to minimize the sum of squared differences between the observed and predicted values. OLS, one of the simplest methods, has the advantage that the output variable can be expressed as a function expression of the input variables. However, to express the nonlinear relationship between the input variables and the output variable, the analyst must manually generate the necessary variables.
SVR, proposed originally by Vapnik et al. in 1997 [1], is one of the most popular supervised learning algorithms widely used in regression analysis. Using kernel tricks to efficiently perform nonlinear regression and implicitly map inputs to higher-dimensional space, SVR’s performance is very superior. However, there are some problems. First, it is difficult to obtain a clear function representation of the output variable using input variables. Therefore, it is not easy to understand the relationship between the output variable and the input variables. Second, the selection of many parameters is important to the performance of the SVR, directly affecting its generalization and regression efficiency.
In this paper, we propose an algorithm that combines the advantages of OLS and SVR using the particle swarm optimization (PSO) method. The chief advantages of this PSO-based algorithm are that it can first derive an explicit function description that expresses the nonlinear relationship between the output variable and the input variables without kernel tricks, and that, simultaneously, it helps to select the optimal parameters for the generalized regression model.
This paper is organized as follows. Section 2 explains PSO-based non-linear regression algorithm. Section 3 discusses the performance of the proposed method. Finally, Section 4 draws conclusions.
2. PSO-Based Signomial Regression (PSR) Model
### 2.1 PSR Model
In this section, we present the signomial function. The signomial function f(x) is the sum of the monomial gi(x), which is the product of a number of positive variables (x1, …, xn) with exponents of real numbers (ri1, …, rin) [2].
$gi(x)=x1ri1x2ri2⋯xnrin,f(x)=∑i=1mβigi(x),$
where the coefficients βi is the real number.
Our goal is to obtain the function f(x) by estimating the parameters (βi and ri1, ri2, …, rin) using PSO.
### 2.2 Particle Swarm Optimization
In this research, the basic PSO proposed by Kennedy and Eberhart [3] was applied. In 2007, many implementations of the basic PSO were cataloged and defined by Bratton and Kennedy [4]. PSO has been applied to many optimization problems such as the traveling salesman problem [5] and scheduling problems [6]. PSO is also well known as a metaheuristic algorithm that can be utilized to optimize continuous nonlinear problems [7]. This is why PSO also has been widely used to select optimal parameters of support vector machine (SVM) for classification and regression [810]. In this paper, we will present a new, PSO-based supervised learning method for regression problems. To the best of our knowledge, there have been no previous studies on regression problems using PSO.
PSO is an iterative procedure maintaining multiple “particles”( solutions) and their “directions and speeds,” which are referred to simply as particles and velocities, respectively. In this research, PSO was used to estimate rij for all i, j in Eq. (1). The objective of PSO is to find the estimated rij and the coefficients in Eq. (1) in order to minimize the mean square error (MSE) of Eq. (1). First, the particle and velocity is defined as below:
$Particle: Rk=(r11k,r12k,…,rmnk),Velocity: Vk=(v11k,v12k,…,vmnk),$
where k is the particle index. The overall PSO procedure is schematized in
In the beginning, multiple particles and their velocities are randomly generated. The initial $rijk$ is between −1 and 1, and $∑j=1n|rijk|≤1$ for ∀ k. The initial $vijk$ is between −0.01 and 0.01. Table 1 shows how to initialize the k th particle and velocity. All $rijk$ are initialized from i = 1 to i = m term by term. In each term the independent variable index, j, is randomly selected one by one. In each selection, a uniform $[-1+∑∀j|rijk|,1-∑∀j|rijk|]$ random variable is generated and assigned to $rijk$. This allows $∑j=1n|rijk|$ to be less than or equal to one.
In each iteration, each particle and each velocity is updated based on the current particle, current velocity, local best particle, and global best particle. If $Rkt$ and $Vkt$ are the particle and velocity for the tth iteration, each particle is randomly changed by Eq. (3).
$Vkt+1=Vkt+2·R1·(global best particle-Rkt)+2·R2·(local best particle-Rkt),Rkt+1=Rkt+Vkt+1,$
where, R1 and R2 are uniform [0, 1] random variables. The local best particle is the best particle (solution) over all iterations for particle k, and the global best particle is the best particle (solution) over all iterations for all particles. The local and global best particles are updated when the coefficients and MSE are updated in Figure 1. The coefficients in Eq. (1) and its MSE are determined for each particle by the least square method. As shown in Figure 1, if the best global particle is not changed over TR iterations, the PSO procedure is terminated. This is the termination rule of the PSO procedure.
### 2.3 Three Particle and Velocity Update Methods
When a particle and its velocity are updated, the particle can obey constraint $∑j=1n|rijk|≤1$ (Figure 1). Three different update methods are proposed in this research, namely PSR1, PSR2, and PSR3, as shown in Table 2. In PSR1, if $∑j=1n|rijk|>1$, set $rijk=0$ one by one until $∑j=1n|rijk|≤1$. A smaller $|rijk|$ is selected, and set to zero, in order to select a less important $rijk$. In PSR 2, the constraint is relaxed as each $|rijk|≤1$. Therefore, if $|rijk|>1$, set $rijk$ to 1 or −1. In PSR3, PSO is able to keep infeasible particles (solutions) such as $|rijk|>1$. However, this method adds sufficient penalty to the associated MSE to prevent selection of an infeasible particle as the local or global best solution. In the next section, the results of the PSRs and other approaches are compared.
3. Experimental Design and Results
### 3.1 Experimental Design
In this section, the method by which the regression models are determined by the PSRs is explained, and then their respective experimental results are compared with OLS, multivariate adaptive regression splines (MARS), and SVR. In order to compare the OLS, MARS, SVR, and 3 PSRs, the four datasets shown in Table 3 were used. These data sets are usually used in machine learning area. Each data set has 10 train, valid, and test datasets, which are used for model training, model validation, and model testing, respectively. For optimal objectivity, various types of data were tested. Abalone had a larger number of data than the other datasets, and Triazines a larger number of independent variables.
A PSR, with its random search characteristics, provides different regression models over multiple runs with the same data. In order to find better regression models by the PSRs, five replication runs for each data set were performed. Five replication models were determined by each PSR with the test data. Then, the model providing the minimum MSE by valid data was selected as the regression model for the PSR. Finally, the selected model was tested by the test data. Ten terms (m = 10) were used for the PSR models in Eq. (1). The other parameters were pre-tested with data, as indicated in Table 3. Three numbers of particles were examined: 100, 200 and 300. The termination rule (TR) was also tested with 100, 200, and 300 particles. Two hundred particles and TR 200 were selected for a comparison experiment, since increasing them did not improve the overall MSEs. The R package version 3.4.0 was used for the PSR procedures, which were performed on an Intel Core i5-4590 CPU @ 3.30 GHz, 4 GB RAM.
### 3.2 Experimental Results
Table 4 compares the average MSEs and their average standard deviations for ten independent runs. For each method, the first row shows the average MSE, and the second row shows the average standard deviation. The bold values in Table 4 are the top 3 results for the 6 methods and test data. The 3 PSRs provided better test results than the other methods for the Abalone and Machine data sets. PSR1 and PSR2 provided better average MSEs for Abalone, Machine, and Triazines than OLS, MARS or SVR. SVR provided the best average MSE for the Auto data. These results show that the PSR methods provide competitive regression models.
Table 5 shows the test MSE error, (Method MSE − Best MSE)/Best MSE, between each method and the best method. PSR1 provided the best MSE for Abalone and Triazines, and PSR3 provided the best MSE for Machine. PSR1 was the best in terms of the average test MSE error. The second- and third-best methods were PSR2 and SVR, respectively. Tables 4 and 5 provide the evidence of the competitiveness of the PSR regression models.
Figure 2 represents the Table 5 data graphically. Figure 2(a) shows overall test MSE errors in Table 5. Figure 2(b) provides a zoom-in of the most notable area of Figure 2(a). Most of the methods provided smaller MSE errors for the Abalone dataset. OLS and MARS provided larger average errors than the others. Average MSE errors of PSR1 and PSR2 are smaller than SVR. All of the MSE errors of PSR1, PSR2, and PSR3 were less than 25%. Most significantly, all of the MSE errors of PSR2 were less than 20%. This result shows that PSR1 was the best method for minimization of average test MSE errors. However, PSR2 showed robustness with all of the data sets.
The advantage of the PSR models is their provision of a model description using independent variables. Tables 68 show PSR1, PSR2, and PSR3 model examples, respectively. With PSR1, the most valid MSE was shown at the first replication for the first run of Avalone (Table 6); with PSR2, the most valid MSE was shown at the third replication for the first run of Auto (Table 7), and with PSR3, the most valid MSE was shown at the third replication for the first run of Machine (Table 8). Each PSR model provided the minimum test MSE error. No model for Triazines is provided herein, due to the large number of independent variables.
Eq. (4) shows the model in Table 6 in mathematical form.
$y= -3.4941-15.5727·x10.3·x20.06·x30.02·x40.14·x50.11 ·x60.04·x7-0.11+3.4747·x10.15·x2-0.03·x3-0.17·x4-0.2 ·x6-0.1·x70.12-4.6892·x10.26·x2-0.18·x30.03·x40.01 ·x5-0.06·x6-0.05·x7-0.3-1.8090·x1-0.1·x2-0.29 ·x30.02·x4-0.05·x5-0.12·x6-0.07·x70.01+15.6752 ·x1-0.11·x20.3·x30.25·x4-0.01·x50.07·x6-0.03·x7-0.01 -3.8564·x1-0.03·x20.27·x40.07·x50.05·x6-0.33·x70.07 +7.0280·x2-0.02·x4-0.23·x50.21·x60.09·x7-0.37 +28.3323·x1-0.11·x2-0.02·x40.49·x5-0.27 ·x6-0.01·x70.04-6.8067·x1-0.1·x2-0.05·x30.13·x40.11 ·x50.22·x6-0.09·x7-0.16-14.3795·x1-0.37·x2-0.29 ·x4-0.01·x50.02·x60.2·x7-0.01.$
Tables 68 represents the mathematical model from PSR1, PSR2, and PSR3. Eq. (4) shows example mathematical model as a mathematical form.
The rij in Tables 68 are all between −1 and 1. All $∑j=1n|rijk|≤1$ in Tables 6 and 8. The regression models differ based on the PSR method. More than half of rij are bounded at 1 or −1 in Table 7 but not in Table 6. Some rij are zeros in Tables 68. The ratios of rij = 0 for all of the developed models are provided in Table 9. The ratio was calculated by the total number of rij = 0 over the total number of estimated rij. When the number of independent variables increased, the number of rij= 0 also increased. For Abalone, 3, 500 rij (7 independent variables ×10 terms ×5 replications ×10 independent data sets) were estimated for each PSR model. Totals of 3, 500, 3, 000, and 30, 000 rij were estimated for Auto, Machine, and Triazines, respectively, based on the number of independent variables.
PSR1 and PSR3 estimated more rij to be zero than PSR2, since they have more bounded constraints. For Abalone and Triazines, PSR1 was the best and PSR2 second best. Users can choose their model in consideration of minimum test MSE error (PSR1), robustness (PSR2), or ratio of rij = 0 (PSR1), according to their specific purpose. All of these models were shown to be competitive with existing methods (i.e., OLS, MARS, and SVR).
4. Conclusion
In this paper, we proposed new PSRs to solve the nonlinear regression problem. The PSR methods provide the signomial functions using the PSO method. The PSR regression models offer the benefit of explicit functions descriptions representing the nonlinearity of the given data without kernel tricks.
Also, with the PSR regression models, sparse signomial functions that can be used for prediction purposes as well as interpretation purposes can be obtained. The present experimental results revealed that this approach can detect a sparse signomial nonlinear function; indeed, PSR may be considered as a useful alternative when interpreting which original input variables are more important than others in an obtained regression model. The development of nonlinear sparse regression models using other metaheuristic optimization methods will be an important focus of future studies.
Conflict of Interest
Acknowledgements
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2017R1E1A1A01077375).
Figures
Fig. 1.
PSO procedure.
Fig. 2.
Test MSE errors: (a) overall test MSE errors in Table and (b) a zoom-in of the most notable area of Figure 2(a).
TABLES
### Table 1
Pseudo-code for initialization
Pseudo-code
Particle Step 1: Set $rijk=0$ for ∀ i, j. Set i = 1, S = {1, 2, 3, …, n}.Step 2: Randomly select q in S.Step 3: Generate uniform $[-1+∑∀j|rijk|,1-∑∀j|rijk|]$ and round up two decimal places and assign it to $riqk$.Step 4: S = S−{q}. If S ≠ Ø, go to Step 2.Step 5: If i < m, i = i + 1, set S = {1, 2, 3, …, n} and go to Step 2.
Velocity Generate uniform [−0.01, 0.01] and assign it to $vijk$ for ∀i, j
### Table 2
Three infeasible-particle update rules
Method Update rule
PSR1 If $∑j=1n|rijk|>1$, set $rpqk=0$, where $(p,q)=arg∀(p,q),|rpqk|>0min|rpqk|$, until $∑j=1n|rijk|≤1$.
PSR2 If $rijk>1$, set $rijk=1$ or if $rijk<-1$, set $rijk=-1$.
PSR3 Add a penalty to MSE when $∑j=1n|rijk|>1$ for any k.
### Table 3
Parameters for each dataset
Dataset Number of independent variables (n) Number of independent runs Number of observations
Train Valid Test
Abalone 7 10 2,507 835 304
Auto 7 10 304 101 101
Machine 6 10 125 42 42
Triazines 60 10 112 37 37
### Table 4
Average MSEs and their standard deviations for each method and dataset
Method Dataset
Abalone Auto Machine Triazines
Train Valid Test Train Valid Test Train Valid Test Train Valid Test
OLS MSE 4.90 4.93 5.11 10.24 11.82 12.57 2854 8388 6215 0.011 0.042 0.058
SD 0.10 0.29 0.41 0.51 1.63 1.57 846 6467 4488 0.001 0.009 0.043
MARS MSE 4.45 4.53 4.66 6.97 8.72 9.36 1476 11212 10710 0.013 0.026 0.023
SD 0.09 0.23 0.28 0.55 0.95 1.93 446 12416 15231 0.002 0.005 0.005
SVR MSE 4.28 4.46 4.67 4.11 7.50 7.19 1156 5600 5092 0.013 0.023 0.023
SD 0.13 0.26 0.36 0.46 1.36 1.24 250 4409 4976 0.001 0.006 0.006
PSR1 MSE 4.42 4.38 4.62 6.77 8.05 8.93 1324 5625 4231 0.013 0.021 0.019
SD 0.12 0.22 0.31 0.43 1.05 1.75 243 4688 3795 0.001 0.004 0.004
PSR2 MSE 4.36 4.38 4.63 6.42 7.88 8.19 680 3507 4656 0.012 0.020 0.020
SD 0.13 0.23 0.32 0.53 1.13 1.48 204 2585 4204 0.001 0.003 0.006
PSR3 MSE 4.43 4.39 4.66 7.06 7.92 8.89 1676 4484 4039 0.015 0.022 0.026
SD 0.13 0.21 0.37 0.21 1.29 1.61 446 3550 2904 0.001 0.004 0.016
### Table 5
Test MSE error
Method Abalone Auto Machine Triazines Average
OLS 0.105856 0.748477 0.538925 1.969327 0.840646
MARS 0.008258 0.301676 1.651818 0.187167 0.537230
SVR 0.009596 0 0.260881 0.155416 0.106473
PSR1 0 0.241871 0.047562 0 0.072358
PSR2 0.002454 0.139036 0.152940 0.047548 0.085495
PSR3 0.007278 0.237266 0 0.340737 0.146320
MSE error = (Method MSE – Best MSE) / Best MSE.
### Table 6
PSR1 model for first dataset of Avalone
Term index (i) Coefficient (βi) rij
Independent variable index (j)
1 2 3 4 5 6 7
1 −15.5727 0.3 0.06 0.02 0.14 0.11 0.04 −0.11
2 3.4747 0.15 −0.03 −0.17 −0.2 0 −0.1 0.12
3 −4.6892 0.26 −0.18 0.03 0.01 −0.06 0.05 −0.3
4 −1.8090 −0.1 −0.29 0.02 −0.05 −0.12 −0.07 0.01
5 15.6752 −0.11 −0.3 0.25 −0.01 0.07 −0.03 −0.01
6 −3.8564 −0.03 0.27 0 0.07 0.05 −0.33 0.07
7 7.0208 0 −0.02 0 −0.23 0.21 0.09 −0.37
8 28.3323 −0.11 −0.02 0 0.49 −0.27 −0.01 0.04
9 −6.8067 −0.1 −0.05 0.13 0.11 0.22 −0.09 −0.16
10 −14.3795 −0.37 −0.29 0 −0.01 0.02 0.2 −0.01
0 −3.4941 -
### Table 7
PSR2 model for first dataset of Auto
Term index (i) Coefficient (βi) rij
Independent variable index (j)
1 2 3 4 5 6 7
1 5.64E+11 −0.96 −1 −1 −1 −1 −1 −1
2 −140.3110 −0.29 −0.61 −1 1 1 −1 −0.59
3 0.08789 −0.1 −1 −1 1 1 1 0
4 −0.0291 1 0.44 0.73 −0.41 1 −0.36 0.8
5 4.31E-06 1 −0.91 0.81 0.76 1 1 −0.75
6 0.0851 0.79 −0.7 −1 1 −0.61 1 0.85
7 3946841 −1 −0.5 −0.55 −1 −1 1 0.88
8 0.0008 1 1 0.54 0.04 −0.35 −0.51 1
9 −4714388 −1 −1 −1 −0.38 −1 0.81 1
10 2.9175 −0.42 1 −0.45 −1 0.99 1 −1
0 8.5401 -
### Table 8
PSR3 model for first dataset of Machine
Term index (i) Coefficient (βi) rij
Independent variable index (j)
1 2 3 4 5 6
1 −1145.7200 −0.73 0 0 0.16 0 0.01
2 −318.298 −0.01 −0.4 −0.02 0.19 0.01 0.33
3 0.1497 0.83 0.02 −0.01 0.02 0.02 0.05
4 0.2105 0.01 0.05 −0.01 −0.07 0.65 0.18
5 0.7012 0.16 0.71 −0.01 0.02 −0.01 0
6 1679.2390 0.05 −0.17 −0.22 −0.23 −0.02 −0.29
7 −1.8570 0.32 0.52 −0.02 0.07 0.03 −0.01
8 0.4133 0.47 0.06 0.32 −0.01 0.03 −0.06
9 307.1087 −0.36 −0.04 −0.01 0.28 0.01 0.27
10 −28.4306 −0.23 0.23 0 −0.25 0.1 0.08
0 26.2379 -
### Table 9
Ratio of rij = 0 (total number of rij = 0)
Method Abalone Auto Machine Triazines
PSR1 0.294 (1,029) 0.611 (2,140) 0.527 (1,582) 0.952 (2,8571)
PSR2 0.012 (41) 0.034 (115) 0.018 (54) 0.467 (14,009)
PSR3 0.119 (415) 0.192 (671) 0.113 (338) 0.906 (27,181)
References
1. Drucker, H, Burges, CJ, Kaufman, L, Smola, AJ, and Vapnik, V (1996). Support vector regression machines. Advances in Neural Information Processing Systems. 9, 155-161.
2. Maranas, CD, and Floudas, CA (1997). Global optimization in generalized geometric programming. Computers & Chemical Engineering. 21, 351-369. https://doi.org/10.1016/S0098-1354(96)00282-7
3. Kennedy, J, and Eberhart, R 1995. Particle swarm optimization (PSO)., Proceedings of IEEE International Conference on Neural Networks, Perth, Australia, Array, pp.1942-1948. https://doi.org/10.1109/ICNN.1995.488968
4. Bratton, D, and Kennedy, J 2007. Defining a standard for particle swarm optimization., Proceedings of 2007 IEEE Swarm Intelligence Symposium, Honolulu, HI, Array, pp.120-127. https://doi.org/10.1109/SIS.2007.368035
5. Wang, KP, Huang, L, Zhou, CG, and Pang, W 2003. Particle swarm optimization for traveling salesman problem., Proceedings of the 2003 international Conference on Machine Learning and Cybernetics (IEEE cat no 03ex693), Xian, China, Array, pp.1583-1585. https://doi.org/10.1109/ICMLC.2003.1259748
6. Akjiratikarl, C, Yenradee, P, and Drake, PR (2007). PSO-based algorithm for home care worker scheduling in the UK. Computers & Industrial Engineering. 53, 559-583. https://doi.org/10.1016/j.cie.2007.06.002
7. Schwaab, M, Biscaia, EC, Monteiro, JL, and Pinto, JC (2008). Nonlinear parameter estimation through particle swarm optimization. Chemical Engineering Science. 63, 1542-1552. https://doi.org/10.1016/j.ces.2007.11.024
8. Bao, Y, Hu, Z, and Xiong, T (2013). A PSO and pattern search based memetic algorithm for SVMs parameters optimization. Neurocomputing. 117. https://doi.org/10.1016/j.neucom.2013.01.027
9. Wang, D, and Wu, X (2008). The Optimization of SVM Parameters Based on PSO. Computer Applications. 28, 134-139.
10. Yuan, X, and Liu, A (2007). The research of SVM parameter selection based on PSO algorithm. Techniques of Automation and Application. 26, 5-8.
Biographies
SeJoon Park received the Industrial Engineering Master and Ph.D. degrees from University of Florida in 2005 and the Ph.D. from Oregon State University in 2012, respectively. Currently, he is an Assistant Professor, Division of Energy Resources Engineering and Industrial Engineering at Kangwon National University. His research interests include big data analysis and optimization.
E-mail: sonmupsj@kangwon.ac.kr
NagYoon Song received his M.S. degree in Industrial and Management Engineering from Myongji University in 2016. His research interests anomaly detection, explainable artificial intelligence (XAI) and architecture search based deep learning.
E-mail: nysong@mju.ac.kr
Wooyeon Yu is currently a professor of Industrial and Management Engineering at Myongji University. He received his Ph.D. degree in the Department of Industrial and Manufacturing Systems Engineering from Iowa State University. Before he joined Myongji University, he worked as a senior consultant at Samsung SDS. His primary research interests are in material handling, logistics and supply chain management.
E-mail: wyyuie@mju.ac.kr
Dohyun Kim received his M.S. and Ph.D. degrees in industrial engineering from KAIST, Korea, in 2002 and 2007, respectively. Currently, he is an Associate Professor with the Department of Industrial and Management Engineering, Myongji University. His research interests include statistical data mining, deep learning and graph data analysis.
E-mail: ftgog@mju.ac.kr
December 2019, 19 (4)
|
2020-01-20 18:56:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 37, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41632625460624695, "perplexity": 1996.116741322911}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250599718.13/warc/CC-MAIN-20200120165335-20200120194335-00453.warc.gz"}
|
https://www.queryoverflow.gdn/query/creating-csv-simple-double-header-24_504125.html
|
# creating csv simple double header
by KZ123 Last Updated August 13, 2019 21:23 PM
I am trying to include to first two rows as headers. I would like to have the units - - m m m etc. on every page and another line below.
I am not an advanced user of csvsimple, and I have not been unable to find answers in the manual.
First page looks like this:
And the second:
I have already sorted out the problem of having latex symbols such as \delta.
Also, the other problem I am facing is that I cannot get the header to display on every page.
\begin{longtable}{lllllllllll} \caption{Database of Ropax Ships.}\\ \toprule \csvreader[ head=false, late after line=\\, filter equal={\thecsvinputline}{1}, ]{data_appendix.csv}{}{\csvcoli & \csvcolii & \csvcoliii & \csvcoliv & \csvcolv & \csvcolvi & \csvcolvii & \csvcolviii & \csvcolix} \midrule \endfirsthead \toprule \csvreader[ head=false, late after line=\\, filter equal={\thecsvinputline}{1}, ]{data_appendix.csv}{}{\csvcoli & \csvcolii & \csvcoliii & \csvcoliv & \csvcolv & \csvcolvi & \csvcolvii & \csvcolviii & \csvcolix} \midrule \endhead \midrule \endfoot \bottomrule \endlastfoot \csvreader[ head=false, late after line=\\, filter not equal={\thecsvinputline}{1}, ]{data_appendix.csv}{}{\csvcoli & \csvcolii & \csvcoliii & \csvcoliv & \csvcolv & \csvcolvi & \csvcolvii & \csvcolviii & \csvcolix} \end{longtable}
I would really appreciate some help.
Tags :
## Related Questions
Updated April 05, 2017 19:23 PM
Updated September 14, 2017 17:23 PM
Updated February 19, 2018 20:23 PM
Updated July 23, 2019 14:23 PM
Updated May 05, 2015 21:10 PM
|
2019-08-20 00:13:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3605143427848816, "perplexity": 12323.39243647463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315132.71/warc/CC-MAIN-20190819221806-20190820003806-00517.warc.gz"}
|
https://etheses.bham.ac.uk/id/eprint/1564/
|
# Automated planning for hydrothermal vent prospecting using AUVs
Saigol, Zeyn A. (2011). Automated planning for hydrothermal vent prospecting using AUVs. University of Birmingham. Ph.D.
Preview
Saigol11PhD.pdf
PDF
## Abstract
This thesis presents two families of novel algorithms for automated planning under uncertainty. It focuses on the domain of searching the ocean floor for hydrothermal vents, using autonomous underwater vehicles (AUVs). This is a hard problem because the AUV's sensors cannot directly measure the range or bearing to vents, but instead detecting the plume from a vent indicates the source vent lies somewhere up-current, within a relatively large swathe of the search area. An unknown number of vents may be located anywhere in the search area, giving rise to a problem that is naturally formulated as a partially-observable Markov decision process (POMDP), but with a very large state space (of the order of 10$$^{123}$$ states). This size of problem is intractable for current POMDP solvers, so instead heuristic solutions were sought. The problem is one of chemical plume tracing, which can be solved using simple reactive algorithms for a single chemical source, but the potential for multiple sources makes a more principled approach desirable for this domain. This thesis presents several novel planning methods, which all rely on an existing occupancy grid mapping algorithm to infer vent location probabilities from observations. The novel algorithms are information lookahead and expected-entropy-change planners, together with an orienteering problem (OP) correction that can be used with either planner. Information lookahead applies online POMDP methods to the problem, and was found to be effective in locating vents even with small lookahead values. The best of the entropy-based algorithms was one that attempts to maximise the expected change in entropy for all cells along a path, where the path is found using an OP solver. This expected-entropy-change algorithm was at least as effective as the information-lookahead approach, and with slightly better computational efficiency.
Type of Work: Thesis (Doctorates > Ph.D.)
Award Type: Doctorates > Ph.D.
Supervisor(s):
Supervisor(s)EmailORCID
Dearden, Richard WUNSPECIFIEDUNSPECIFIED
Wyatt, JeremyUNSPECIFIEDUNSPECIFIED
Licence:
College/Faculty: Colleges (2008 onwards) > College of Engineering & Physical Sciences
School or Department: School of Computer Science
Funders: None/not applicable
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
URI: http://etheses.bham.ac.uk/id/eprint/1564
### Actions
Request a Correction View Item
|
2019-03-22 12:36:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.477400004863739, "perplexity": 2938.6127163235174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202658.65/warc/CC-MAIN-20190322115048-20190322141048-00218.warc.gz"}
|
http://jkms.kms.or.kr/journal/view.html?uid=8291
|
- Current Issue - Ahead of Print Articles - All Issues - Search - Open Access - Information for Authors - Downloads - Guideline - Regulations ㆍPaper Submission ㆍPaper Reviewing ㆍPublication and Distribution - Code of Ethics - For Authors ㆍOnline Submission ㆍMy Manuscript - For Reviewers - For Editors
On conformally flat polynomial $(\alpha,\beta)$-metrics with weakly isotropic scalar curvature J. Korean Math. Soc. 2019 Vol. 56, No. 2, 329-352 Published online 2019 Mar 01 Bin Chen, KaiWen Xia Tongji University; Tongji University Abstract : In this paper, we study conformally flat ($\alpha,\beta$)-metrics in the form $F=\alpha(1+\sum_{j=1}^m a_j(\frac{\beta}{\alpha})^j)$ with $m\geq2$, where $\alpha$ is a Riemannian metric and $\beta$ is a 1-form on a smooth manifold $M$. We prove that if such conformally flat ($\alpha,\beta$)-metric $F$ is of weakly isotropic scalar curvature, then it must has zero scalar curvature. Moreover, if $a_{m-1} a_m\neq0$, then such metric is either locally Minkowskian or Riemannian. Keywords : ($\alpha,\beta$)-metric, conformally flat, weakly isotropic scalar curvature MSC numbers : 53B40, 53C60 Full-Text :
|
2019-03-20 01:14:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41715386509895325, "perplexity": 2323.6198006989293}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202188.9/warc/CC-MAIN-20190320004046-20190320030046-00290.warc.gz"}
|
https://www.physicsforums.com/threads/the-electrostatic-potential.16200/
|
# The electrostatic potential
1. Mar 13, 2004
### burak_ilhan
1}If the electric field is zero in some region, must the potential also be zero?An example?
2}Give an example of a conductor that is not an equipotential.Is this conductor in electrostatic equilibrium?
3)If a high voltage cable falls on top of your automobile, will you probably be safest if you remain inside the automobile.Why?
4)If we surroun some region with a conducting surface, we shield it from external electric fields. Why can we not shield a region gravitational field by a similar method?
Its clear that these questions need no mathematical work,i'd like to discuss them with forum members
Last edited: Mar 13, 2004
2. Mar 13, 2004
### Palpatine
You gave the answer to 3 in 4.
3. Mar 13, 2004
### burak_ilhan
yes but a car is not compeletely closed with metal(windows).And what will happen if you open a door and try to get out?
4. Mar 13, 2004
### cookiemonster
1) No, it just means that the potential isn't changing. Remember that potential is a totally relative quantity, anyway. You can define the potential relative to 0V or relative to 50,000,000V if it floats your boat.
2) Here's half of it: a conductor in an electrostatic arrangement will also be equipotential. There was actually a nice little discussion about it in a different thread, https://www.physicsforums.com/showthread.php?s=&threadid=15931&perpage=15&pagenumber=1
3) Oh yes. If that does happen, don't get out and don't touch the ground! Remember that your car is insulated from the ground by your tires (unless your exhaust or something is dragging , like my poor car is apt to do), so it's just going to hold all that charge. If you, a pretty nice conductor, decide to try to step out of the car, you're going to be a path for that charge the car is holding to travel through to the ground. I don't know about you, but I'd rather not be a make-shift powerline.
4) There's no negative mass like there is negative charge.
cookiemonster
5. Mar 13, 2004
### Palpatine
The car is not insulated from the ground by the tires because the electric field of the lightning bolt is strong enough to cause the air surrounding the car to undergo eletrical breakdown. In other words the charge gets conducted through the air to the ground.
6. Mar 14, 2004
### burak_ilhan
the absence of electrostatic fields in closed conducting cavities is proved by using the idea that the electrostatic field as a conservative field:
$$0=\oint E.dl$$
this equation implies that a field line can never form a closed loop.
In my opinion this cannot be applied to gravitational fields because gravitational field lines can form a closed loop.Is that correct?If it is anyone knows how to explain this mathematically?
7. Mar 14, 2004
### kuengb
They can't. The line integral formula you wrote down means that a field is conservative, as you said. Well a grav. field definitely is!
My response to 2:
A conductor that is not in equipotential "contains" an el. field, the result is a current, so it isn't an electrostatic situation. An example for this is the cable that sends the signal to your screen right now. But as long as we have electrostatic equilibrium in a conductor, we have an equipotential.
Know someone interested in this topic? Share this thread via Reddit, Google+, Twitter, or Facebook
Have something to add?
|
2017-02-28 12:26:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6179322600364685, "perplexity": 1129.1665620194028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174159.38/warc/CC-MAIN-20170219104614-00093-ip-10-171-10-108.ec2.internal.warc.gz"}
|
http://lnu.diva-portal.org/smash/resultList.jsf?af=%5B%5D&aq=%5B%5B%7B%22categoryId%22%3A%2211524%22%7D%5D%5D&aqe=%5B%5D&aq2=%5B%5B%5D%5D&language=en&query=
|
lnu.sePublications
Change search
Refine search result
12 1 - 50 of 82
Cite
Citation style
• apa
• harvard1
• ieee
• modern-language-association-8th-edition
• vancouver
• Other style
More styles
Language
• de-DE
• en-GB
• en-US
• fi-FI
• nn-NO
• nn-NB
• sv-SE
• Other locale
More languages
Output format
• html
• text
• asciidoc
• rtf
Rows per page
• 5
• 10
• 20
• 50
• 100
• 250
Sort
• Standard (Relevance)
• Author A-Ö
• Author Ö-A
• Title A-Ö
• Title Ö-A
• Publication type A-Ö
• Publication type Ö-A
• Issued (Oldest first)
• Created (Oldest first)
• Last updated (Oldest first)
• Standard (Relevance)
• Author A-Ö
• Author Ö-A
• Title A-Ö
• Title Ö-A
• Publication type A-Ö
• Publication type Ö-A
• Issued (Oldest first)
• Created (Oldest first)
• Last updated (Oldest first)
Select
The maximal number of hits you can export is 250. When you want to export more records please use the 'Create feeds' function.
• 1.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Effects of electron-electron interaction in pristine and doped graphene2014Independent thesis Advanced level (degree of Master (Two Years)), 30 credits / 45 HE creditsStudent thesis
The goal of this master thesis is to investigate the eect of electron-electron interaction on electronic properties of graphene that can be measured experimentally. A tight-binding model, which includes up to next-nearest-neighbor hopping, with parameters tted to density functional theory calculations, has been used to describe the electronic structure of graphene. The electron-electron interaction is described by the Hubbard model using a mean- eld approximation. Based on the analysis of dierent tight-binding models available in the literature, we conclude that a next-nearest-neighbor tight-binding model is in better agreement with density functional theory calculations, especially for the linear dispersion around the Dirac point. The Fermi velocity in this case is very close to the experimental value, which was measured by using a variety of techniques. Interaction-induced modi cations of the linear dispersion around the Dirac point have been obtained. Unlike the non-local Hartree-Fock calculations, which take into account the long-range electron-electron interaction and yield logarithmic corrections, in agreement with experiment, we found only linear modi cations of the Fermi velocity. The reasons why one cannot obtain logarithmic corrections using the mean- eld Hubbard model have been discussed in detail. The remaining part of the thesis is focused on calculations of the local density of states around a single substitutional impurity in graphene. This quantity can be directly compared to the results of the scanning tunneling microscopy in doped graphene. We compare explicitly non-interacting and interacting cases. In the latter case, we performed self-consistent calculations, and found that electron-electron interaction has a signi cant eect on the local density of states. Furthermore, the band gap at high-symmetry points of the Brillouin zone of a supercell, triggered by the impurity, is modi ed by interactions. We use a perturbative model to explain this eect and quantitative agreement with numerical results. In conclusion, it is expected that the long-range electron-electron nteraction is extremely strong and important in graphene. However, as this thesis has shown, interactions at the level of the Hubbard model and mean- eld approximation also introduce corrections to the electronic properties of graphene.
• 2.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Effects of short-range electron-electron interactions in doped graphene2015In: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol. 92, no 15, 155420Article in journal (Refereed)
We study theoretically the effects of short-range electron-electron interactions on the electronic structure of graphene, in the presence of substitutional impurities. Our computational approach is based on the π orbital tight-binding model for graphene, with the electron-electron interactions treated self-consistently at the level of the mean-field Hubbard model. The finite impurity concentration is modeled using the supercell approach. We compare explicitly noninteracting and interacting cases with varying interaction strength and impurity potential strength. We focus in particular on the interaction-induced modifications in the local density of states around the impurity, which is a quantity that can be directly probed by scanning tunneling spectroscopy of doped graphene. We find that the resonant character of the impurity states near the Fermi level is enhanced by the interactions. Furthermore, the size of the energy gap, which opens up at high-symmetry points of the Brillouin zone of the supercell upon doping, is significantly affected by the interactions. The details of this effect depend subtly on the supercell geometry. We use a perturbative model to explain these features and find quantitative agreement with numerical results.
• 3.
National and Kapodistrian University of Athens, Greece.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
The eigenvalue problem for a bianisotropic cavity2013In: PIERS proceedings 2013: Progress in Electromagnetics Research Symposium, Electromagnetics Acad , 2013, 858-862 p.Conference paper (Refereed)
We discuss the eigenvalue problem for a perfectly conducting bianisotropic cavity. We formulate the corresponding mathematical problem and we give a characterization of the eigenelements (non-zero eigenfrequencies and modes) via a perturbation argument involving the eigenelements of the hollow cavity.
• 4.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Quantum Holonomy for Many-Body Systems and Quantum Computation2013Doctoral thesis, comprehensive summary (Other academic)
The research of this Ph. D. thesis is in the field of Quantum Computation and Quantum
Information. A key problem in this field is the fragile nature of quantum states. This
becomes increasingly acute when the number of quantum bits (qubits) grows in order to
perform large quantum computations. It has been proposed that geometric (Berry) phases
may be a useful tool to overcome this problem, because of the inherent robustness of such
phases to random noise. In the thesis we investigate geometric phases and quantum
holonomies (matrix-valued geometric phases) in many-body quantum systems, and elucidate
the relationship between these phases and the quantum correlations present in the systems.
An overall goal of the project is to assess the feasibility of using geometric phases and
quantum holonomies to build robust quantum gates, and investigate their behavior when the
size of a quantum system grows, thereby gaining insights into large-scale quantum
computation.
In a first project we study the Uhlmann holonomy of quantum states for hydrogen-like
atoms. We try to get into a physical interpretation of this geometric concept by analyzing its
relation with quantum correlations in the system, as well as by comparing it with different
types of geometric phases such as the standard pure state geometric phase, Wilczek-Zee
holonomy, Lévay geometric phase and mixed-state geometric phases. In a second project we
establish a unifying connection between the geometric phase and the geometric measure of
entanglement in a generic many-body system, which provides a universal approach to the
study of quantum critical phenomena. This approach can be tested experimentally in an
interferometry setup, where the geometric measure of entanglement yields the visibility of
the interference fringes, whereas the geometric phase describes the phase shifts. In a third
project we propose a scheme to implement universal non-adiabatic holonomic quantum
gates, which can be realized in novel nano-engineered systems such as quantum dots,
molecular magnets, optical lattices and topological insulators. In a fourth project we propose
an experimentally feasible approach based on “orange slice” shaped paths to realize non-
Abelian geometric phases, which can be used particularly for geometric manipulation of
qubits. Finally, we provide a physical setting for realizing non-Abelian off-diagonal
geometric phases. The proposed setting can be implemented in a cyclic chain of four qubits
with controllable nearest-neighbor interactions. Our proposal seems to be within reach in
various nano-engineered systems and therefore opens up for first experimental test of the
non-Abelian off-diagonal geometric phase.
• 5.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Department of Quantum Chemistry, Uppsala University, Box 518, Se-751 20 Uppsala, Sweden.
Unifying geometric entanglement and geometric phase in a quantum phase transition2013In: Physical Review A. Atomic, Molecular, and Optical Physics, ISSN 1050-2947, E-ISSN 1094-1622, Vol. 88, no 1, Article ID: 012310- p.Article in journal (Refereed)
Geometric measure of entanglement and geometric phase have recently been used to analyze quantum phase transition in the XY spin chain. We unify these two approaches by showing that the geometric entanglement and the geometric phase are respectively the real and imaginary parts of a complex-valued geometric entanglement, which can be investigated in typical quantum interferometry experiments. We argue that the singular behavior of the complex-valued geometric entanglement at a quantum critical point is a characteristic of any quantum phase transition, by showing that the underlying mechanism is the occurrence of level crossings associated with the underlying Hamiltonian.
• 6.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics. Department of Quantum Chemistry, Uppsala University, Box 518, Se-751 20 Uppsala, Sweden.
Universal Non-adiabatic Holonomic Gates in Quantum Dots and Single-Molecule MagnetsManuscript (preprint) (Other academic)
Geometric manipulation of a quantum system offers a method for fast, universal, and robust quantum information processing. Here, we propose a scheme for universal all-geometric quantum computation using non-adiabatic quantum holonomies. We propose three different realizations of the scheme based on an unconventional use of quantum dot and single-molecule magnet devices,which offer promising scalability and robust efficiency.
• 7.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Uppsala University ; National University of Singapore, Singapore.
Universal Non-adiabatic Holonomic Gates in Quantum Dots and Single-Molecule Magnets2014In: New Journal of Physics, ISSN 1367-2630, E-ISSN 1367-2630, Vol. 16, 013029Article in journal (Refereed)
Geometric manipulation of a quantum system offers a method for fast, universal, and robust quantum information processing. Here, we propose a scheme for universal all-geometric quantum computation using non-adiabatic quantum holonomies. We propose three different realizations of the scheme based on an unconventional use of quantum dot and single-molecule magnet devices,which offer promising scalability and robust efficiency.
• 8.
Chalmers University of Technology.
Chalmers University of Technology. Universität Konstanz, Germany. Chalmers University of Technology.
Basic theory of electron transport through molecular contacts2015In: Handbook of Single Molecule Electronics / [ed] K. Moth-Poulsen, Pan Stanford Publishing, 2015, 31-78 p.Chapter in book (Refereed)
• 9.
Photonics and Semiconductor Nanophysics, Department of Applied Physics, Eindhoven University of Technology, P. O. Box 513, NL-5600 MB Eindhoven, The Netherlands.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. London Center for NanoTechnology, 17-19 Gordon Street, WC1H 0AH, London, U.K.. 5Department of Physics, University of New Hampshire, Durham, New Hampshire 03824-3520, USA. London Center for NanoTechnology, 17-19 Gordon Street, WC1H 0AH, London, U.K.. London Center for NanoTechnology, 17-19 Gordon Street, WC1H 0AH, London, U.K.. Department of Physics and Astronomy, University of Iowa, Iowa City, Iowa 52242-1479,U.S.A.. Photonics and Semiconductor Nanophysics, Department of Applied Physics, Eindhoven University of Technology, P. O. Box 513, NL-5600 MB Eindhoven, The Netherlands. Department of Chemistry, UCL, London, WC1H 0AJ, United Kingdom. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Photonics and Semiconductor Nanophysics, Department of Applied Physics, Eindhoven University of Technology, P. O. Box 513, NL-5600 MB Eindhoven, The Netherlands.
Magnetic anisotropy of single Mn acceptors in GaAs in an external magnetic field2013In: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol. 88, Article ID: 205203- p.Article in journal (Refereed)
We investigate the effect of an external magnetic field on the physical properties of the acceptor hole statesassociated with single Mn acceptors placed near the (110) surface of GaAs. Cross-sectional scanning tunnelingmicroscopy images of the acceptor local density of states (LDOS) show that the strongly anisotropic hole wavefunction is not significantly affected by a magnetic field up to 6 T. These experimental results are supported bytheoretical calculations based on a tight-binding model of Mn acceptors in GaAs. For Mn acceptors on the (110)surface and the subsurfaces immediately underneath, we find that an applied magnetic field modifies significantlythe magnetic anisotropy landscape. However, the acceptor hole wave function is strongly localized around theMn and the LDOS is quite independent of the direction of the Mn magnetic moment. On the other hand, for Mnacceptors placed on deeper layers below the surface, the acceptor hole wave function is more delocalized andthe corresponding LDOS is much more sensitive on the direction of the Mn magnetic moment. However, themagnetic anisotropy energy for these magnetic impurities is large (up to 15 meV), and a magnetic field of 10 Tcan hardly change the landscape and rotate the direction of the Mn magnetic moment away from its easy axis.We predict that substantially larger magnetic fields are required to observe a significant field dependence of thetunneling current for impurities located several layers below the GaAs surface.
• 10.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
First-principle studies of tunneling transport in single-molecule magnets2010Conference paper (Refereed)
• 11.
Lund University.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Remarks on the mathematical solution of the hollow cavity eigenvalue problem2013In: Progress in Electromagnetics Research Symposium, Electromagnetics Acad. , 2013, 79-83 p.Conference paper (Refereed)
We discuss the eigenvalue problem for a perfectly conducting hollow cavity under a strict functional analytic point of view. We make use of a variant of the classical spectral theorem for compact selfadjoint operators and we pay extra attention on the null space of the Maxwell operator. We also discuss the corresponding inhomogeneous problem, where currents are present, even when they may depend on the fields.
• 12.
Universität Konstanz, Germany.
Universität Konstanz, Germany. RWTH Aachen, Germany. Universität Konstanz, Germany.
Spin transport and tunable Gilbert damping in a single-molecule magnet junction2013In: Physical Review B, ISSN 2469-9950, E-ISSN 2469-9969, Vol. 87, 045426Article in journal (Refereed)
We study time-dependent electronic and spin transport through an electronic level connected to two leads and coupled with a single-molecule magnet via exchange interaction. The molecular spin is treated as a classical variable and precesses around an external magnetic field. We derive expressions for charge and spin currents by means of the Keldysh nonequilibrium Green's functions technique in linear order with respect to the time-dependent magnetic field created by this precession. The coupling between the electronic spins and the magnetization dynamics of the molecule creates inelastic tunneling processes which contribute to the spin currents. The inelastic spin currents, in turn, generate a spin-transfer torque acting on the molecular spin. This back-action includes a contribution to the Gilbert damping and a modification of the precession frequency. The Gilbert damping coefficient can be controlled by the bias and gate voltages or via the external magnetic field and has a nonmonotonic dependence on the tunneling rates.
• 13.
University of Hamburg, Germany ; IBM Research-Zurich, Switzerland.
University of Hamburg, Germany. University of Hamburg, Germany. University of Hamburg, Germany. IFW Dresden, Germany. IFW Dresden, Germany. IFW Dresden, Germany. Lund University ; Halmstad University. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. IBM Research-Zurich, Switzerland. University of Hamburg, Germany ; IFW Dresden, Germany.
Local Magnetic Suppression of Topological Surface States in Bi2Te3 Nanowires2016In: ACS Nano, ISSN 1936-0851, E-ISSN 1936-086X, Vol. 10, no 7, 7180-7188 p.Article in journal (Refereed)
Locally induced, magnetic order on the surface of a topological insulator nanowire could enable room-temperature topological quantum devices. Here we report on the realization of selective magnetic control over topological surface states on a single facet of a rectangular Bi2Te3 nanowire via a magnetic insulating Fe3O4 substrate. Low-temperature magnetotransport studies provide evidence for local time-reversal symmetry breaking and for enhanced gapping of the interfacial 1D energy spectrum by perpendicular magnetic-field components, leaving the remaining nanowire facets unaffected. Our results open up great opportunities for development of dissipation-less electronics and spintronics.
• 14.
Warsaw University, Poland.
Warsaw University, Poland. Warsaw University, Poland. Warsaw University, Poland. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Lund University ; Polish Acad Sci, Poland . Polish Acad Sci, Poland. Warsaw University, Poland. Warsaw University, Poland. Polish Acad Sci, Poland ; University of Warsaw, Poland ; Tohoku University, Japan. Warsaw University, Poland.
Hydrostatic-pressure-induced changes of magnetic anisotropy in (Ga, Mn) As thin films2017In: Journal of Physics: Condensed Matter, ISSN 0953-8984, E-ISSN 1361-648X, Vol. 29, no 11, 115805Article in journal (Refereed)
The impact of hydrostatic pressure on magnetic anisotropy energies in (Ga, Mn) As thin films with in-plane and out-of-plane magnetic easy axes predefined by epitaxial strain was investigated. In both types of sample we observed a clear increase in both in-plane and out-of-plane anisotropy parameters with pressure. The out-of-plane anisotropy constant is well reproduced by the mean-field p-d Zener model; however, the changes in uniaxial anisotropy are much larger than expected in the Mn-Mn dimer scenario.
• 15.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
Electron transport in quantum point contacts: A theoretical study2011Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
Electron transport in mesoscopic systems, such as quantum point contacts and Aharonov-Bohm rings are investigated numerically in a tight-binding language with a recursive Green's function algorithm. The simulation reveals among other things the quantized nature of the conductance in point contacts, the Hall conductance, the decreasing sensitivity to scattering impurities in a magnetic field, and the periodic magnetoconductance in an Aharonov-Bohm ring. Furthermore, the probability density distributions for some different setups are mapped, making the transmission coefficients, the quantum Hall effect, and the cyclotron radius visible, where the latter indicates the correspondance between quantum mechanics and classical physics on the mesoscopic scale.
• 16.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Modeling of non-equilibrium scanning probe microscopy2015Licentiate thesis, comprehensive summary (Other academic)
The work in this thesis is basically divided into two related but separate investigations.
The first part treats simple chemical reactions of adsorbate molecules on metallic surfaces, induced by means of a scanning tunneling probe (STM). The investigation serves as a parameter free extension to existing theories. The theoretical framework is based on a combination of density functional theory (DFT) and non-equilibrium Green's functions (NEGF). Tunneling electrons that pass the adsorbate molecule are assumed to heat up the molecule, and excite vibrations that directly correspond to the reaction coordinate. The theory is demonstrated for an OD molecule adsorbed on a bridge site on a Cu(110) surface, and critically compared to the corresponding experimental results. Both reaction rates and pathways are deduced, opening up the understanding of energy transfer between different configurational geometries, and suggests a deeper insight, and ultimately a higher control of the behaviour of adsorbate molecules on surfaces.
The second part describes a method to calculate STM images in the low bias regime in order to overcome the limitations of localized orbital DFT in the weak coupling limit, i.e., for large vacuum gaps between a tip and the adsorbate molecule. The theory is based on Bardeen's approach to tunneling, where the orbitals computed by DFT are used together with the single-particle Green's function formalism, to accurately describe the orbitals far away from the surface/tip. In particular, the theory successfully reproduces the experimentally well-observed characteristic dip in the tunneling current for a carbon monoxide (CO) molecule adsorbed on a Cu(111) surface. Constant height/current STM images provide direct comparisons to experiments, and from the developed method further insights into elastic tunneling are gained.
• 17.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Toyama Univ, Japan. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Theory of vibrationally assisted tunneling for hydroxyl monomer flipping on Cu(110)2014In: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol. 90, no 16, Article ID: 165413- p.Article in journal (Refereed)
To describe vibrationally mediated configuration changes of adsorbates on surfaces we have developed a theory to calculate both reaction rates and pathways. The method uses the T-matrix to describe excitations of vibrational states by the electrons of the substrate, adsorbate, and tunneling electrons from a scanning tunneling probe. In addition to reaction rates, the theory also provides the reaction pathways by going beyond the harmonic approximation and using the full potential energy surface of the adsorbate which contains local minima corresponding to the adsorbates different configurations. To describe the theory, we reproduce the experimental results in [T. Kumagai et al., Phys. Rev. B 79, 035423 (2009)], where the hydrogen/deuterium atom of an adsorbed hydroxyl (OH/OD) exhibits back and forth flipping between two equivalent configurations on a Cu(110) surface at T = 6 K. We estimate the potential energy surface and the reaction barrier, similar to 160 meV, from DFT calculations. The calculated flipping processes arise from (i) at low bias, tunneling of the hydrogen through the barrier, (ii) intermediate bias, tunneling electrons excite the vibrations increasing the reaction rate although over the barrier processes are rare, and (iii) higher bias, overtone excitations increase the reaction rate further.
• 18.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
Exact Diagonalization of Few-electron Quantum Dots2009Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
We consider a system of few electrons trapped in a two-dimensional circularquantum dot with harmonic confinement and in the presence of ahomogeneous magnetic field, with focus on the role of e-e interaction. Byperforming the exact diagonalization of the Hamiltonian in second quantization,the low-lying energy levels for spin polarized system are obtained. The singlet-triplet oscillation in the ground state of the two-electron system showing up inthe result is explained due to the role of Coulomb interaction. The splitting ofthe lowest Landau level is another effect of the e-e interaction, which is alsoobserved in the results.
• 19. Holmqvist, Cecilia
Spin-singlet and spin-triplet superconducting correlations in Josephson junctions2010In: The Linnaeus Summer School in Quantum Engineering, Hindås (2010), The Linnaeus Summer School in Quantum Engineering, Hindås (2010) , 2010Conference paper (Refereed)
• 20.
Universität Konstanz, Germany.
Universität Konstanz, Germany. Chalmers University of Technology.
Spin-precession-assisted supercurrent in a superconducting quantum point contact coupled to a single-molecule magnet2012In: Physical Review B, ISSN 2469-9950, E-ISSN 2469-9969, Vol. 86, 054519Article in journal (Refereed)
The supercurrent through a quantum point contact coupled to a nanomagnet strongly depends on the dynamics of the nanomagnet's spin. We employ a fully microscopic model to calculate the transport properties of a junction coupled to a spin whose dynamics is modeled as Larmor precession brought about by an external magnetic field and find that the dynamics affects the charge and spin currents by inducing transitions between the continuum states outside the superconducting gap region and the Andreev levels. This redistribution of the quasiparticles leads to a nonequilibrium population of the Andreev levels and an enhancement of the supercurrent which is visible as a modified current-phase relation as well as a nonmonotonous critical current as function of temperature. The nonmonotonous behavior is accompanied by a corresponding change in spin-transfer torques acting on the precessing spin and leads to the possibility of using temperature as a means to tune the back-action on the spin.
• 21.
Université Joseph Fourier, France ; Chalmers University of Technology.
Université Joseph Fourier, France. Université Joseph Fourier, France ; Université de la Méditerranée, France.
Emergence of a negative charging energy in a metallic dot capacitively coupled to a superconducting island2008In: Physical Review B, ISSN 2469-9950, E-ISSN 2469-9969, Vol. 77, 054517Article in journal (Refereed)
• 22.
Universit ̈ at Konstanz, Germany.
Chalmers University of Technology. Universit ̈ at Konstanz, Germany.
Spin-polarized Shapiro steps and spin-precession-assisted multiple Andreev reflection2014In: Physical Review B, ISSN 2469-9950, E-ISSN 2469-9969, Vol. 90, 014516Article in journal (Refereed)
We investigate the charge and spin transport of a voltage-biased superconducting point contact coupled toa nanomagnet. The magnetization of the nanomagnet is assumed to precess with the Larmor frequencyωLwhen exposed to ferromagnetic resonance conditions. The Larmor precession locally breaks the spin-rotationsymmetry of the quasiparticle scattering and generates spin-polarized Shapiro steps for commensurate Josephsonand Larmor frequencies that lead to magnetization reversal. This interplay between the ac Josephson current andthe magnetization dynamics occurs at voltages|V|=ωL/2enforn=1,2,..., and the subharmonic steps withn>1 are a consequence of multiple Andreev reflection (MAR). Moreover, the spin-precession-assisted MARgenerates quasiparticle scattering amplitudes that, due to interference, lead to current-voltage characteristics ofthe dc charge and spin currents with subharmonic gap structures displaying an even-odd effect.
• 23.
Chalmers University of Technology.
Université Pierre et Marie Curie, France. Chalmers University of Technology.
Nonequilibrium effects in a Josephson junction coupled to a precessing spin2011In: Physical Review B, ISSN 2469-9950, E-ISSN 2469-9969, Vol. 83, 104521Article in journal (Refereed)
We present a theoretical study of a Josephson junction consisting of two s-wave superconducting leads coupled over a classical spin. When an external magnetic field is applied, the classical spin will precess with the Larmor frequency. This magnetically active interface results in a time-dependent boundary condition with different tunneling amplitudes for spin-up and spin-down quasiparticles and where the precession produces spin-flip scattering processes. We show that as a result, the Andreev states develop sidebands and a nonequilibrium population which depend on the precession frequency and the angle between the classical spin and the external magnetic field. The Andreev states lead to a steady-state Josephson current whose current-phase relation could be used for characterizing the precessing spin. In addition to the charge transport, a magnetization current is also generated. This spin current is time dependent and its polarization axis rotates with the same precession frequency as the classical spin.
• 24.
Chalmers University of Technology ; CNRS and Université Joseph Fourier, France.
CNRS and Université Joseph Fourier, France ; Universites Paris 6 et 7, France. INAC/SPSMS, France. CNRS and Université Joseph Fourier, France. Chalmers University of Technology.
Josephson current through a precessing spin2009In: Journal of Physics, Conference Series, ISSN 1742-6588, E-ISSN 1742-6596, Vol. 150, 022027Article in journal (Refereed)
A study of the dc Josephson current between two superconducting leads in thepresence of a precessing classical spin is presented. The precession gives rise to a time-dependenttunnel potential which not only creates different tunneling probabilities for spin-up and spin-down quasiparticles, but also introduces a time-dependent spin-flip term. In particular, westudy the effects of the spin-flip term alone on the Josephson current between two spin-singletsuperconductors as a function of precession frequency and junction transparency. The systemdisplays a steady-state solution although the magnitude and nature of the current is indeedaffected by the precession frequency of the classical spin.
• 25.
Virginia Commonwealth University, USA.
Stable magnetic order and charge induced rotation of magnetizationin nano-clusters2014In: Applied Physics Letters, ISSN 0003-6951, E-ISSN 1077-3118, Vol. 105, no 152409Article in journal (Refereed)
Efficient control of magnetic anisotropy and the orientation of magnetization are of centralimportance for the application of nanoparticles in spintronics. Conventionally, magnetization iscontrolled directly by an external magnetic field or by an electric field via spin-orbit coupling.Here, we demonstrate a different approach to control magnetization in small clusters. We firstshow that the low magnetic anisotropy of a Co5 cluster can be substantially enhanced by attachingbenzene molecules due to the mixing between p states of C and the d states of Co sites. We thenshow that the direction of magnetization vector of Co5 sandwiched between two benzene moleculesrotates by 90 when an electron is added or removed from the system. An experimental set up torealize such effect is also suggested.
• 26.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
Ab initio calculations of the magnetic properties of Mn impurities on GaAs (110) surfaces2012In: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol. B 85, Article ID: 155306- p.Article in journal (Refereed)
We present a computational study of individual and pairs of substitutional Mn impurities on the (110) surface of GaAs samples based on density functional theory. We focus on the anisotropy properties of these magnetic centers and their dependence on on-site correlations, spin-orbit interaction, and surface-induced symmetry-breaking effects. For a Mn impurity on the surface, the associated acceptor-hole wave function tends to be more localized around the Mn than for an impurity in bulk GaAs. The magnetic anisotropy energy for isolated Mn impurities is of the order of 1 meV, and can be related to the anisotropy of the orbital magnetic moment of the Mn acceptor hole. Typically Mn pairs have their spin magnetic moments parallel aligned, with an exchange energy that strongly depends on the pair orientation on the surface. The spin magnetic moment and exchange energies for these magnetic entities are not significantly modified by the spin-orbit interaction, but are more sensitive to on-site correlations. Correlations in general reduce the magnetic anisotropy for most of the ferromagnetic Mn pairs.
• 27.
Virginia Commonwealth University, USA.
Virginia Commonwealth University, USA.
On the enhancement of magnetic anisotropy in cobalt clusters via non-magnetic doping2014In: Journal of Physics: Condensed Matter, ISSN 0953-8984, E-ISSN 1361-648X, Vol. 26, no 125303Article in journal (Refereed)
We show that the magnetic anisotropy energy (MAE) in cobalt clusters can be significantlyenhanced by doping them with group IV elements. Our firstprincipleselectronic structurecalculations show that Co4C2 and Co12C4 clusters have MAEs of 25 K and 61 K, respectively. The large MAE is due to controlled mixing between Co dandC pstatesand can be furthertuned by replacing C by Si. Larger assemblies of such primitive units are shown to be stablewith MAEs exceeding 100 K in units as small as 1.2 nm, in agreement with the recentobservation of large coercivity. These results may pave the way for the use of nanoclustersinhigh density magnetic memory devices for spintronics applications
• 28.
Kyoto University, Japan.
Kyoto University, Japan. Kyoto University, Japan. Kyoto University, Japan. Kyoto University, Japan. Donostia International Physics Center (DIPC), Spain ; IKERBASQUE, Basque Foundation for Science, Spain. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. University of Toyama, Japan.
Controlled switching of single-molecule junctions by mechanical motion of a phenyl ring2015In: Beilstein Journal of Nanotechnology, ISSN 2190-4286, Vol. 6, 2088-2095 p.Article in journal (Refereed)
Mechanical methods for single-molecule control have potential for wide application in nanodevices and machines. Here we demonstrate the operation of a single-molecule switch made functional by the motion of a phenyl ring, analogous to the lever in a conventional toggle switch. The switch can be actuated by dual triggers, either by a voltage pulse or by displacement of the electrode, and electronic manipulation of the ring by chemical substitution enables rational control of the on-state conductance. Owing to its simple mechanics, structural robustness, and chemical accessibility, we propose that phenyl rings are promising components in mechanical molecular devices.
• 29.
Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Fysik.
Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Fysik.
Quantum size effects of CO reactivity on metallic quantum dots2006In: Surface Science, ISSN 0039-6028, E-ISSN 1879-2758, Vol. 600, no 1, 6-14 p.Article in journal (Refereed)
We study the reactivity of a metallic quantum dot when exposed to a gas phase CO molecule. First, we perform a Newns–Anderson model calculation in which the valence electrons of the quantum dot are confined by a finite potential well and the molecule is characterized by its lowest unoccupied molecular orbital in the gas phase. A pronounced quantum size effect regarding the charge transfer between the quantum dot and molecule is observed. We then perform a first-principles calculation for a selected size interval. The quantum dot is described within the jellium model and the molecule by pseudopotentials. Our results show that the charge transfer between the quantum dot and the molecule depends critically on the size of the quantum dot, and that this dependence is intimately connected with the electronic structure. The key factor for charge transfer is the presence of states with the symmetry of the chemically active molecular orbital at the Fermi level.
• 30.
Uppsala University, Sweden.
Uppsala University, Sweden. Uppsala University, Sweden. Växjö University, Faculty of Mathematics/Science/Technology, School of Mathematics and Systems Engineering. Fysikavdelningen. Uppsala University, Sweden.
Large magnetic circular dichroism in resonant inelastic x-ray scattering at the Mn L-edge of Mn-Zn ferrite2006In: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol. 74, no 17, 172409Article in journal (Refereed)
We report resonant inelastic x-ray scattering (RIXS) excited by circularly polarized x rays on Mn-Zn ferrite at the Mn L2,3 resonances. We demonstrate that crystal-field excitations, as expected for localized systems, dominate the RIXS spectra and thus their dichroic asymmetry cannot be interpreted in terms of spin-resolved partial density of states, which has been the standard approach for RIXS dichroism. We observe large dichroic RIXS at the L2 resonance which we attribute to the absence of metallic core hole screening in the insulating Mn ferrite. On the other hand, reduced L3-RIXS dichroism is interpreted as an effect of longer scattering time that enables spin-lattice core hole relaxation via magnons and phonons occurring on a femtosecond time scale.
• 31.
KNT University of Technology.
Influence of in-Plane Magnetic Field on Spin Polarization in the Presence of the Oft-neglected k3-Dresselhaus Spin-Orbit Coupling2008In: Physics Letters A, ISSN 0375-9601, Vol. 372, no 38, 6022-6025 p.Article in journal (Refereed)
The influence of in-plane magnetic field on spin polarization in the presence of the oft-neglected k3k3-Dresselhaus spin–orbit coupling was investigated. The k3k3-Dresselhaus term can produce a limited spin polarization. The in-plane magnetic field plays a great role in the tunneling process. It can generate the perfect spin polarization of the electrons and the ideal transmission coefficient for spin up and down simultaneously. In energy scale, complete separation between spin up and down resonance was obtained by a relatively higher in-plane magnetic field while a comparatively lower in-plane magnetic field vanishes the spin separation. On the other hand, the spin relaxation can be suppressed by compensating the oft-neglected k3k3-Dresselhaus spin orbit coupling using a relatively lower in-plane magnetic field.
• 32.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Magnetic solotronics near the surface of a semiconductor and a topological insulator2015Doctoral thesis, comprehensive summary (Other academic)
Technology where a solitary dopant acts as the active component of an opto-electronic device is an emerging field known as solotronics, and bears the promise to revolutionize the way in which information is stored, processed and transmitted. Magnetic doped semiconductors and in particular (Ga, Mn)As, the archetype of dilute magnetic semiconductors, and topological insulators (TIs), a new phase of quantum matter with unconventional characteristics, are two classes of quantum materials that have the potential to advance spin-electronics technology. The quest to understand and control, at the atomic level, how a few magnetic atoms precisely positioned in a complex environment respond to external stimuli, is the red thread that connects these two quantum materials in the research presented here.
The goal of the thesis is in part to elucidate the properties of transition metal (TM) impurities near the surface of GaAs semiconductors with focus on their response to local magnetic and electric fields, as well as to investigate the real-time dynamics of their localized spins. Our theoretical analysis, based on density functional theory (DFT) and using tight-binding (TB) models, addresses the mid-gap electronic structure, the local density of states (LDOS) and the magnetic anisotropy energy of individual Mn and Fe impurities near the (110) surface of GaAs. We investigate the effect of a magnetic field on the Mn acceptor LDOS measured in cross-sectional scanning tunneling microscopy, and provide an explanation of why the experimental LDOS images depend weakly on the field direction despite the strongly anisotropic nature of the Mn acceptor wavefunction. We also investigate the effects of a local electrostatic field generated by nearby charged As vacancies, on individual and pairs of ferromagnetically coupled magnetic dopants near the surface of GaAs, providing a means to control electrically the exchange interaction of Mn pairs. Finally, using the mixed quantum-classical scheme for spin dynamics, we calculate explicitly the time evolution of the Mn spin and its bound acceptor, and analyze the dynamic interaction between pairs of ferromagnetically coupled magnetic impurities in a nanoscaled semiconductor.
The second part of the thesis deals with the theoretical investigation of a single substitutional Mn impurity and its associated acceptor state on the (111) surface of Bi2Se3 TI, using an approach that combines DFT and TB calculations. Our analysis clarifies the crucial role played by the spatial overlap and the quasi-resonant coupling between the Mn-acceptor and the topological surface states inside the Bi2Se3 band gap, in the opening of a gap at the Dirac point. Strong electronic correlations are also found to contribute significantly to the mechanism leading to the gap, since they control the hybridization between the p orbitals of nearest-neighbor Se atoms and the acceptor spin-polarization. Our results explain the effects of inversion-symmetry and time-reversal symmetry breaking on the electronic states in the vicinity of the Dirac point, and contribute to clarifying the origin of surface-ferromagnetism in TIs. The promising potential of magnetic-doped TIs accentuates the importance of our contribution to the understanding of the interplay between magnetic order and topological protected surface states.
• 33.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
Anisotropy energy and local density of states of Mn acceptor states near the (110) surface of GaAs in the presence of an external magnetic field2011Conference paper (Refereed)
• 34.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics. University of Texas at Austin.
Effect of As vacancies on the binding energy and exchange splitting of Mn impurities on a GaAs surface2012In: Bulletin of the American Physical Society, APS March Meeting 2012, Volume 57, Number 1, 2012, L14.00002- p.Conference paper (Other academic)
State-of-the-art STM spectroscopy is nowadays able to manipulate and probe the magnetic properties of individual magnetic impurities located near the surface of a semiconductor. A recent advance of these technique employs the electric field generated by a As vacancy in GaAs to affect the environment surrounding substitutional Mn impurities in the host material [1]. Here we calculate the binding energy of a single Mn dopant in the presence of nearby As vacancies, by using a recently-introduced tight-binding method [2] that is able to capture the salient features of Mn impurities near the (110) GaAs surface. The As vacancies, modeled by the repulsive potential they produce, are expected to decrease the acceptor binding energy in agreement with experiment [1]. Within this theoretical model, we investigate the possible enhancement of the exchange splitting for a pair of ferromagnetically ordered Mn impurities, observed experimentally when As vacancies are present [3]. We also calculate the response of the Mn-impurity---As-vacancy complex to an external magnetic field. \\[4pt] [1] H. Lee and J. A. Gupta, Science, 1807-1810, (2010). \\[0pt] [2] T. O. Strandberg, C. M. Canali, A. H. MacDonald, Phys. Rev. B 80, 024425, (2009). \\[0pt] [3] J.A. Gupta, private communication.
• 35.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics. University of Texas at Austin.
Effect of magnetic field on the local density of states of Mn acceptor magnets in GaAs2011In: Bulletin of the American Physical Society, Volume 56, Number 1: APS March Meeting 2011, 2011, W15.00002- p.Conference paper (Other academic)
Advances in atomic manipulation, real-space imagining and spectroscopic power of STM techniques have recently made it possible to investigate the local electronic properties of a few substitutional Mn impurities inserted in the GaAs surfaces [1]. Theoretical work [2] predicts that the local density of states in the vicinity of the Mn impurities should depend strongly on the direction of the Mn magnetic moment. In contrast, recent STM experiments [3] from several groups find a negligible dependence of the tunneling LDOS on the magnetic field direction for applied fields up to 7 T. Based on tight- binding calculations we interpret these findings by arguing that large LDOS signals require large angle moment rotations, and that the strength of the magnetic field used in present experiments is not strong enough to substantially modify the magnetic anisotropy landscape of Mn impurities near the GaAs surface.\\[4pt] [1] D. Kitchen et al., Nature, 442, 436 (2006); J. K. Garleff et al., Phys. Rev. B 82, 035303 (2010).\\[0pt] [2] T. O. Strandberg, C. M. Canali, and A. H. MacDonald, Phys. Rev. B 80, 024425 (2009). [3] P. M. Koenraad, Private Communication.
• 36.
Linnaeus University, Faculty of Science and Engineering, School of Engineering.
Linnaeus University, Faculty of Science and Engineering, School of Engineering.
Local manipulation of the magnetic properties of Mn impurities on a GaAs surface by As vacancies2012Conference paper (Refereed)
• 37.
KNT University of Technology.
Efficient Spin Filtering in a Disordered Semiconductor Superlattice in the Presence of Dresselhaus Spin-Orbit coupling2008In: Physics Letters A, ISSN 0375-9601, Vol. 372, no 11, 1926-1929 p.Article in journal (Refereed)
The influence of the Dresselhaus spin–orbit coupling on spin polarization by tunneling through a disordered semiconductor superlattice was investigated. The Dresselhaus spin–orbit coupling causes the spin polarization of the electron due to transmission possibilities difference between spin up and spin down electrons. The electron tunneling through a zinc-blende semiconductor superlattice with InAs and GaAs layers and two variable distance InxGa(1−x)As impurity layers was studied. One hundred percent spin polarization was obtained by optimizing the distance between two impurity layers and impurity percent in disordered layers in the presence of Dresselhaus spin–orbit coupling. In addition, the electron transmission probability through the mentioned superlattice is too much near to one and an efficient spin filtering was recommended.
• 38.
KNT University of Technology.
Particular Nanowire Superlattice as a Spin Filter2009In: Physics Letters A, ISSN 0375-9601, Vol. 373, no 43, 3994-3996 p.Article in journal (Refereed)
A nanowire superlattice of InAs and GaAs layers with In0.47Ga0.53As as the impure layers is proposed. The oft-neglected k3k3 Dresselhaus spin–orbit coupling causes the spin polarization of the electron but often can produce a limited spin polarization. In this nanowire superlattice, Dresselhaus term produce complete spin filtering by optimizing the distance between the In0.47Ga0.53As layers and the Indium (In) in the impure layers. The proposed structure is an optimized nanowire superlattice that can efficiently filter any component of electron spins according to its energy. In fact, this nanowire superlattice is an energy dependent spin filter structure.
• 39.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics. Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
As vacancies in MnGaAs: tight binding and first-principles studies2012Conference paper (Refereed)
• 40.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Virginia Commonwealth Univ, Richmond, VA 23284 USA. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Electronic structure and magnetic properties of Mn and Fe impurities near the GaAs (110) surface2014In: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol. 89, no 16, Article ID: 165408- p.Article in journal (Refereed)
Combining density functional theory calculations and microscopic tight-binding models, we investigate theoretically theelectronic and magnetic properties of individual substitutional transition-metal impurities (Mn and Fe) positioned in the vicinity of the (110) surface of GaAs. For the case of the [Mn2+](0) plus acceptor-hole (h) complex, the results of a tight-binding model including explicitly the impurity d electrons are in good agreement with approaches that treat the spin ofthe impurity as an effective classical vector. For the case of Fe, where both the neutral isoelectronic [Fe3+](0) and the ionized [Fe2+](-)states are relevant to address scanning tunneling microscopy (STM) experiments, the inclusion of d orbitals is essential. We find that the in-gap electronic structure of Fe impurities is significantly modified by surface effects. For the neutral acceptor state [Fe2+, h](0), the magnetic-anisotropy dependence on the impurity sublayer resembles the case of [Mn2+, h](0). In contrast, for [Fe3+](0) electronic configuration the magnetic anisotropy behaves differently and it is considerably smaller. For this state we predict that it is possible to manipulate the Fe moment, e. g., by an external magnetic field, with detectable consequences in the local density of states probed by STM.
• 41.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
University of Texas at Austin, USA. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Electric manipulation of the Mn-acceptor binding energy and the Mn-Mn exchange interaction on the GaAs (110) surface by nearby As vacancies2015In: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol. 92, no 4, 045304Article in journal (Refereed)
We investigate theoretically the effect of nearby As (arsenic) vacancies on the magnetic properties of substitutional Mn (manganese) impurities on the GaAs (110) surface, using a microscopic tight-binding model which captures the salient features of the electronic structure of both types of defects in GaAs. The calculations show that the binding energy of the Mn-acceptor is essentially unaffected by the presence of a neutral As vacancy, even at the shortest possible VAs--Mn separation. On the other hand, in contrast to a simple tip-induced-band-bending theory and in agreement with experiment, for a positively charged As vacancy the Mn-acceptor binding energy is significantly reduced as the As vacancy is brought closer to the Mn impurity. For two Mn impurities aligned ferromagnetically, we find that nearby charged As vacancies enhance the energy level splitting of the associated coupled acceptor levels, leading to an increase of the effective exchange interaction. Neutral vacancies leave the exchange splitting unchanged. Since it is experimentally possible to switch reversibly between the two charge states of the vacancy, such a local electric manipulation of the magnetic dopants could result in an efficient real-time control of their exchange interaction.
• 42.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Spin dynamics of Mn impurities and their bound acceptors in GaAs2014In: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol. 90, no 24, 245406Article in journal (Refereed)
We present results of tight-binding spin-dynamics simulations of individual and pairs of substitutionalMn impurities in GaAs. Our approach is based on the mixed quantum-classical schemefor spin dynamics, with coupled equations of motions for the quantum subsystem, representing thehost, and the localized spins of magnetic dopants, which are treated classically. In the case ofa single Mn impurity, we calculate explicitly the time evolution of the Mn spin and the spins ofnearest-neighbors As atoms, where the acceptor (hole) state introduced by the Mn dopant resides.We relate the characteristic frequencies in the dynamical spectra to the two dominant energy scalesof the system, namely the spin-orbit interaction strength and the value of the p-d exchange couplingbetween the impurity spin and the host carriers. For a pair of Mn impurities, we find signaturesof the indirect (carrier-mediated) exchange interaction in the time evolution of the impurity spins.Finally, we examine temporal correlations between the two Mn spins and their dependence on theexchange coupling and spin-orbit interaction strength, as well as on the initial spin-configuration andseparation between the impurities. Our results provide insight into the dynamic interaction betweenlocalized magnetic impurities in a nano-scaled magnetic-semiconductor sample, in the extremelydilute(solotronics) regime.
• 43.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Time-Dependent Spin Dynamics of Few Transition Metal Impurities in a Semiconductor Host2014In: 2014 MRS Spring Meeting and Exhibit, April 21-25, San Francisco, 2014Conference paper (Refereed)
Recently, remarkable progress has been achieved in describing electronic and magnetic properties of individual dopants in semiconductors, both experimentally [1] and theoretically [2, 3], offering exciting prospects for applications in future electronic devices. In view of potential novel applications, which involve communication between individual magnetic dopants, mediated by the electronic carriers of the host, the focus of this research field has been shifting towards fundamental understanding and control of spin dynamics of these atomic-scale magnetic centers. Importantly, the development of time-resolved spectroscopic techniques has opened up the possibility to probe the dynamics of single spin impurities experimentally [4]. These advances pose new challenges for theory, calling for a fully microscopic time-dependent description of spin dynamics of individual impurities in the solid states environment.We present results of theoretical investigations of real-time spin dynamics of individual and pairs of transition metal (Mn) impurities in GaAs. Our approach combines the microscopic tight-binding description of substitutional dopants in semicondutors [3] with the time-dependent scheme for simulations of spin dynamics [5], based on the numerical integration of equations of motion for the coupled system of the itinerant electronic degrees of freedom of the host and the localized impurity spins. We study the spin dynamics of impurities in finite clusters containing up to hundred atoms, over time scales of a few hundred femtoseconds. In particular, we calculate explicitly the time-evolution of the impurity spins and electrons of the host upon weak external perturbations. From the Fourier spectra of the time-dependent spin trajectories, we identify the energy scales associated with intrinsic interactions of the system, namely the spin-orbit interaction and the exchange interaction between the impurity spins and the spins of the nearest-neighbor atoms of the host. Furthermore, we investigate the effective dynamical coupling between the spins of two spatially separated Mn impurities, mediated by the host carriers. We find signatures of ferromagnetic coupling between the impurities in the time-evolution of their spins. Finally, we propose a scheme for investigating the spin relaxation of Mn dopants in GaAs, by extending the time-dependent approach for spin dynamics in an isolated conservative system to the case of an open system, with dephasing mechanisms included as an effective interaction between the system and an external bath [5].
• 44.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Trend of the magnetic anisotropy for individual Mn dopants near the (110) GaAs surface2014In: Journal of Physics: Condensed Matter, ISSN 0953-8984, E-ISSN 1361-648X, Vol. 26, no 39, Article ID: 394006- p.Article in journal (Refereed)
Using a microscopic finite-cluster tight-binding model, we investigate the trend of the magnetic anisotropy energy as a function of the cluster size for an individual Mn impurity positioned in the vicinity of the (1 1 0) GaAs surface. We present results of calculations for large cluster sizes containing approximately 104 atoms, which have not been investigated so far. Our calculations demonstrate that the anisotropy energy of a Mn dopant in bulk GaAs, found to be non-zero in previous tight-binding calculations, is purely a finite size effect that vanishes with inverse cluster size. In contrast to this, we find that the splitting of the three in-gap Mn acceptor energy levels converges to a finite value in the limit of the infinite cluster size. For a Mn in bulk GaAs this feature is related to the nature of the mean-field treatment of the coupling between the impurity and its nearest neighbor atoms. We also calculate the trend of the anisotropy energy in the sublayers as the Mn dopant is moved away from the surface towards the center of the cluster. Here the use of large cluster sizes allows us to position the impurity in deeper sublayers below the surface, compared to previous calculations. In particular, we show that the anisotropy energy increases up to the fifth sublayer and then decreases as the impurity is moved further away from the surface, approaching its bulk value. The present study provides important insights for experimental control and manipulation of the electronic and magnetic properties of individual Mn dopants at the semiconductor surface by means of advanced scanning tunneling microscopy techniques.
• 45.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Department of Physics, University of Texas at Austin, U.S.A.
Theoretical studies of single magnetic impurities on the surface of semiconductors and topological insulators2013In: MRS Online Proceedings Library/Volume 1564/2013, Materials Research Society, 2013Conference paper (Refereed)
We present results of theoretical studies of transition metal dopants in GaAs, based on microscopic tight-binding model and ab-initio calculations. We focus in particular on how the vicinity of surface affects the properties of the hole-acceptor state, its magnetic anisotropy and its magnetic coupling to the magnetic dopant. In agreement with STM experiments, Mn substitutional dopants on the (110) GaAs surface give rise to a deep acceptor state, whose wavefunction is localized around the Mn center. We discuss a refinement of the theory that introduces explicitly the d-levels for the TM dopant. The explicit inclusion of d-levels is particularly important for addressing recent STM experiments on substitutional Fe in GaAs. In the second part of the paper we discuss an analogous investigation of single dopants in Bi2Se3 three-dimensional topological insulators, focusing in particular on how substitutional impurities positioned on the surface affect the electronic structure in the gap. We present explicit results for BiSe antisite defects and compare with STM experiments.
• 46.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Virginia Commonwealth University, USA. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Interplay between Mn-acceptor state and Dirac surface states in Mn-doped Bi2Se3 topological insulator2014In: MAR14 Meeting of The American Physical Society, Denver, Colorado: American Physical Society , 2014Conference paper (Refereed)
• 47.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Interplay between Mn-acceptor state and Dirac surface states in Mn-doped Bi2Se3 topological insulator2014In: Physical Review B. Condensed Matter and Materials Physics, ISSN 1098-0121, E-ISSN 1550-235X, Vol. 90, Article ID: 195441- p.Article in journal (Refereed)
We investigate the properties of a single substitutional Mn impurity and its associated acceptor state on the (111) surface of Bi$_2$Se$_3$ topological insulator. Combining \textit{ab initio} calculations with microscopic tight-binding modeling, we identify the effects of inversion-symmetry and time-reversal-symmetry breaking on the electronic states in the vicinity of the Dirac point. In agreement with experiments, we find evidence that the Mn ion is in ${+2}$ valence state and introduces an acceptor in the bulk band gap. The Mn-acceptor has predominantly $p$--character, and is localized mainly around the Mn impurity and its nearest-neighbor Se atoms. Its electronic structure and spin-polarization are determined by the hybridization between the Mn $d$--levels and the $p$--levels of surrounding Se atoms, which is strongly affected by electronic correlations at the Mn site. The opening of the gap at the Dirac point depends crucially on the quasi-resonant coupling and the strong real-space overlap between the spin-chiral surface states and the mid-gap spin-polarized Mn-acceptor states.
• 48.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering. Linnaeus University, Faculty of Technology, Department of Physics and Electrical Engineering.
The role of d levels of substitutional magnetic impurities at the (110) GaAs surface2013Conference paper (Other academic)
The study of the spin of individual transition-metal dopants in a semiconductor host is an emergent field known as magnetic solotronics, bearing exciting prospects for novel spintronics devices at the atomic scale. Advances in different STM based techniques allowed experimentalists to investigate substitutional dopants at a semiconductor surface with unprecedented accuracy and degree of details [1]. Theoretical studies based both on microscopic tight-binding (TB) models and DFT techniques have contributed in elucidating the experimental findings. In particular, for the case of Mn dopants on the (110) GaAs surface, TB models [2] have provided a quantitative description of the properties of the associated acceptor states. Most of these TB calculations ignore dealing explicitly with the Mn d-levels and treat the associated magnetic moment as a classical vector. However recent STM experiments [3] involving other TM impurities, such as Fe, reveal topographic features that might be related to electronic transitions within the d-level shell of the dopant. In this work we have included explicitly the d levels in the Hamiltonian. The parameters of the model have been extracted from DFT calculations. We have investigated the role that d levels play on the properties of the acceptor states of the doped GaAs(110) surface, and analyzed their implications for STM spectroscopy.
• 49.
Virginia Commonwealth University, USA.
Virginia Commonwealth University, USA. Virginia Commonwealth University, USA. Virginia Commonwealth University, USA.
Robust Magnetic Moments on Impurities in Metallic Clusters: Localized Magnetic States in Superatoms2013In: Journal of Physical Chemistry A, ISSN 1089-5639, E-ISSN 1520-5215, Vol. 117, no 20, 4297-4303 p.Article in journal (Refereed)
Introducing magnetic impurities into clusters of simplemetals can create localized states for higher angular momentum quantumnumbers (l = 2 or 3) that can breed magnetism analogous to that invirtual bound states in metallic hosts, offering a new recipe for magneticsuperatoms. In this work, we demonstrate that MnCan clusterscontaining 6−15 Ca atoms show a spin magnetic moment of 5.0 μBirrespective of the cluster size. Theoretical analysis reveals that the Mn dstates hybridize only partially with superatomic states and introduce extramajority and minority d states, largely localized at the Mn site, with alarge gap. Successive addition of Ca atoms introduces superatomic statesof varying angular momentum that are embedded in this gap, allowingcontrol over the stability of the motifs without altering the moment. Assemblies of such clusters can offer novel electronic features due to theformation of localized magnetic “quasibound states” in a confined nearlyfree electron gas.
• 50.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics.
Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics. Navy Research Laboratory, Washington DC, (U.S.A). Linnaeus University, Faculty of Science and Engineering, School of Computer Science, Physics and Mathematics. Universitá dell'Insubria, Como (Italy).
Theory of tunneling spectroscopy in a Mn12 single-electron transistor by DFT methods2010In: Physical Review Letters, ISSN 0031-9007, Vol. 104, no 1, 017202-017205 p.Article in journal (Refereed)
12 1 - 50 of 82
Cite
Citation style
• apa
• harvard1
• ieee
• modern-language-association-8th-edition
• vancouver
• Other style
More styles
Language
• de-DE
• en-GB
• en-US
• fi-FI
• nn-NO
• nn-NB
• sv-SE
• Other locale
More languages
Output format
• html
• text
• asciidoc
• rtf
|
2017-11-19 21:30:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45149800181388855, "perplexity": 2106.998592102259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805809.59/warc/CC-MAIN-20171119210640-20171119230640-00748.warc.gz"}
|
https://rdrr.io/cran/bigstatsr/man/big_tcrossprodSelf.html
|
big_tcrossprodSelf: Tcrossprod In bigstatsr: Statistical Tools for Filebacked Big Matrices
Description
Compute X.row X.row^T for a Filebacked Big Matrix X after applying a particular scaling to it.
Usage
1 2 3 4 5 6 7 8 9 10 big_tcrossprodSelf( X, fun.scaling = big_scale(center = FALSE, scale = FALSE), ind.row = rows_along(X), ind.col = cols_along(X), block.size = block_size(nrow(X)) ) ## S4 method for signature 'FBM,missing' tcrossprod(x, y)
Arguments
X An object of class FBM. fun.scaling A function that returns a named list of mean and sd for every column, to scale each of their elements such as followed: \frac{X_{i,j} - mean_j}{sd_j}. Default doesn't use any scaling. ind.row An optional vector of the row indices that are used. If not specified, all rows are used. Don't use negative indices. ind.col An optional vector of the column indices that are used. If not specified, all columns are used. Don't use negative indices. block.size Maximum number of columns read at once. Default uses block_size. x A 'double' FBM. y Missing.
Value
A temporary FBM, with the following two attributes:
• a numeric vector center of column scaling,
• a numeric vector scale of column scaling.
Matrix parallelization
Large matrix computations are made block-wise and won't be parallelized in order to not have to reduce the size of these blocks. Instead, you may use Microsoft R Open or OpenBLAS in order to accelerate these block matrix computations. You can also control the number of cores used with bigparallelr::set_blas_ncores().
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 X <- FBM(13, 17, init = rnorm(221)) true <- tcrossprod(X[]) # No scaling K1 <- tcrossprod(X) class(K1) all.equal(K1, true) K2 <- big_tcrossprodSelf(X) class(K2) K2\$backingfile all.equal(K2[], true) # big_tcrossprodSelf() provides some scaling and subsetting # Example using only half of the data: n <- nrow(X) ind <- sort(sample(n, n/2)) K3 <- big_tcrossprodSelf(X, fun.scaling = big_scale(), ind.row = ind) true2 <- tcrossprod(scale(X[ind, ])) all.equal(K3[], true2)
|
2021-09-20 21:24:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19975559413433075, "perplexity": 4684.370272507581}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057091.31/warc/CC-MAIN-20210920191528-20210920221528-00276.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-10-counting-methods-and-probability-10-4-find-probabilities-of-disjoint-and-overlapping-events-10-4-exercises-problem-solving-page-712/45
|
## Algebra 2 (1st Edition)
$0.8488$
The probability is $1$ minus the probability that none of them bring the same item, which is: $1-\frac{10\cdot9\cdot8\cdot7\cdot6\cdot5}{10^6}=0.8488$
|
2021-04-17 23:22:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8281733989715576, "perplexity": 469.90969990129196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038464065.57/warc/CC-MAIN-20210417222733-20210418012733-00096.warc.gz"}
|
https://tex.stackexchange.com/questions/277478/why-do-i-get-a-message-error-when-using-tkzdrawline
|
# Why do I get a message error when using \tkzDrawLine
What is the problem with the \tkzDrawLine command from thetkz-euclide package. It induces an error message when compiling:
! Undefined control sequence. l.1 \tkz @line@start l.36 \tkzDrawLine(C,C'') The control sequence at the end of the top line of your error message was never \def'ed. If you have misspelled it (e.g., \hobx'), typeI' and the correct spelling (e.g., I\hbox'). Otherwise just continue, and I'll forget about whatever was undefined.
! Undefined control sequence. l.1 \tkz @line@end l.36 \tkzDrawLine(C,C'')
The control sequence at the end of the top line of your error message was never \def'ed. If you have misspelled it (e.g., \hobx'), typeI' and the correct spelling (e.g., I\hbox'). Otherwise just continue, and I'll forget about whatever was undefined.
\documentclass[french,tikz,border=2.5mm]{standalone}
\usepackage[ansinew]{inputenc}% caractères accentués
\usepackage[T1]{fontenc} % fontes extended computer modern (EC)
\usepackage{lmodern} % l'affichage correct des caractères diacritiqués français
\usepackage{babel}% \usepackage[french]{babel} typographie française
\usepackage{xcolor}
\usepackage{tikz,tkz-euclide,siunitx}
\usetkzobj{all}
%\setcounter{page}{4}
\begin{document}
\begin{tikzpicture}
\tkzDefPoint(0,0){A}
\tkzDefPoint(55:8.8){C}
\tkzDefPoint(55:5.2){B}
\tkzDefShiftPoint[B](20:3){B'}
\tkzDefShiftPoint[C](20:-3){C'}
\tkzDefShiftPoint[C](180:5){C''}
\tkzDrawSegment[line cap =round, double distance=3mm](A,C)
\tkzDrawPoints(A,B,C,C'')
\begin{scope}[very thick]
\tkzDrawVector[-Stealth](B',B)
\tkzDrawVector(C',C)
\end{scope}
\tkzLabelPoint(C){$$C$$}
\tkzLabelPoint(A){$$A$$}
\tkzLabelPoint(B){$$B$$}
\tkzLabelPoint(C''){$$C''$$}
\tkzDrawLine(C,C'')
\end{tikzpicture}
\end{document}
• Are you sure you should use ()'s and not {}'s for some of these? (I'm not bear a computer so I cannot test) Nov 9 '15 at 18:59
• yes we use ()'s with tkz-euclide package macros. Nov 9 '15 at 19:06
• Then start debugging by commenting out all tkz lines, then remove the commenting one line at a time, compiling each time Nov 9 '15 at 19:20
• the line that is the problem is the last one : \tkzDrawLine(C,C'') Nov 9 '15 at 19:35
• It appears that tkz-euclide is not compatible with the babel TikZ library. Nov 9 '15 at 20:56
The problem is with the babel library. When it's loaded, some TikZ commands are passed through \scantokens and so one must ascertain that @ has the correct category code. Unfortunately, tkz-euclide doesn't, in the \tkzDrawLine macro (actually in the internal version \@tkzDrawLine.
The simplest workaround is to add \makeatletter at the appropriate spot.
\documentclass[french,tikz,border=2.5mm]{standalone}
\usepackage[ansinew]{inputenc}% caractères accentués
\usepackage[T1]{fontenc} % fontes extended computer modern (EC)
\usepackage{lmodern} % l'affichage correct des caractères diacritiqués français
\usepackage{babel}% \usepackage[french]{babel} typographie française
\usepackage{xcolor}
\usepackage{tikz,tkz-euclide,siunitx}
\usepackage{etoolbox} % for \patchcmd
\usetkzobj{all}
% make \tkzDrawLine compatible with the babel TikZ library
\makeatletter
\patchcmd{\tkz@DrawLine}{\begingroup}{\begingroup\makeatletter}{}{}
\makeatother
\begin{document}
\begin{tikzpicture}
\tkzDefPoint(0,0){A}
\tkzDefPoint(55:8.8){C}
\tkzDefPoint(55:5.2){B}
\tkzDefShiftPoint[B](20:3){B'}
\tkzDefShiftPoint[C](20:-3){C'}
\tkzDefShiftPoint[C](180:5){C''}
\tkzDrawSegment[line cap =round, double distance=3mm](A,C)
\tkzDrawPoints(A,B,C,C'')
\begin{scope}[very thick]
\tkzDrawVector[-Stealth](B',B)
\tkzDrawVector(C',C)
\end{scope}
\tkzLabelPoint(C){$$C$$}
\tkzLabelPoint(A){$$A$$}
\tkzLabelPoint(B){$$B$$}
\tkzLabelPoint(C''){$$C''$$}
\tkzDrawLine(C,C'')
\end{tikzpicture}
\end{document}
|
2021-09-20 21:15:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8057081699371338, "perplexity": 11837.522758607322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057091.31/warc/CC-MAIN-20210920191528-20210920221528-00556.warc.gz"}
|
https://www.transtutors.com/questions/levine-inc-which-produces-a-single-product-has-prepared-the-following-standard-cost--2556401.htm
|
# Levine Inc., which produces a single product, has prepared the following standard cost sheet for ... 1 answer below »
Levine Inc., which produces a single product, has prepared the following standard cost sheet for one unit of the product.
Direct materials (9 pounds at $1.90 per pound)$17.10 Direct labor (4 hours at $10.00 per hour)$40.00 During the month of April, the company manufactures 160 units and incurs the following actual costs. Direct materials purchased and used (2,100 pounds) 4,410 Direct labor (680 hours) \$6,664 Compute the total, price, and quantity variances for materials and labor. Total materials variance Materials price variance Materials quantity variance Total labor variance Labor price variance Labor quantity v ariance
HITEN B
Number of units manufactured 160 Standard material per unit 9 pounds Total standard material 1,440 Standard material rate per hour 1.9 Total standard direct material cost 2,736 Actual Direct material cost
## Plagiarism Checker
Submit your documents and get free Plagiarism report
Free Plagiarism Checker
## Recent Questions in Accounting - Others
Looking for Something Else? Ask a Similar Question
|
2020-09-30 16:29:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32838869094848633, "perplexity": 11392.62262016918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402127075.68/warc/CC-MAIN-20200930141310-20200930171310-00022.warc.gz"}
|
http://physics.stackexchange.com/questions/3928/multiple-classical-paths-from-hamiltons-principle?answertab=oldest
|
# Multiple classical paths from Hamilton's principle
Previous posts such as this ask about types of stationary point in Hamilton's Principle. There is, however, another aspect to discuss: the question as to whether the extremal path is unique.
One geometric way to envisage this is to assume that multiple paths are simultaneously extremal. I believe that this is an explanation for lenses, but I have not seen lenses explained as multiple classical solutions to Hamilton's Principle. (The multiple paths being the 360 degrees of rays between source and focus, etc also demonstrable through Fermat's principle.)
One can generalise lenses, but also consider a simpler case. Let the surface of a sphere be the action (phase space) surface which is minimized in classical paths. Thus (ignore antipodals here) between two points A and B the geodesic is the unique classical path. In quantum form the WKB approximation would no doubt have constructive maxima on this path.
However if the sphere has a disk (containing that geodesic) cut out, the shortest path now has exactly two choices: around one or the other rim from A to B. Presumably WKB would maximize the quantum paths on these two (although I havent proved this). If so then classically we have a quantum-like phenomenon: a particle has a choice in going from A to B. Experimentalists might see this and wonder whether the particle went from A to B via the LHS, the RHS or both....
-
It seems like you're asking about a classical analog to superselection sectors of quantum mechanics. One situation where this occurs is when considering particle motion of a manifold with non-trivial topology - i.e. with holes and handles. In such cases there can be more than one extremal path from A to B. When you talk about optics it is important to keep in mind that there are two different regimes, those of wave optics and geometric optics. In the second case one has well-defined "trajectories" and you can find extremal trajectories. Not so in the first case. – user346 Jan 26 '11 at 17:45
Non-trivial topologies always make Physics more interesting. Incidentally this question also has latent within it the question as to whether Phase Space trajectories can ever bifurcate. I had once thought not, but with all these hidden String Dimensions around these days who can tell... – Roy Simpson Jan 26 '11 at 17:57
[Another comment to answer transplant]
It seems like you're asking about a classical analog to the superselection sectors of quantum mechanics. One situation where this occurs in classical mechanics is when considering particle motion of a manifold with non-trivial topology - i.e. with holes and handles. In such cases there can be more than one extremal path from A to B. An example is an arcade pin-ball machine, if you're familiar with those.
Also, when you talk about optics it is important to keep in mind that there are two different regimes, those of wave optics and geometric optics. In the second case one has well-defined "trajectories" and you can find extremal trajectories. Not so in the first case.
-
Dear Roy, for a chosen initial configuration $x_i(t_i)$ and final configuration $x_f(t_f)$, there can exist more than one local extrema of the action. However, this point is irrelevant in classical physics. At every moment, including the initial moment $t_i$, the particle also has a well-defined velocity $\dot x_i(t_i)$, and the principle of least action is just a way to derive the differential equations that determine the motion. For a given $x(t_i)$ and $\dot x(t_i)$, the evolution will be inevitably unique.
In your case, the initial velocity either says that the particle will avoid the removed disk from the left side, or from the right side, or it will hit the missing disk (and perhaps gets reflected from it) - unless you include some potential that repels the particle from the disk, there is no reason why the particle should be obliged to avoid it. So no issue of the type you mention exists in classical physics.
Quantum mechanics
The situation is different in quantum mechanics. All trajectories contribute and as the double-slit experiment shows, there may be interference between many classical histories. The interference pattern in the double-slit experiment may be obtained by adding the neighborhood of "two classical trajectories" only - these trajectories are piecewise linear and go through the two slits.
Quantum tunneling may also be phrased in terms of a contribution of complexified trajectories in the complexified time - that are also local extrema of the action.
A much more interesting situation arises in quantum field theories. The vacuum - that lasts eternally - is a global minimum of the action. However, in the Euclidean spacetime, they may also exist other local minima. Because they're local minima, they also solve the classical equations of motion.
In quantum field theory, such solutions are called instantons because they are localized both in space and in the Euclidean time (near one instant of time). They contribute to probabilities of various processes because one must sum over all histories, including those that get mapped to the instanton if one uses the Euclidean formalism (via the Wick rotation). In particular, they produce the 't Hooft interaction in gauge theories - a product of all fermions in the theory (a fact that is derived from the fermionic zero modes on the background of the instanton).
Instantons have to be stable (minima of the Euclidean action, not maxima) and they're typically protected by a topological charge by which they differ from the vacuum configuration (some "winding number" or "homotopy"). There can also exist unstable solutions - the saddle points that are minima with respect to most directions but they are maxima with respect to a finite number of directions in the configuration space. Such saddle-like mixed extrema are called sphalerons.
-
Lubos, Just on the classical side for now: dont lenses work by Fermat's Principle, i.e. as classical waves? Classical waves being electromagnetic and I suppose gravitational. Thus multiple solutions exist classically if we deal with classical waves? – Roy Simpson Jan 26 '11 at 17:15
Also on the classical particle side chaotic uncertainty in the initial velocity could give this effect perhaps. – Roy Simpson Jan 26 '11 at 17:21
|
2015-03-29 04:41:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7355810403823853, "perplexity": 315.51279891509904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298177.21/warc/CC-MAIN-20150323172138-00177-ip-10-168-14-71.ec2.internal.warc.gz"}
|
https://dba.stackexchange.com/questions/180060/autovacuum-challenges-after-upgrade-to-postgresql-9-6
|
# autovacuum challenges after upgrade to postgresql 9.6
We have a cluster with several dozen databases. It was running great on 9.3 for a year (growing over time) when we did the upgrade to 9.6 last month. Since that time, the database will freeze on updates of large tables (tables with 100k+ rows), from between a minute, to 30-60 minutes.
Specifically, one of our processes updates several columns one-at-a-time in such a table, and in the code log, I can see where it pauses and hangs, for up to 30 minutes, and then picks up later as if nothing happened, and updates many consecutive columns in the same with no such hang, within seconds.
Increasing logging in the database showed nothing at first - and analysis of memory usage on the server and in postgres showed nothing unusual for resource usage. Eventually, I was able to see, with debug-level logging, that postgresql would go through vacuuming all databases, and the query in question would un-stick after the autovacuumer would hit that database, though not on the first loop of all databases, but after several passes. Since then, I have been messing with the autovacuum settings. I tried turning autovacuuming off, as it was off on 9.3 and somehow we survived - but that did not help. So I turned it on, and have been changing various settings as I learn about them. I got it so that the production server does not hang in this way through frequent autovacuuming, adding 16 workers, etc.
I am pasting the settings I have in general for our db server - the general memory settings were created with pgtune, the autovacuum settings I have been essentially tinkering with.
I am learning but am very much shooting in the dark - if anyone can provide guidance on how best to handle the analysis (through logs, etc) and optimizing of the autovacuuming on this instance of 9.6, that is what I am after.
Various queries at the official site and here designed to show locks don't turn up any waiting processes at all - but I do see at least one each entry for a process with an 'AccessShareLock' and 'RowExclusiveLock' in the pg admin iv dashboard, under Locks, when the query to update the column is running.
Again, what is frustrating - I don't see what is so complicated about a column update of a few hundred thousand records, one-at-a-time - it should be easy to see what is causing the hang (something about autovacuuming), and also how to fix it. Whereas the settings below seem to be working on production, I am not clear on the why, don't know how to improve them, and don't know how to create a working set of configurations on a dev server. Your help is appreciated.
autovacuum = on
log_autovacuum_min_duration = 200
autovacuum_max_workers = 16
autovacuum_naptime = 1min
autovacuum_vacuum_threshold = 5000
#autovacuum_analyze_threshold = 500
#autovacuum_vacuum_scale_factor = 0.2
#autovacuum_analyze_scale_factor = 0.1
#autovacuum_freeze_max_age = 200000000
#autovacuum_multixact_freeze_max_age =
autovacuum_vacuum_cost_delay = 20ms
autovacuum_vacuum_cost_limit = 2000
default_statistics_target = 100
maintenance_work_mem = 2920MB
checkpoint_completion_target = 0.8
effective_cache_size = 22GB
work_mem = 260MB
wal_buffers = 16MB
shared_buffers = 7680MB
min_wal_size = 80MB
max_wal_size = 2GB
I'd suggest you change your strategy.
Specifically, one of our processes updates several columns one-at-a-time in such a table, and in the code log, I can see where it pauses and hangs, for up to 30 minutes, and then picks up later as if nothing happened, and updates many consecutive columns in the same with no such hang, within seconds.
One such update can easily double the size of your table, since each UPDATE is basically equivalent to one DELETE one INSERT, which leaves each deleted row occupying useless space until the table it is VACUUMed and that space can be reused for the next round of UPDATEs.
You can do:
1. Change the strategy and update as many columns as you can at once. Your process will be much quicker because the data will be accessed many less times.
2. Issue a VACUUM from your code after each UPDATE round, so that the unused spaced is available for the next UPDATE round.
3. Under some circumstances, doing the UPDATEs in batches of (let's say) 1.000, and committing after each batch might ease the sitution.
Any combination of the three techniques will probably ease the updates.
• In the rather complicated context which of course cannot be provided here in detail, the method of one column update at a time is best, I assure you - and will be in place for this version of the code. The updates work fine - great in fact - once 'unstuck' - they just get stuck sometimes, at a 'random' column, and I know for a fact it is not a particular column or data type that causes it. So, while your feedback is insightful, and I am sure you are right, I am going to wait for an answer that helps me determine why the updates are sticking. – csdev Jul 17 '17 at 21:31
• Try option 2 if you can. It's the easier to implement, and might make a substantial difference. – joanolo Jul 17 '17 at 21:33
• cant edit comment above - I missed your suggestion #2, and I am going to try that first, and will mark you correct if it works – csdev Jul 17 '17 at 21:38
• doesnt fix it - verbose vacuum of the destination table says this, where the line with nonremovable rows is constantly updating if I re-run - INFO: vacuuming "public.xyz" INFO: index "xyz_seq1" now contains 565893 row versions in 2897 pages DETAIL: 0 index row versions were removed. 0 index pages have been deleted, 0 are currently reusable. CPU 0.00s/0.00u sec elapsed 0.00 sec. INFO: "xyz": found 0 removable, 64447 nonremovable row versions in 923 out of 38054 pages DETAIL: 0 dead row versions cannot be re There were 503 unused item pointers. Skipped 1 page due to buffer pins. – csdev Jul 17 '17 at 21:47
• Do you have any open transactions while all this is happening? Either your own transaction is still locking some row, or you (probably) have some kind of concurrency problem. – joanolo Jul 17 '17 at 21:55
|
2019-10-16 15:41:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3108122646808624, "perplexity": 1538.323617139996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668994.39/warc/CC-MAIN-20191016135759-20191016163259-00293.warc.gz"}
|
http://mathhelpforum.com/algebra/50988-zeros.html
|
# Math Help - Zeros
1. ## Zeros
Given a cubic equation, such as x^3-2x, how many rational and irrational zeros are there?
How do I figure this out?
2. There can be as many zeros as the number of the degree
so there are 3 or less zeros
the descartes rule of signs states that if you count the sign changes of the function, the number of sign changes (or minus two of that number) is the number of positive roots (or roots at 0)
so x cubed - 2x has 1 sign change, so there must be 1 positive zero (if there are 3 sign changes, however, there could be 3 OR 1 positive zero)
take f(-x) and count the sign changes to find out the number of negative zeros
it becomes -x cubed + 2x and there is 1 sign change again, so there must be one negative zero
because there must be 3 zeros total, and we know there is 1 positive and 1 negative zero, we also know that there is one complex zero
EDIT oh you said rational and irrational (not complex) woops i wasn't paying attention might not have answered your question
IN THAT CASE:
we have x cubed - 2x
factor out an x
x (x squared - 2)
x=0 is a rational root
x squared=2
absolute value of x = sq. root of 2
x = positive or negative sq. root of 2
so there is a root at 0, sq. root of 2, and negative sq. root of 2
so 2 irrational and 1 rational roots
wait a minute where is the complex root? can anyone answer my question now? i've never had this happen before...
3. Hello,
Originally Posted by juldancer
Given a cubic equation, such as x^3-2x, how many rational and irrational zeros are there?
How do I figure this out?
$x^3-2x=x(x^2-2)$
use the difference of 2 squares : $x^2-2=(x-\sqrt{2})(x+\sqrt{2})$
Hence $x^3-2x=x(x-\sqrt{2})(x+\sqrt{2})$
----------------------------------------
In order to know how many there are, you can use Descartes' rule of signs : http://www.purplemath.com/modules/drofsign.htm
4. Originally Posted by juldancer
Given a cubic equation, such as x^3-2x, how many rational and irrational zeros are there?
How do I figure this out?
$x^3-2x=x(x-\sqrt{2})(z+\sqrt{2})$
So now you tell us how many rational and irrational roots there are.
RonL
5. Can anyone answer my error? I factored correctly and I've used DesCartes rule of Signs perfectly billions of times before...maybe I've just forgotten how to do it correctly but I can't find where I made my error
6. Hi mikedwd,
and we know there is 1 positive and 1 negative zero, we also know that there is one complex zero
This is not possible. If the coefficients of a polynomial are real, then if there is a complex zero, there is another complex zero, namely its conjugate.
The mistake is, in my opinion, in the fact that you count 0 as +0 and not -0. but I'm sorry, I don't know this rule enough to be able to spot more precisely the error...
7. Originally Posted by mikedwd
Can anyone answer my error? I factored correctly and I've used DesCartes rule of Signs perfectly billions of times before...maybe I've just forgotten how to do it correctly but I can't find where I made my error
There is one change of sign and one positive root. No problem.
(a zero root added to a polynomial does not alter the number of times that the signs change, and so is undetectable by the rule of signs)
RonL
8. hmm I'm sure I was told that a zero root is included with positive roots in the rule of signs, but I suppose that was wrong
anyway yes I just realized (im a bit slow today apparently) that you cannot have 1 complex zero...
|
2015-02-01 20:26:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9071328043937683, "perplexity": 537.4591088288238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122086930.99/warc/CC-MAIN-20150124175446-00237-ip-10-180-212-252.ec2.internal.warc.gz"}
|
https://www.scienceopen.com/document?vid=1bac3dbc-c38a-4c36-a2ab-0674c77f71c3
|
84
views
0
recommends
+1 Recommend
0 collections
0
shares
• Record: found
• Abstract: found
• Article: found
Is Open Access
# Adipose tissue transcriptomic signature highlights the pathological relevance of extracellular matrix in human obesity
Genome Biology
BioMed Central
Bookmark
There is no author summary for this article yet. Authors can add summaries to their articles on ScienceOpen to make them more accessible to a non-specialist audience.
### Abstract
Analysis of the transcriptomic signature of white adipose tissue in obese human subjects revealed increased interstitial fibrosis and an infiltration of inflammatory cells into the tissue.
### Abstract
##### Background
Investigations performed in mice and humans have acknowledged obesity as a low-grade inflammatory disease. Several molecular mechanisms have been convincingly shown to be involved in activating inflammatory processes and altering cell composition in white adipose tissue (WAT). However, the overall importance of these alterations, and their long-term impact on the metabolic functions of the WAT and on its morphology, remain unclear.
##### Results
Here, we analyzed the transcriptomic signature of the subcutaneous WAT in obese human subjects, in stable weight conditions and after weight loss following bariatric surgery. An original integrative functional genomics approach was applied to quantify relations between relevant structural and functional themes annotating differentially expressed genes in order to construct a comprehensive map of transcriptional interactions defining the obese WAT. These analyses highlighted a significant up-regulation of genes and biological themes related to extracellular matrix (ECM) constituents, including members of the integrin family, and suggested that these elements could play a major mediating role in a chain of interactions that connect local inflammatory phenomena to the alteration of WAT metabolic functions in obese subjects. Tissue and cellular investigations, driven by the analysis of transcriptional interactions, revealed an increased amount of interstitial fibrosis in obese WAT, associated with an infiltration of different types of inflammatory cells, and suggest that phenotypic alterations of human pre-adipocytes, induced by a pro-inflammatory environment, may lead to an excessive synthesis of ECM components.
##### Conclusion
This study opens new perspectives in understanding the biology of human WAT and its pathologic changes indicative of tissue deterioration associated with the development of obesity.
### Most cited references57
• Record: found
### Gene ontology: tool for the unification of biology. The Gene Ontology Consortium.
(2000)
Bookmark
• Record: found
• Abstract: found
• Article: found
Is Open Access
### In silico prediction of protein-protein interactions in human macrophages
(2015)
Background: Protein-protein interaction (PPI) network analyses are highly valuable in deciphering and understanding the intricate organisation of cellular functions. Nevertheless, the majority of available protein-protein interaction networks are context-less, i.e. without any reference to the spatial, temporal or physiological conditions in which the interactions may occur. In this work, we are proposing a protocol to infer the most likely protein-protein interaction (PPI) network in human macrophages. Results: We integrated the PPI dataset from the Agile Protein Interaction DataAnalyzer (APID) with different meta-data to infer a contextualized macrophage-specific interactome using a combination of statistical methods. The obtained interactome is enriched in experimentally verified interactions and in proteins involved in macrophage-related biological processes (i.e. immune response activation, regulation of apoptosis). As a case study, we used the contextualized interactome to highlight the cellular processes induced upon Mycobacterium tuberculosis infection. Conclusion: Our work confirms that contextualizing interactomes improves the biological significance of bioinformatic analyses. More specifically, studying such inferred network rather than focusing at the gene expression level only, is informative on the processes involved in the host response. Indeed, important immune features such as apoptosis are solely highlighted when the spotlight is on the protein interaction level.
Bookmark
• Record: found
• Abstract: found
• Article: found
Is Open Access
### Functional cartography of complex metabolic networks
, (2005)
High-throughput techniques are leading to an explosive growth in the size of biological databases and creating the opportunity to revolutionize our understanding of life and disease. Interpretation of these data remains, however, a major scientific challenge. Here, we propose a methodology that enables us to extract and display information contained in complex networks. Specifically, we demonstrate that one can (i) find functional modules in complex networks, and (ii) classify nodes into universal roles according to their pattern of intra- and inter-module connections. The method thus yields a cartographic representation'' of complex networks. Metabolic networks are among the most challenging biological networks and, arguably, the ones with more potential for immediate applicability. We use our method to analyze the metabolic networks of twelve organisms from three different super-kingdoms. We find that, typically, 80% of the nodes are only connected to other nodes within their respective modules, and that nodes with different roles are affected by different evolutionary constraints and pressures. Remarkably, we find that low-degree metabolites that connect different modules are more conserved than hubs whose links are mostly within a single module.
Bookmark
### Author and article information
###### Journal
Genome Biol
Genome Biology
BioMed Central
1465-6906
1465-6914
2008
21 January 2008
: 9
: 1
: R14
###### Affiliations
[1 ]INSERM, UMR-S 872, Les Cordeliers, Eq. 7 Nutriomique and Eq. 13, Paris, F-75006 France
[2 ]Pierre et Marie Curie-Paris 6 University, Cordeliers Research Center, UMR-S 872, Paris, F-75006 France
[3 ]Paris Descartes University, UMR-S 872, Paris, F-75006 France
[4 ]Assistance Publique-Hôpitaux de Paris (AP-HP), Pitié Salpêtrière Hospital, Nutrition and Endocrinology department, Paris, F-75013 France
[5 ]Franco-Czech Laboratory for Clinical Research on Obesity, INSERM and 3rd Faculty of Medicine, Charles University, Prague, CZ-10000, Czech Republic
[6 ]INSERM, U858, Obesity Research Laboratory, I2MR, Toulouse, F-31432 France
[7 ]Paul Sabatier University, Louis Bugnard Institute IFR31, Toulouse, F-31432 France
[8 ]Centre Hospitalier Universitaire de Toulouse, Toulouse, F-31059 France
[9 ]Assistance Publique-Hôpitaux de Paris (AP-HP), Beaujon Hospital, Pathology department, Clichy, F-92110 France
[10 ]CNRS, UMR 8149, Clichy, F-92110 France
[11 ]IRD UR Géodes, Centre IRD de l'Ile de France, Bondy, F-93143 France
###### Article
gb-2008-9-1-r14
10.1186/gb-2008-9-1-r14
2395253
18208606
|
2021-04-12 13:38:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39459463953971863, "perplexity": 10812.775211795568}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038067400.24/warc/CC-MAIN-20210412113508-20210412143508-00535.warc.gz"}
|
http://tomcuchta.com/teach/classes/2016/MATH-1190-Fall-2016-FairmontState/homework/hw8.php
|
[cv] [orcid] [github] [twitter] [researchgate] [youtube] [play chess with me]
Back to the class
Section 3.6 #11: Sketch the graph of $f(x)=x+\dfrac{32}{x^2}$.
Solution: First we take two derivatives of $f$: the first derivative $$f'(x)=1-\dfrac{64}{x^3},$$ and the second derivative $$f''(x)=\dfrac{192}{x^4}.$$ Find the critical points: firstly, $f'$ is undefined at $x=0$ and secondly, consider the equation $f'(x) \stackrel{\rm{set}}{=} 0$: $$1- \dfrac{64}{x^3}=0.$$ This yields the equation $$1 = \dfrac{64}{x^3}.$$ Multiply by $x^3$ (keeping in mind we are now taking $x \neq 0$) to get $$x^3 = 64.$$ Therefore $$x = \sqrt[3]{64}=4.$$ (note there is no $\pm$ here... that only appears when taking an even root!)
We have found the following critical points: $x=0$ and $x=4$. We should also find possible inflection points by asking where $f''$ is undefined, which is at $x=0$, and also consider solutions of the equation $f''(x) \stackrel{\rm{set}}{=} 0$: $$\dfrac{192}{x^4} = 0.$$ Multiplying by $x^4$ (which we are thinking of nonzero here) yields the equation $192=0$ which has no solutions. So the only possible inflection point is at $x=0$.
Now find the intervals of increasing and decreasing:
Now find the concavity behavior:
Finally find any $x$-intercepts by solving $f(x) \stackrel{\rm{set}}{=} 0$: $$x + \dfrac{32}{x^2} = 0.$$ Multiply by $x^2$ to get $x^3 + 32 = 0.$ To solve, subtract by $32$ and take the cube root to get $$x = \sqrt[3]{-32}=-\sqrt[3]{32}.$$ This means the point $(-\sqrt[3]{32},0)$ is on the graph of $f$. Also check to see if a $y$-intercept exists by trying to evaluate $f(0)$ --- it turns out that $f$ is not defined at zero and has an asymptote there.
Take all of this information together to sketch the graph:
Section 3.7 #8: Find two positive numbers that that satisfies the following property: the sum of the first number squared and the second number is $54$ and the product is a maximum.
Solution: Call these numbers $x$ and $y$. We need to optimize the value $$P=xy$$ subject to the constraint that $$x^2+y=54.$$ It is most natural to solve the constraint for $y$ and plug it into the formula for $P$: $$y=54-x^2$$ yielding $$P=x(54-x^2)=54x-x^3.$$ We must now maximize $P$. We will do so with the second derivative test (the first derivative test would also work). Compute $$P'=54-3x^2$$ and $$P''=-6x.$$ First find the critical points by solving $P' \stackrel{\rm{set}}=0$: $$54 = 3x^2,$$ so since $\dfrac{54}{3}=18$, we get $$18 = x^2.$$ Solving for $x$ yields the two critical points $$x = \pm \sqrt{18} = \pm 3 \sqrt{2}.$$ We are told that $x$ and $y$ must be positive numbers in the wording of the problem, so we throw out the negative solution, leaving us the only critical point $x=3\sqrt{2}$. Now to use the second derivative test, plug these critical points into $P''$ to get $$P''(3\sqrt{2})=-6(3\sqrt{2})=-18\sqrt{2} < 0,$$ showing that $P$ has a maximium at $x=3\sqrt{2}$. The problem asked for both numbers, so now we may find the value of $y$ by plugging into the equation $y=54-x^2$: $$y=54-(3\sqrt{2})^2=54-18=36.$$
Section 3.7 #35: The sum of the perimeters of an equilateral triangle and a square is $10$. Find the dimensions of the triangle and the square that produce a minimum total area.
Solution: First draw the triangle and the square. Label the sides appropriately:
Now recall that the area of this square is $\beta^2$ and the area of any triangle is $\dfrac{1}{2} \rm{base} \times \rm{height}$. We must find the height of the given triangle (the base is $\alpha$). To do this, bisect the triangle to get the following:
To find the height (we labeled it as "$?$"), use the Pythagorean theorem to get $$\left( \dfrac{\alpha}{2} \right)^2 + ?^2 = \alpha^2,$$ which yields $$?^2 = \alpha^2 - \dfrac{\alpha^2}{4} = \dfrac{3}{4} \alpha^2.$$ Taking the square root yields $$? = \pm \sqrt{ \dfrac{3}{4} \alpha^2} = \pm \dfrac{\sqrt{3}}{2} \alpha.$$ Since $?$ represents the length of a triangle, we take the positive value. This means the equilateral triangle has base $\alpha$ and height $\dfrac{\sqrt{3}}{2}\alpha$. Hence the area of the equilateral triangle is $$\dfrac{1}{2} \cdot \alpha \cdot \dfrac{\sqrt{3}}{2} \alpha = \dfrac{\sqrt{3}}{4}\alpha^2.$$ Therefore the quantity we must optimize is $$A = \mathrm{area}_{\mathrm{square}}+\mathrm{area}_{\mathrm{triangle}}=\beta^2 + \dfrac{\sqrt{3}}{4} \alpha^2.$$ What is the constraint? We were told the sum of the perimeters of the figures is $10$. The perimeter of the square is $4\beta$ and the perimeter of the triangle is $3\alpha$, therefore the constraint is $$4\beta + 3\alpha = 10.$$ Solving the constraint for, say, $\beta$, yields $$\beta=\dfrac{10-3\alpha}{4}.$$ Consequently $$\beta^2=\dfrac{100-60\alpha+9\alpha^2}{16}.$$ Plug this into our equation for the area to get $$A=\dfrac{100-60\alpha+9\alpha^2}{16} + \dfrac{4\sqrt{3}}{16} \alpha^2 = \dfrac{(9+4\sqrt{3})\alpha^2-60\alpha+100}{16}.$$ To find the critical point, differentiate and set the derivative equal to zero to get $$A'=\dfrac{(18+8\sqrt{3})\alpha-60}{16} \stackrel{\rm{set}}{=} 0.$$ Solve this by multiplying by $16$ to get $$(18+8\sqrt{3})\alpha - 60 = 0.$$ Add $60$ and divide by $18+8\sqrt{3}$ to get $$\alpha = \dfrac{60}{18+8\sqrt{3}} = \dfrac{30}{9+4\sqrt{3}}.$$ To show this is a minimum, we will use the second derivative test. Compute $$A''=\dfrac{18+8\sqrt{3}}{16},$$ a constant function. Therefore if we plug the critical point in to $A''$ we get $$A'' \left( \dfrac{30}{9+4\sqrt{3}} \right) = \dfrac{18+8\sqrt{3}}{16} > 0.$$ So the critical point is a minimum ("horizontal tangent and concave up --> minimum"), as desired. Finally, the value of $\beta$ may be found by plugging into our constraint equation: $$\beta = \dfrac{10-3\left(\frac{30}{9+4\sqrt{3}} \right)}{4} = \dfrac{10\sqrt{3}}{9+4\sqrt{3}}.$$
Problem B:Find all functions $f$ with the property that $f'(x)=3x^2+2x+4$, that is, find $D^{-1}(3x^2+2x+4)$.
Solution: Using the rules of anti-differentiation we get $$\begin{array}{ll} D^{-1}(3x^2+2x+4) &= 3 D^{-1}(x^2) + 2 D^{-1}(x) + 4 D^{-1}(1) \\ &= 3 \left( \dfrac{x^3}{3} \right) + 2 \left( \dfrac{x^2}{2} \right) + 4x + C \\ &= x^3 + x^2 + 4x + C. \end{array}$$
Problem C: Compute the sum $\displaystyle\sum_{k=0}^5 k^2+1$.
Solution: The sum is $$\begin{array}{ll} \displaystyle\sum_{k=0}^5 k^2+1 &= (0^2+1) + (1^2+1) + (2^2+1) + (3^2+1) + (4^2+1) + (5^2+1) \\ &= 1 + 2 + 5 + 10 + 17 + 26 \\ &= 61. \end{array}$$
|
2021-10-25 20:24:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8977340459823608, "perplexity": 125.40218625730483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587767.18/warc/CC-MAIN-20211025185311-20211025215311-00361.warc.gz"}
|
https://eprint.iacr.org/2018/1227
|
## Cryptology ePrint Archive: Report 2018/1227
Efficient Information Theoretic Multi-Party Computation from Oblivious Linear Evaluation
Louis Cianciullo and Hossein Ghodosi
Abstract: Oblivious linear evaluation (OLE) is a two party protocol that allows a receiver to compute an evaluation of a sender's private, degree $1$ polynomial, without letting the sender learn the evaluation point. OLE is a special case of oblivious polynomial evaluation (OPE) which was first introduced by Naor and Pinkas in 1999. In this article we utilise OLE for the purpose of computing multiplication in multi-party computation (MPC).
MPC allows a set of $n$ mutually distrustful parties to privately compute any given function across their private inputs, even if up to $t<n$ of these participants are corrupted and controlled by an external adversary. In terms of efficiency and communication complexity, multiplication in MPC has always been a large bottleneck. The typical method employed by most current protocols has been to utilise Beaver's method, which relies on some precomputed information. In this paper we introduce an OLE-based MPC protocol which also relies on some precomputed information.
Our proposed protocol has a more efficient communication complexity than Beaver's protocol by a multiplicative factor of $t$. Furthermore, to compute a share to a multiplication, a participant in our protocol need only communicate with one other participant; unlike Beaver's protocol which requires a participant to contact at least $t$ other participants.
Category / Keywords: cryptographic protocols / information theoretic, multi-party computation, oblivious linear evaluation
Original Publication (with minor differences): The 12th WISTP International Conference on Information Security Theory and Practice (WISTP'2018)
|
2020-02-17 10:29:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6760899424552917, "perplexity": 2247.5714526388597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141806.26/warc/CC-MAIN-20200217085334-20200217115334-00468.warc.gz"}
|
https://clay6.com/qa/74904/figure-shows-a-metal-rod-pq-resting-on-the-smooth-rail-ab-and-positioned-be
|
Answer
Comment
Share
Q)
# Figure shows a metal rod PQ resting on the smooth rail AB and positioned between the poles of a permanent magnet. The rail, the rod and the magnetic field are in three mutual perpendicular directions. A galvanometer G connects the rails through a switch K. Length of the rod = 15 cm, B = 0.50 T, resistance of the closed loop containing the rod = $9.0 m \Omega$. Assume the field to be uniform. How much power is required at the same speed, to keep the rod moving at the same speed. (=12 cm/s) when k is closed ?
$(a)\; 90 \times 10^{-3} W$
$(b)\; 0.9 \times 10^{-3} W$
$(c)\; 900 \times 10^{-3} W$
$(d)\; 9 \times 10^{-3} W$
|
2020-09-20 08:32:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8172600865364075, "perplexity": 584.4629059206491}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400196999.30/warc/CC-MAIN-20200920062737-20200920092737-00514.warc.gz"}
|
https://www.physicsforums.com/threads/does-the-photon-have-a-4-velocity-in-a-medium.843312/
|
Does the photon have a 4-velocity in a medium?
Tags:
1. Nov 15, 2015
PFfan01
From classical electrodynamics textbooks, we know that the Fizeau experiment supports relativistic 4-velocity addition rule. But a recently-published paper says that the photon does not have a 4-velocity. See: "Self-consistent theory for a plane wave in a moving medium and light-momentum criterion", http://www.nrcresearchpress.com/doi/10.1139/cjp-2015-0167#.Vki8xGzovIU
I wonder who's right?
2. Nov 15, 2015
PWiz
I thought this was a well known fact.
3. Nov 15, 2015
PFfan01
What do you mean? Do you mean the Fizeau experiment is not a support in relativistic 4-velocity addition rule?
4. Nov 15, 2015
PWiz
No, I'm saying that the four-velocity is not defined for a photon.
5. Nov 15, 2015
PFfan01
There is a definition of four-velocity of light in the book by W. Pauli, Theory of relativity, (Pergamon Press, London, 1958), Eq. (14), p. 18, Sec. 6. Seems the definition is the same as that for a matter particle.
6. Nov 15, 2015
PWiz
I don't have the book with me. Can you post the definition?
If the definition is the same as that for a massive particle, then it can't be right. Photons move on null lines and experience 0 proper time. Since four velocity is the proper time derivative of four position, it is not defined for a photon.
7. Nov 15, 2015
Staff: Mentor
I agree with PWiz. Another way to think of it is that the four velocity is the four momentum divided by the mass, which is 0. Or that the four velocity is the unit tangent to the worldline and a null worldline can only have null tangents, not unit tangents.
8. Nov 15, 2015
PFfan01
If in free space, you are right. In a medium, the light speed is less than the vacuum light speed. In the book by Pauli, Fizeau running water experiment is used as a support in relativistic 4-velocity addition rule.
9. Nov 15, 2015
PWiz
But a photon always moves at $c$ regardless of the medium (it interacts with other particles in the medium and on a macroscopic scale you can say that the average speed of light reduces, but microscopically individual photons always move at the same speed). The photon still experiences 0 proper time, and you still cannot define its four velocity.
10. Nov 15, 2015
Staff: Mentor
I think that you probably want to ask about classical light waves rather than photons.
In a medium a plane wave will have a phase velocity which is less than c. You can definitely use the relativistic velocity addition formula on the phase velocity, so I assume that you could make a phase four velocity. Although I don't recall seeing anyone do that before.
11. Nov 16, 2015
PWiz
But then there is no contradiction. It might be possible to define a four velocity if you deal with light classically in a medium, and still have an undefined four velocity for the photon treatment.
So addressing the OP, I guess both statements can be right. I would still like to see the four velocity definition for the classical treatment of light in a medium though.
Last edited: Nov 16, 2015
12. Nov 16, 2015
Staff: Mentor
Yes. I agree.
13. Nov 16, 2015
PFfan01
Very interesting argument, but could you please show any references for your argument? Thanks a lot.
PS: The paper by Leonhardt, Ulf (2006), "Momentum in an uncertain light", Nature 444 (7121): 823, doi:https://dx.doi.org/10.1038%2F444823a [Broken] , says that the photon in a dielectric medium moves at the dielectric light speed, but the author did not tell why.
Last edited by a moderator: May 7, 2017
14. Nov 16, 2015
PWiz
References for the 2nd postulate of relativity or for atomic spacing?
15. Nov 16, 2015
PFfan01
The references for your statement that "a photon always moves at c regardless of the medium".
PS: In my understanding, Einstein's second hypothesis is the constancy of light speed in free space.
16. Nov 16, 2015
PWiz
So are you saying that there is no atomic spacing in a medium? Think about it. Between electron interactions in a medium, what does a photon move in? Reading post #9 again might help.
17. Nov 16, 2015
PFfan01
1. I never said "there is no atomic spacing in a medium".
2. I am just asking you to give any references for your statement that "a photon always moves at c regardless of the medium".
3. In fact, it is enough for you to tell me whether your statement is your reasoning from Einstein's second hypothesis or there are any references to support it.
Even if your statement is your reasoning, I am not able to judge whether it is correct or not, because it is far beyond my knowledge.
Sorry.
18. Nov 16, 2015
PWiz
A photon is always moving through empty space (when a photon is moving in a medium it's actually moving through the atomic spaces, which is nothing but empty space) or interacting with other particles. The 2nd postulate says that light always moves at $c$ in empty space (as measured in an inertial frame).
I just restated two well known facts. From these two facts, it follows that a photon always moves at $c$. It's just that the interactions of the photon with other particles in a medium "delay" the photon, so the effective speed of light seems to reduce in any particular medium compared to a vacuum (but the photon moves at $c$ between the interactions). That's all I'm saying.
P.S. I'm not trying to be confrontational here.
Last edited: Nov 16, 2015
19. Nov 16, 2015
PFfan01
So your statement that "a photon always moves at c regardless of the medium" is just your reasoning, without any references to support it. Right?
20. Nov 16, 2015
Staff: Mentor
Really? There are hard boundaries around the atoms that delimit them from "empty space"? And the photons never cross those boundaries? See further comments below.
Not really. You restated a common model for photon propagation in a medium, but that's a lot different from "well-known fact".
In fact, although it's a common model, it's not actually correct. For example:
The interactions you are talking about here are the absorption and emission of photons by atoms in the medium. These interactions do not "delay" one photon; they destroy one photon (when it's absorbed) and create a second photon (when it's emitted). (Note, btw, that the absorption and emission is actually done by electrons in the orbitals of the atom, which means that the photons do in fact have to cross the "boundary" of the atom--the electrons aren't all sitting on the boundary, they are in the interior.)
It is true that, in this somewhat more accurate model, the photons move at $c$ between interactions. However, the model is, as I just said, only somewhat more accurate. We don't actually measure the speed of the photon between interactions; we can't. And if we make our model more accurate still, by bringing in more quantum mechanical details, we will find that the concept of the "speed" of the photon between interactions isn't even well-defined; the quantum amplitudes will have contributions from off shell virtual photons.
The moral is to be very careful what you think of as a "well-known fact".
|
2018-03-18 05:10:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6955033540725708, "perplexity": 626.8495176208282}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645513.14/warc/CC-MAIN-20180318032649-20180318052649-00118.warc.gz"}
|
https://www.shaalaa.com/question-bank-solutions/explain-why-maxima-become-weaker-weaker-increasing-n-fraunhofer-diffraction-due-single-slit_4382
|
# Explain Why the Maxima Become Weaker and Weaker with Increasing N - Physics
Explain why the maxima at theta=(n+1/2)lambda/a become weaker and weaker with increasing n
#### Solution
On increasing the value of n, the part of slit contributing to the maximum decreases. Hence, the maximum becomes weaker.
Concept: Fraunhofer Diffraction Due to a Single Slit
Is there an error in this question or solution?
|
2021-10-28 18:31:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3679144084453583, "perplexity": 947.9029306511835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588398.42/warc/CC-MAIN-20211028162638-20211028192638-00316.warc.gz"}
|
http://windfarmdesigns.com/iec-exclusion-areas/
|
# 6. IEC exclusion areas
IEC exclusion areas are represented by the gray areas in the figure below. The white areas show valid turbine positions. IEC exclusion areas can be calculated as a combination of type and sectors. If only one sector and one constraint type is involved, a map of absolute values can be displayed. Otherwise, the constraints and sectors are superimposed as logical maps.
IEC Exclusion areas
Press the IEC Constraints button to view/edit the constraint limits, and press the Extract button after selecting one or several of the following IEC Constraints, to generate the Constraints Map . By IEC constraints and exlusion areas, we refer to the IEC 61400-1 3rd ed. standard (2005), plus its amendment (2010). The standard defines design requirements for wind turbines, including ranges of wind quality aspects that the turbines should tolerate. The ones considered in ParkOptimizer are:
IEC constraints definitions
IEC constraints definitions
## 6.1 Terrain inclination
Terrain inclination is not a part of the IEC standard, but is useful for practical purposes. Terrain inclination are areas unsuitable for turbine placements because of steep slopes and/or steep road access . These constraints are not absolute, and can be remediated by proper civil works at some increased costs. Moreover, steep terrain causes flow inclinations as well (see next section), and these constraints often overlap.
At an early stage of the project development, it is useful to get an indicator of the problem with terrain fro screening of sites, and for initial layouts and derived energy yield estimates.
Terrain inclination is derived from the WindSim input files, \dtm\view_inclination.scl
## 6.2 Flow inclination
Flow inclination uses the uses vertical and horisontal wind speeds from Input files to calculate the flow inclination. The flow inclination is calculated as the maximum inclination over the sectors s at each location (x,y). If the maximum inclination exceeds the constraints defined in IEC constraints definitions (see figure above) then the location is an exclusion and presented as a gray point on the exclusion map.
## 6.3 Shear
The shear calculation uses wind speed Input Files, at heights defined with the Hight Definition button. In addition, the Wind Resource file selected in the Energy Map tab will provide the frequency of each sector : f(s).
Let $v_1, v_2 and v_3$ be the wind speed values corresponding to heights $h_1= Simulation Height+Shear Deviation$, $h_2=Simulation Height$ and $h_3=Simulation Height -Shear Deviation$ respectively. Further, let s be each of the sectors listed in the Use Sectors box of
Constraints Map tab. Then the shear coefficient $\alpha(s)$for sector s , at each point is the (Least Squares) solution of the equations
$v_1(s)/v_2(s) = (h_1/h_2)^{\alpha(s)}$
$v_3(s)/v_2(s) = (h_3/h_2)^{\alpha(s)}$
The shear coefficient $\alpha$ is then the sum over the Use Sectors $\sum_s \alpha(s)\cdot f(s)$.
Note that if the Wind Resource file is not defined, the shear coefficient will be set to the maximum over the sectors of $\alpha(s)$.
## 6.4 Turbulence
Turbulence is calculated from the turbulence intensity files of Input Files at the Simulation Hight, scaled according to a tws measurement file (if defined, in which case a turbulence intensity at the measurement height is also required). The Wind Resource file selected in the Energy Map tab provides frequency distribution of the sectors.
Turbulence GUI
Select the tws measurement file by pressing turbulence Setup:
First, select a .tws wind measurement file containing 10-min wind speed time series [m/s] and its associated standard deviations sd [m/s] within the 10-min time wind speed interval. The .tws file also contains information about its location and measurement height.
The exclusion areas are computed as Ambient effective turbulence intensity, defined as:
(1) $I_a/V_{hub} = {1\over V_{hub}} \cdot[\sum_s \sigma_c(s,V_{hub})^m f(s)]^{1/m}$
where
-m is the Wöhler exponent (default value is 10 for turbine blades (composite), 5 for tower (steel) )
$\sigma_c$ the characteristic turbulence latex]\sigma_c =1.28\sigma_\sigma[/latex], i.e. the 90th percentile of the turbulence standard deviation
$V_{hub}$ is wind speed at hub height.
Method 1: Turbulence intensity map from WindSim is used directly.
To use this method, uncheck “Use Measurement Mast for Turbulence calculation”. The Wöhler coefficient and terrain complexity check still applies to the results. This method is recommended when there is no mast data available, or the mast data is of poor quality.
According to WindSim documentation, $TI = {100}\cdot {{\sqrt{4/3 KE}}\over{\sqrt{UCRT^2+VCRT^2}}}$ (%) for each sector s
the resulting $TI(s)=\sigma_T(s,V_{hub})$ is then used to calculate the the Ambient effective turbulence (see equation (1) above)
Method 2: Turbulence intensity map from WindSim is scaled with mast data.
To use this method, check “Use Measurement Mast for Turbulence calculation”. The Wöhler coefficient and terrain complexity check. This method is recommended when there is no mast data available, or the mast data is of poor quality.
The turbulence at each point ( assuming a measurement file is used ) is calculated as follows. For each sector s, let TI(s) be the turbulence intensity at the Simulation Height, $TI_{meas}(s)$ the turbulence intensity from the measurement file (at the measurement height),$TI_{ws}(s)$ the WindSim turbulence intensity at the same coordinate and height as $TI_{meas}$, and $f(s)$ the sector frequency from the Wind Resource file, then the scaled turbulence intensity $TI_{scaled}(s)$ is the frequency weighted sum over sectors s in Use Sectors
$TI_{scaled}(s) = {TI(s) \cdot f(s) \cdot {TI_{meas}(s)} \over TI_{ws}(s)}$
The resulting $TI_{scaled}(s)$ is then used to calculate
If the wind resource file is not defined, the turbulence at each point is set to the maximum over s in Use Sectors of TI(s)
The parameter ic is used as a correction factor in complex terrain according to the IEC 61400-1 standard. The standard acknowledges that linear flow models underestimate turbulence in complex terrain, therefore a safety/correction factor of 15% increased turbulence is added if the terrain is complex. (For definition of complex terrain, see IEC 61400-1 3rd ed.). WindSim is hower a CFD tool believed to better capture turbulence, but there are still unresolved uncertainties related to turbulence calculations from CFD models, as well as extrapolation of turbulence from mast data.
## 6.5 Extreme wind
Extreme wind estimation is performed by the method of Independent Storms. The extreme-wind value is
obtained based on a measurement tws file and these values are scaled within the park according to wind speeds
and directions of WindSim Input Files at the Simulation Height.
Note that the measurement tws file is assumed to containe 10-minute windspeed averages, and the total duration
should not be less than 3 years
Note also that generating IEC constraints for the whole park will require a fair amount of computation time.
The extreme-wind values for a particular layout will be generated for the IEC_layout report ( Layout->Figure->
Report->IEC constraints for layout ), provided a tws measurement file has been selected. The computation time
is in this case much shorter. See chapter Layout optimization, section xx
Select the tws measurement file via the Setup button :
Extreme wind GUI
Extreme-wind estimation at the location of a measurement mast
The tws_file is assumed to contain 10-minute-averaged wind speed data.
$v_50$ value – the extreme wind estimate:
– The 10-minute average wind speed that is reached on average once every 50 years.
Let – $v(t)$ be the wind speed data from the tws_file .
– $R$ be the duration in years of the tws_file, i.e. the time from start to end of the file subtracted
the duration of gaps in the data.
$v_{max,is}$, maximal values from independent storms:
A list of maximal values from independent storms, $v_{max,is}$, is extracted from $v(t)$, where an independent storm is
determined from the 10-hour averaged values of $v(t), v_{10h}(t)$, and is the maximum of ${v(t_{start}),\dots,v(t_{end})}$, where $t_{start}$ and $t_{start}$ defines an interval where $v_{10h}(t)$ is continuously above $v_{lull} = 5 \enspace \frac{m}{s}$, see figure below.
Independent storms
The number of maximal values extracted from v(t) by this method is typically 100 / year.
$v_{max}$, subset of values from $v_{max,is}$:
The list of maximal values used in the estimation, $v_{max}$, is a selection of the highest values in $v_{max,is}$.
If $v_{max,is}$ is sorted in increasing order, then
$v_{max} = { v_{max,is}(m) , \dots , v_{max,is}(N) }$,
where m is the closest integer solution to
${m \over {N+1}^r} = {1 \over 11 }$
Here N is the number of elements in $v_{max,is}$, $r = N / R$ is the storm rate (events/year).
$v_{50}$
We assume that $v_{max}$ is Gumbel distributed, so the cumulative probability distribution has the form
$F(v) = exp(-exp( -(v - b) / a )$
So that
(1) $v = a(-\ln( -\ln(F(v)))+b = ay+b$
The parameters a and b are determined by weighted least-squares:
Let $y(1) = ln(R) + 0.557$, and $y(i)=y(i-1)-\frac{1}{i-1}$, for $i = 2 , \dots , M$( # elements in $v_{max}$ ),
and set the variance, $s^2$, of the i-th sample point according to $s^2(1) = (3.14^2 / 6)-\frac{1}{M+1}$ and
$s^2(i)=(s^2(i-1))^2-\frac{1}{(i-1)^2}$, for $i = 2 , \dots , M$
We consider v as the independent variable, so that we find the weighted least-squares fit of the line (1)
given the points $(v_{max}(i) ,y(i))$ according to
$y(i) = a'v_{max}(i) + b'$ i=1,\dots,M[/latex], with $a=\frac{1}{a'},\quad b=-b'/a'$,
which is the minimum of $S(a',b') = \sum_i w(i)\cdot [y(i)-a'v_{max}(i)-b']^2$, where the weights are
$w(i) = \frac{1/s^2(i)}{\sum_j ( 1/s^2(j) }$ ( so $\sum_i w(i)=1$ ).
The a’, b’ values minimizing S is found by solving the two equations $dS/da' = 0$ and $dS/db' = 0$.
The $v_{50}$ estimation is then given by
$v_{50} =[-ln(-ln(0.98))]a + b$
The regression is illustrated in the following figure.
Extreme wind Gumbel plot
Extreme-wind at locations in area close to measurement mast
The $v_{50}$ values at locations x, y and height h above ground are estimated by scaling each measurement entry
in the tws_file according to WindSim simulations of the wind-field – magnitude and direction, and performing
a $v_{50}$ estimation on these scaled tws values.
Assume the measurement mast is located at $x_{tws}, y_{tws}$, with measurements at height h. Each measurement
is a magnitude, direction pair $v_{tws}, d_{tws}$
Let $v_{ws}(x,y,s)$ and $d_{ws}(x,y,s)$ be WindSim simulations of the wind speed and direction at height h for sectors
$s = 1 , \cdots , 12$ ( 0′ , … , 330′ ) (or the chosen set of sectors)
For each sector s , there is a scale factor for the location x, y, given by
$f(x,y,s) = \frac{v_{ws}(x,y,s)}{{v_{ws}(x_{tws},y_{tws},s)}}$
valid for scaling v_{tws} values in the directions
$d_{ws}(x_{tws},y_{tws},s)$
Let $f(x,y,d)$ be the interpolated value of f for values of d other than $d_{ws}(x_{tws},y_{tws},s), s=1,\cdots,12$
The $v_{50}$ value at location x, y is then estimated:
– Adjust each $v_{tws}$ in the tws file by $v_{tws}(i) = v_{tws}(i) \cdot f(x,y,d_{tws}(i))$
– Perform the $v_50$ estimation on the adjusted $v_{tws}$ values
## 6.6 Other constraints
Constraints can be added to the Constraints Map by:
• Import Shape file (.shp) The shape file can define several polygons.
• By defining constraints with a drawing editor, pressing New. When a constraint is added with New, it will automatically be saved to the Results Directory, and can later be read with the Open pgn button.
Draw constraints
|
2018-10-20 13:05:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 81, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8003098964691162, "perplexity": 3255.2175918897674}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512750.15/warc/CC-MAIN-20181020121719-20181020143219-00500.warc.gz"}
|
https://paws-public.wmflabs.org/paws-public/47694692/03%20-%20Wikidata.ipynb
|
# Wikidata¶
Wikidata is one of the newer families added to the wikimedia projects. It acts as a central storage for structured data for all the wikimedia projects. It solves 2 major problems that wikimedia projects used to face:
• Copied information (If the information changes it has to be manually edited in each website)
• Unstructured data (A better method of storing data which can be queried using SQL or similar)
In [5]:
import pywikibot
In [6]:
wikidata = pywikibot.Site('wikidata', 'wikidata')
wikidata
Out[6]:
DataSite("wikidata", "wikidata")
In [8]:
testwikidata = pywikibot.Site('test', 'wikidata')
testwikidata
Out[8]:
DataSite("test", "wikidata")
# 1. Items and Properties¶
In wikidata, every page is either an item or a property.
Items are used to represent all the things in human knowledge: including topics, concepts, and objects. For example; color (Q1075), Albert Einstein (Q937), Earth (Q2), and cat (Q146) are all considered as items in Wikidata.
Properties are the things that describe and define a item. Each data bit related to an item is a type of property. Properties are different for different types of items. Examples of properties for Python (Q28865) are: license (P275), bug tracking system (P1401), official website (P856), Stack exchange tag (P1482).
The wikidata API helps to query data form wikidata using SparkQL to filter properties. So, for example you can find all countries in the world which have a population between 10 million to 300 million with just 1 query. In the earlier category interface, there would have to be a category for this information or you would have to parse evry country's page to find the population using Natural language processing!
### Exercise - Contribute to wikidata¶
Find an item on wikidata and edit it to add some additional information. Some tips on finding an item:
• Your favourite book or author
• Your city or state, or a popular place near your hometown
• A software, tool, or programming language that you like using
• If you can type in a native language, try translating the label/title of an item
# 2. Examples of bots with wikidata¶
## The wikidata game¶
The wikidata game is an example of a bot which helps users contribute better to wikidata. You can check out the wikidata game at https://tools.wmflabs.org/wikidata-game
The wikidata game finds possible pages which do not have a certain type of information using the structured queries (For example human items which have no gender) and shows the wikipedia page related to that item. Then the user is expected to identify a specific property of the item (For example male or female for the property gender).
## The wikidata resonator¶
Wikidata resonator is a script which pulls data from wikidata and joins all the property data of an item to form a descriptive paragraph about the item. You can check it out at https://tools.wmflabs.org/reasonator/
Other than forming a descriptive paragraph of the item, it also groups similar properites like "Relative" group, "External sources" etc. based on some simple conditions. It also creates a timeline of the item if possible based on the properties that would have a datetime data type. It also generates a QR Code to use for the related wikipedia page and shows related images pulled from the related commons.wikimedia page!
# 3. Fetching data from wikidata using pywikibot¶
The first thing we're going to do is figure out how to get data from wikidata using pywikibot. Here, it's not like a generic mediawiki website, where the text of the page is pulled. Here, the data is structured. We use a ItemPage class in pywikibot which can handle these items in a better way:
In [ ]:
itempage = pywikibot.ItemPage(wikidata, "Q1356097") # Q42 is Douglas Adams
itempagels
In wikidata, the page.text won't work like other mediawiki websites where it gives a string of the whole content in the page. The data and properties are stored in Python dictionary structure:
In [ ]:
itempage.get()
If you want to get the data using the title of the page rather than the item ID, we can get the wikidata article associated to a wikipedia page using:
In [ ]:
itempage == pywikibot.ItemPage.fromPage(pywikibot.Page(pywikibot.Site('en', 'wikipedia'), 'Douglas Adams'))
In [ ]:
Let's take a closer look at the data items given by an item page. It is a dictionary with the following keys:
In [ ]:
itemdata = itempage.get()
itemdata.keys()
### Labels, Descriptions and Aliases¶
Labels are the name or title of the wikidata item.
Aliases are alternate labels for the same item in the same lanugage. For example, "Python (Q28865)" The programming language has the alias "Python language", "Python programming language", "/usr/bin/python", etc.
Descriptions are useful statements which can help distinguish items with similar labels. Wikidata items are unique only by their item ID (Qxx) hence the description helps differentiate behind "Python (Q271218)" the genus of reptiles, and "Python (Q28865)" the programming language, and "Python (Q15728)" the family of missiles!
As wikidata does not have a specific code/language (the code we use is "wikidata") it has data for all languages. Hence, the same item can have a different label in Engligh, Arabic, or French. So, these fields in the data are dictionaries with the key as the language code and the value as the label in that language.
In [ ]:
itemdata['labels']
For convenience, after the itempage.get() is called, the data is stored in the page variable also:
In [ ]:
itemdata['labels'] == itempage.labels
## Exercise - Check whether a given Item has a label in your native language¶
Find the language code for your native language and write a function to check if a given item has a label in that language, and what other aliases it has in that language.
### Claims¶
Claims Are other wikidtaa pages that are linked to the given item using properties. The 'claims' are stored as another dictionary with the keys as property IDs (P1003, P1005, P1006, etc.) and the value is a list of objects pywikibot.page.Claim.
Hence, the claim is the value of all the properties that have been set for the given item.
In [ ]:
itemdata['claims']
In [ ]:
# Similarly, this is available in the page object using:
itemdata['claims'] == itempage.claims
Let's take a look at the P800 (notable work) for the item Q42 (Douglas Adams):
In [ ]:
itempage.claims['P800']
There are multiple claims for the property "P800" and we can ask pywikibot to resolve the claim and fetch the data about the claim:
In [ ]:
itempage.claims['P800'][0].getTarget()
So, we notice that the claim for the first "notable work (P800)" of "Douglas Adams (Q42)" is the item Q25169. As this is another ItemPage we can fetch the english label for this item by doing:
In [ ]:
p800_claim_target = itempage.claims['P800'][0].getTarget()
p800_claim_target.get()
p800_claim_target.labels['en']
So, finally we were able to find one of the most notable work of Douglas Adams using the wikidata API exposed by pywikibot. Imagine doing the same in the english wikipedia !
Thought exercise: How would you figure out the most notable work of the author using the chunk of text given by an English wikipedia Page ?
## Exercise - Check whether item is in India¶
Given a item ID, check whether the Item is in India by checking the value of the "country" property of the item. Write a function that checks this.
Hence, when the function is run on Q987 (New Delhi), it should give True but on Q62 (San Francisco) it should give False.
# 6. Property Pages¶
Sometimes, it is important to be able to fetch data about the property we find itself. For example, if we want to list the english label of the property and the value in a tabular form.
To do this, we use a PropertyPage object to deal with properties:
In [ ]:
propertypage = pywikibot.PropertyPage(wikidata, 'P512')
propertypage
In the PropertyPage, we again can access the data similar to how it was accessed in the ItemPage:
In [ ]:
propertypage.get()
propertypage.labels['en']
# 4. Wikidata data types¶
On Wikidata, we've already seen ItemPages and PropertyPages. But sometimes, a Claim's value need not be another Itempage, and can be some other data type like text, number, datetime, etc. Pywikibot provides a class for each of these data-types for easier accesibility to the value and resolve the claim.
The Data types available in wikidata can be seen at: https://www.wikidata.org/wiki/Special:ListDatatypes
The wikidata datatypes provided and the corresponding name of the wikidata data-type are:
• Item - pywikibot.page.ItemPage - Link to other items at the project.
• Property - pywikibot.page.PropertyPage - Link to properties at the project.
• Global Coordinate - pywikibot.Coordinate - Literal data for a geographical position given as a latitude-longitude pair in gms or decimal degrees.
• Time - pywikibot.WbTime - Literal data field for a point in time.
• Quantity - pywikibot.WbQuantity - Literal data field for a quantity that relates to some kind of well-defined unit.
• Monolingual Text - pywikibot.MonoLingualText - Literal data field for a string that is not translated into other languages.
#### Data types mapping to str¶
Some types of wikidata have are made specially to show them using a different method for example, showing it as a link, etc. But they all map to the python str. They are:
• String - str - Literal data field for a string of glyphs. Generally do not depend on language of reader.
• URL - str - Literal data field for a URL.
• External Identifier - str - Literal data field for an external identifier. External identifiers may automatically be linked to an authoritative resource for display.
• Mathematical formula - str - Literal data field for mathematical expressions, formula, equations and such, expressed in a variant of LaTeX.
In [ ]:
# Item
item = pywikibot.ItemPage(wikidata, "Q42").get()['claims']['P31'][0].getTarget()
print("Type:", type(item))
print("Instance of Douglas Adams:", item, '(', item.get()['labels']['en'], ')')
In [ ]:
# Property
_property = pywikibot.PropertyPage(wikidata, "Property:P31")
_property.get()
print("Type:", type(_property))
print("Property 'instance of':", _property, '(', _property.labels['en'], ')')
In [ ]:
# Global Coordinate
coord = pywikibot.ItemPage(wikidata, "Q668").get()['claims']['P625'][0].getTarget()
print("Type:", type(coord))
print("Coordinate location of India:", coord)
In [ ]:
# Time
_time = pywikibot.ItemPage(wikidata, "Q28865").get()['claims']['P571'][0].getTarget()
print("Type:", type(_time))
print("Inception of Python (programming language):", _time)
In [ ]:
# Quantity
qty = pywikibot.ItemPage(wikidata, "Q668").get()['claims']['P1082'][0].getTarget()
print("Type:", type(qty))
print("Population in India:", qty)
In [ ]:
# Monolingual text
monolingual_text = pywikibot.ItemPage(wikidata, "Q42").get()['claims']['P1477'][0].getTarget()
print("Type:", type(monolingual_text))
print("Birth name of Douglas Adams:", monolingual_text)
In [ ]:
# String
_string = pywikibot.ItemPage(wikidata, "Q28865").get()['claims']['P348'][0].getTarget()
print("Type:", type(_string))
print("Version of Python:", _string)
In [ ]:
# Mathematical Formula
formula = pywikibot.ItemPage(wikidata, "Q11518").get()['claims']['P2534'][0].getTarget()
print("Type:", type(formula))
print("Formula of Pythagorean theorem:", formula)
## Exercise - Find URL and External identifier¶
Using the API, find the type and value of the Official website (P856) and the Freebase identifier (P646) of Python (Q28865).
# 5. Adding more meaning to Properties¶
Frequently, a property value may require additional data. For example, consider the property educated at (P69) in the earlier data fetched from Douglas Adams (Q42). It can be seen on WikiData that "St John's College" is mentioned as one of his education schools from "start time" of 1971 to "end time" of 1974. And it also says his "academic major" is English literature and "academic degree" was Bachelor of Arts.
### Qualifiers¶
All this information would not be found if Wikidata was restricted to a (property, value) storage structure. Hence, Wikidata also allows Qualifiers. Qualifiers expand on the dta provided by a (property, value) pair by giving it context. Qualifiers also consist of a (property, value) !
In the above example, the properties which are being used as qualifiers are:
In [ ]:
itempage.claims['P69']
In [ ]:
itempage.claims['P69'][0].getTarget().get()
itempage.claims['P69'][0].getTarget().labels['en']
In [ ]:
itempage.claims['P69'][0].qualifiers
The qualifiers are again claims, as they are similar to the (property, value) pair for item pages. Let us see what the value of the qualifier is by resolving the claim:
In [ ]:
# Fetch the label of the P512 (academic degree) property
claim = itempage.claims['P69'][0]
claim.qualifiers['P512'][0].getTarget().get()
claim.qualifiers['P512'][0].getTarget().labels['en']
Some qualifiers may have a value which is not another item, for example the "start date" qualifier. In such cases, we need to check the type of the item:
In [ ]:
claim = itempage.claims['P69'][0]
claim.qualifiers['P580'][0].getTarget()
Here, WBTime is a pywikibot class which handles the format of WikiBase Time. Wiki base is the underlying technology that powers the structured editing and so on of Wikidata.
Other functions to modify qualifiers are:
• claim.removeQualifier()
• claim.addQualifier()
• claim.has_qualifier()
## Exercise - Find time studied at school¶
In the case of Douglas Adams as we saw, there are 2 schools mentioned. Using the start time and end time, find the number of years that he studdied in any school and print it out in the format:
<school name>: <start year> to <end year> => <n> years
### Reference (Sources)¶
Other than the qualifier, we would also want the source of the data. Hence, the reference or source field helps in adding, removing, editing these:
In [ ]:
itempage.claims['P69'][0].getSources()
Again, the source is a (property, value) where the property describes what type of source it is. Some popular properties are:
A source is again a list of (property, value) tuples. It can have additional properties like: "original language of work", "publisher", "author", "title", "retrieved", etc. if necessary.
Let us take a look at a source here:
In [ ]:
source = itempage.claims['P69'][0].getSources()[0]
source
In [ ]:
# Get the value of the first tuple in the source:
source['P248']
In [ ]:
source['P248'][0].getTarget().get()['labels']['en']
## Exercise - Check number of values that have a source as English Wikipedia¶
A large number of data in wikidata was pulled from the "English Wikipedia". Go through all (property, value) pairs and check how many of them were taken from the English Wikipedia (Q328).
# 7. Search wikidata using python¶
The Search on wikidata can be triggered using the python api too. To do this, we use the pywikibot.data.api.Request class which provides with helper functions to query the wikidata website and fetch results:
In [ ]:
india_search = pywikibot.data.api.Request(
site=wikidata,
parameters={"action": "wbsearchentities",
"format": "json",
"type": "item",
"language": "en",
"search": "India"})
india_search.submit()
As you can see, this function simply returns the dictionary search results directly. This needs to be parsed to be more useful. By modifying the parameters we can also search for the item using other languages and types.
## Exercise: Create an ItemPage for every search result¶
The search result given by the API is a python dictionary. But it has a lot of data which may not be very useful to us.
1. Loop over every search item and create a ItemPage object and store these in a list.
2. Finally, loop over the ItemPage list and print the english label for each item in the search using the ItemPage class.
# 8. Further Reading¶
To read more ways of using pywikibot to access wikidata, go to https://www.wikidata.org/wiki/Wikidata:Pywikibot_-_Python_3_Tutorial
SparQL queries are SQL like queries that can be run on Wikidata to fetch data from it. To try out SparkQL queries and visualize the data using nice plots, you can use https://query.wikidata.org. It has a lot of example SparQL queries which can be useful to learn SparQL.
|
2020-04-09 10:56:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24911311268806458, "perplexity": 2660.477157402765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371833063.93/warc/CC-MAIN-20200409091317-20200409121817-00344.warc.gz"}
|
https://www.jobilize.com/algebra/section/extension-rates-of-change-and-behavior-of-graphs-by-openstax?qcr=www.quizover.com
|
# 1.3 Rates of change and behavior of graphs (Page 6/15)
Page 6 / 15
Estimate the intervals where the function is increasing or decreasing.
Estimate the point(s) at which the graph of $\text{\hspace{0.17em}}f\text{\hspace{0.17em}}$ has a local maximum or a local minimum.
local maximum: local minimum:
For the following exercises, consider the graph in [link] .
If the complete graph of the function is shown, estimate the intervals where the function is increasing or decreasing.
If the complete graph of the function is shown, estimate the absolute maximum and absolute minimum.
absolute maximum at approximately absolute minimum at approximately
## Numeric
[link] gives the annual sales (in millions of dollars) of a product from 1998 to 2006. What was the average rate of change of annual sales (a) between 2001 and 2002, and (b) between 2001 and 2004?
Year Sales (millions of dollars) 1998 201 1999 219 2000 233 2001 243 2002 249 2003 251 2004 249 2005 243 2006 233
[link] gives the population of a town (in thousands) from 2000 to 2008. What was the average rate of change of population (a) between 2002 and 2004, and (b) between 2002 and 2006?
Year Population (thousands) 2000 87 2001 84 2002 83 2003 80 2004 77 2005 76 2006 78 2007 81 2008 85
a. –3000; b. –1250
For the following exercises, find the average rate of change of each function on the interval specified.
$f\left(x\right)={x}^{2}\text{\hspace{0.17em}}$ on
$h\left(x\right)=5-2{x}^{2}\text{\hspace{0.17em}}$ on $\text{\hspace{0.17em}}\left[-2,\text{4}\right]$
-4
$q\left(x\right)={x}^{3}\text{\hspace{0.17em}}$ on $\text{\hspace{0.17em}}\left[-4,\text{2}\right]$
$g\left(x\right)=3{x}^{3}-1\text{\hspace{0.17em}}$ on $\text{\hspace{0.17em}}\left[-3,\text{3}\right]$
27
$y=\frac{1}{x}\text{\hspace{0.17em}}$ on
$p\left(t\right)=\frac{\left({t}^{2}-4\right)\left(t+1\right)}{{t}^{2}+3}\text{\hspace{0.17em}}$ on $\text{\hspace{0.17em}}\left[-3,\text{1}\right]$
–0.167
$k\left(t\right)=6{t}^{2}+\frac{4}{{t}^{3}}\text{\hspace{0.17em}}$ on $\text{\hspace{0.17em}}\left[-1,3\right]$
## Technology
For the following exercises, use a graphing utility to estimate the local extrema of each function and to estimate the intervals on which the function is increasing and decreasing.
$f\left(x\right)={x}^{4}-4{x}^{3}+5$
Local minimum at $\text{\hspace{0.17em}}\left(3,-22\right),\text{\hspace{0.17em}}$ decreasing on increasing on
$h\left(x\right)={x}^{5}+5{x}^{4}+10{x}^{3}+10{x}^{2}-1$
$g\left(t\right)=t\sqrt{t+3}$
Local minimum at $\text{\hspace{0.17em}}\left(-2,-2\right),\text{\hspace{0.17em}}$ decreasing on $\text{\hspace{0.17em}}\left(-3,-2\right),\text{\hspace{0.17em}}$ increasing on
$k\left(t\right)=3{t}^{\frac{2}{3}}-t$
$m\left(x\right)={x}^{4}+2{x}^{3}-12{x}^{2}-10x+4$
Local maximum at local minima at $\text{\hspace{0.17em}}\left(-3.25,-47\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\left(2.1,-32\right),\text{\hspace{0.17em}}$ decreasing on $\text{\hspace{0.17em}}\left(-\infty ,-3.25\right)\text{\hspace{0.17em}}$ and increasing on and
$n\left(x\right)={x}^{4}-8{x}^{3}+18{x}^{2}-6x+2$
## Extension
The graph of the function $\text{\hspace{0.17em}}f\text{\hspace{0.17em}}$ is shown in [link] .
Based on the calculator screen shot, the point is which of the following?
1. a relative (local) maximum of the function
2. the vertex of the function
3. the absolute maximum of the function
4. a zero of the function
A
Let $f\left(x\right)=\frac{1}{x}.$ Find a number $\text{\hspace{0.17em}}c\text{\hspace{0.17em}}$ such that the average rate of change of the function $\text{\hspace{0.17em}}f\text{\hspace{0.17em}}$ on the interval $\text{\hspace{0.17em}}\left(1,c\right)\text{\hspace{0.17em}}$ is $\text{\hspace{0.17em}}-\frac{1}{4}.$
Let $\text{\hspace{0.17em}}f\left(x\right)=\frac{1}{x}$ . Find the number $\text{\hspace{0.17em}}b\text{\hspace{0.17em}}$ such that the average rate of change of $\text{\hspace{0.17em}}f\text{\hspace{0.17em}}$ on the interval $\text{\hspace{0.17em}}\left(2,b\right)\text{\hspace{0.17em}}$ is $\text{\hspace{0.17em}}-\frac{1}{10}.$
$b=5$
## Real-world applications
At the start of a trip, the odometer on a car read 21,395. At the end of the trip, 13.5 hours later, the odometer read 22,125. Assume the scale on the odometer is in miles. What is the average speed the car traveled during this trip?
A driver of a car stopped at a gas station to fill up his gas tank. He looked at his watch, and the time read exactly 3:40 p.m. At this time, he started pumping gas into the tank. At exactly 3:44, the tank was full and he noticed that he had pumped 10.7 gallons. What is the average rate of flow of the gasoline into the gas tank?
2.7 gallons per minute
Near the surface of the moon, the distance that an object falls is a function of time. It is given by $\text{\hspace{0.17em}}d\left(t\right)=2.6667{t}^{2},\text{\hspace{0.17em}}$ where $\text{\hspace{0.17em}}t\text{\hspace{0.17em}}$ is in seconds and $\text{\hspace{0.17em}}d\left(t\right)\text{\hspace{0.17em}}$ is in feet. If an object is dropped from a certain height, find the average velocity of the object from $\text{\hspace{0.17em}}t=1\text{\hspace{0.17em}}$ to $\text{\hspace{0.17em}}t=2.$
The graph in [link] illustrates the decay of a radioactive substance over $\text{\hspace{0.17em}}t\text{\hspace{0.17em}}$ days.
Use the graph to estimate the average decay rate from $\text{\hspace{0.17em}}t=5\text{\hspace{0.17em}}$ to $\text{\hspace{0.17em}}t=15.$
approximately –0.6 milligrams per day
what is set?
a colony of bacteria is growing exponentially doubling in size every 100 minutes. how much minutes will it take for the colony of bacteria to triple in size
I got 300 minutes. is it right?
Patience
no. should be about 150 minutes.
Jason
It should be 158.5 minutes.
Mr
ok, thanks
Patience
100•3=300 300=50•2^x 6=2^x x=log_2(6) =2.5849625 so, 300=50•2^2.5849625 and, so, the # of bacteria will double every (100•2.5849625) = 258.49625 minutes
Thomas
what is the importance knowing the graph of circular functions?
can get some help basic precalculus
What do you need help with?
Andrew
how to convert general to standard form with not perfect trinomial
can get some help inverse function
ismail
Rectangle coordinate
how to find for x
it depends on the equation
Robert
yeah, it does. why do we attempt to gain all of them one side or the other?
Melissa
whats a domain
The domain of a function is the set of all input on which the function is defined. For example all real numbers are the Domain of any Polynomial function.
Spiro
Spiro; thanks for putting it out there like that, 😁
Melissa
foci (–7,–17) and (–7,17), the absolute value of the differenceof the distances of any point from the foci is 24.
difference between calculus and pre calculus?
give me an example of a problem so that I can practice answering
x³+y³+z³=42
Robert
dont forget the cube in each variable ;)
Robert
of she solves that, well ... then she has a lot of computational force under her command ....
Walter
what is a function?
I want to learn about the law of exponent
explain this
what is functions?
A mathematical relation such that every input has only one out.
Spiro
yes..it is a relationo of orders pairs of sets one or more input that leads to a exactly one output.
Mubita
Is a rule that assigns to each element X in a set A exactly one element, called F(x), in a set B.
RichieRich
|
2021-05-13 13:32:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 45, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6962882876396179, "perplexity": 537.0292335778888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989916.34/warc/CC-MAIN-20210513111525-20210513141525-00461.warc.gz"}
|
http://www.yesterdayscoffee.de/
|
# How to undo a commit in git
Really throw away everything (all changes!) from the last commit:
git reset --hard HEAD~1
Delete the commit, but preserve all changes:
git reset --soft HEAD~1
For more details and explanations, read this excellent answer by Ryan Lundy on stackoverflow.
# Other settings for “non-scientific” texts in LaTeX
Figures that have no numbers, but only text in the captions:
\usepackage{caption}
\captionsetup[figure]{labelformat=empty,textfont=footnotesize}
\renewcommand{\thefigure}{}
No indentation at the beginning of a paragraph, but a bigger separation space between paragraphs:
\setlength\parindent{0pt}
Header lines that contain only page number and chapter, not the section:
\usepackage{scrlayer-scrpage}
\clearpairofpagestyles
\automark[chapter]{chapter}
# LaTeX package “wrapfig”
LaTeX is all nice and fancy if you write technical texts, where the pictures are floating in the text (mostly at the top and/or bottom of pages) and you reference them with numbers. But as I do all sorts of things with LaTeX, sometimes I want more “fun” texts which have pictures somewhere in the pages and text flowing around them.
For this purpose, I have now discovered the package wrapfig:
\usepackage{wrapfig}
You can include a picture like this (this one floats left of the text with a width of 7em):
\begin{wrapfigure}{l}{7em}
\centering
\includegraphics[width=\linewidth]{AuthorOfArticle.png}
\end{wrapfigure}
You can control some of the appearance with different settings in the preamble (see the documentation at CTAN), e.g.,
\intextsep0.5ex
# Docker cleanup
Delete all containers
docker rm $(docker container ls -a -q) Delete all unnamed images docker rmi$(docker image ls | grep "<none>" | tr -s ' ' | cut -d ' ' -f 3)
docker image ls lists all images,
grep "<none>" selects all that have the tag (or repository! or part of name!) “<none>” ,
tr -s ' ' merges sequences of spaces to a single space,
cut -d ' ' -f 3 splits each line at the space and gives the third column (the image id).
The list of ids is then passed on to docker rmi to be deleted.
# How to get WiFi running on Suse Leap 42.3 (Broadcom driver)
After the update from Suse Leap 42.2 to Suse Leap 42.3, my Wifi stopped working. Which is kind of bad, because I need internet to figure out what is wrong…
This was the situation right after the update, when it was not working:
> lspci -nnk | grep -A 3 "Network"
04:00.0 Network controller [0280]: Broadcom Corporation BCM43142 802.11b/g/n [14e4:4365] (rev 01)
Subsystem: Hewlett-Packard Company Device [103c:804a]
Kernel driver in use: bcma-pci-bridge
Kernel modules: bcma
> hwinfo --short
network:
eth0 Realtek RTL8101/2/6E PCI Express Fast/Gigabit Ethernet controller
network interface:
eth0 Ethernet network interface
lo Loopback network interface
> iwconfig
lo no wireless extensions.
eth0 no wireless extensions.
> lsmod | grep "wl"
No WiFi to be seen!
So now this is what I did:
1. Remove the old driver:
> rpm -e broadcom-wl broadcom-wl-kmp-default
2. Find out my exact kernel version (the last part is the part we need, i.e., “default”):
> uname -r
4.4.104-39-default
3. Add the Packman repository to my repositories:
> zypper addrepo http://packman.inode.at/suse/openSUSE_Leap_42.3/ packman
4. Install the drivers, paying attention to my kernel type (…-“default”):
> zypper install broadcom-wl-kmp-default broadcom-wl
You can also download the rpm by hand and install it. In that case, you need to pay attention to the full kernel number. Meaning, for my kernel 4.4.104-39, I should install the driver from broadcom-wl-kmp-default-6.30.223.271_k4.4.49_19-3.6.x86_64.rpm where the numbers after the k match exactly. Using Packman does that for you.
Another issue I had with manual installation was missing keys. At least my configuration forces a valid PGP key and aborts if no key is in the key list. And I didn’t have a key for the downloaded rpms. It is possible to tell rpm to install the packages without checking the key (option --nosignature), but that did not properly install the package (without error messages, of course). When installing with zypper it looks for the key itself and you don’t have to worry.
5. I rebuilt the loaded modules list and then restarted, but I am not sure it is necessary:
> mkinitrd
Finally, the outputs of the above commands are (for reference, the next time it breaks):
> lspci -nnk | grep -A 3 "Network"
04:00.0 Network controller [0280]: Broadcom Corporation BCM43142 802.11b/g/n [14e4:4365] (rev 01)
Subsystem: Hewlett-Packard Company Device [103c:804a]
Kernel driver in use: wl
Kernel modules: bcma, wl
> hwinfo --short
network:
eth0 Realtek RTL8101/2/6E PCI Express Fast/Gigabit Ethernet controller
network interface:
wlan0 WLAN network interface
eth0 Ethernet network interface
lo Loopback network interface
> iwconfig
lo no wireless extensions.
wlan0 IEEE 802.11abg ESSID:"..."
Mode:Managed Frequency:2.412 GHz Access Point: ...
Bit Rate=65 Mb/s Tx-Power=200 dBm
Retry short limit:7 RTS thr:off Fragment thr:off
Encryption key:off
Power Management:off
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:0 Missed beacon:0
eth0 no wireless extensions.
> lsmod | grep "wl"
wl 6451200 0
cfg80211 610304 1 wl
And it only took all afternoon … sometimes I hate Linux 🙁
# Inkscape Basics (because I forgot what I knew a week ago)
Resize Inkscape svg canvas to it’s content
Go to File -> Document Properties -> Page
Resize page to drawing or selection
Add guides helps with alignment etc.
Just click ruler and drag down from top, left or even top/left for diagnonal guide
Setup/Change snapping when moving objects around
Go to File -> Document Properties -> Snap
Change color of object
Go to Object -> Fill and stroke (Shift+Ctrl+F)
Posted in Mac
# Get version of a Java package (in Java)
It’s easy!
String version = myObject.getClass()
.getPackage()
.getImplementationVersion();
Where it comes from…? I think from the MANIFESTO in the jar file.
# Overlays with Code Listings
You cannot include a lstlisting (package listings) in a only or visible command in LaTeX beamer. BUT you can define the listing beforehand and then include that inside the only or visible!
Example (from slides about recursion in Java):
\defverbatim{\Lst}{
\begin{lstlisting}
public int fakultaet(int n) {
if ( n == 1 ) {
return 1;
} else {
return n * fakultaet( n-1 ) ;
}
}
\end{lstlisting}
}
\begin{frame}[fragile]
\frametitle{Aufgabe: Fakultät von $n$}
Definition:
\begin{itemize}
\item \lstinline{fakultaet( 1 ) = 1}
\item \lstinline{fakultaet( n ) = n * fakultaet( n-1 ) }
\end{itemize}
\bigskip
Java Code:
\visible<2|handout:0>{\Lst}
\end{frame}
# Setting computer time from the internet [hacky way]
Most of my pool computers show the wrong time and most of them are different. Just for fun, here are the times shown by those running at the moment of the poll:
8:36 (2x), 8:40, 9:36 (2x), 10:35, 10:36 (3x), 10:39 (6x), 11:36 (2x), 11:40
I assume it is the result of setting the time wrong in the installation and then a few semesters of trying to fix some of them (those running at the moment, the first three rows, until the admin was bored, a single one now and then, …), adjusting to daylight savings time or forgetting it and so on.
So this is what I tried to get them back on track (courtesy of AskUbuntu.com):
sudo date -s "\$(wget -qSO- --max-redirect=0 google.com 2>&1 | grep Date: | cut -d' ' -f5-8)Z"
The line first gets a random web page (here google.com) and prints the header of the HTTP response, e.g.,:
HTTP/1.1 302 Found
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Referrer-Policy: no-referrer
Content-Length: 266
Date: Thu, 14 Dec 2017 10:46:49 GMT
The line then retrieves the part of the response with the date using grep. It splits the line with the date at spaces with cut -d ' ' and uses the parts 5 to 8. In this line, part 4 is the day of the week, part 3 is the text Date: and parts 1-2 are empty because of the leading spaces. So using parts 5 to 8 results in a date and time in a format that the tool date can understand. Before passing the time on to date, the letter Z is appended. This Z stands for UTC, meaning the time zone set on the computer will be taken into account.
So the line after evaluating wget, grep and cut for the example page we got will be:
sudo date -s "14 Dec 2017 10:46:49Z"
The option -s sets the date to the specified value. So if the request ran in a reasonable time, we should have a reasonably accurate time set for the computer.
PS: Yes, I know that there is such a thing as NTP and I know that time synchronization is not a problem that you need to hack on your own. But this version is much more freaky and cool!! [Also NTP and the university firewall don’t seem to be friends]
# Discontinuous x axis with pgfplots
Having a discontinuous y axis is common and Stackoverflow has a few solutions for that. I wanted an x axis with a gap (values 0-10 plus value 20). So this is what I did.
I create an axis from 0 to 12 and give 12 the label “20”. I add an extra tick on the x-axis at about halfway between 10 and “12”, where I want the gap and make it thick and white – basically I want a break in the axis. Then over that break I draw the “label” of this tick, which is two vertical lines at an angle, symbolizing the discontinuity. The relevant part of the style:
xmin=0,
xmax=12.5,
xticklabels={0, 2, 4, 6, 8, 10, 20},
extra x ticks={11.1},
extra x tick style={grid=none, tick style={white, very thick}, tick label style={xshift=0cm,yshift=.50cm, rotate=-20}},
extra x tick label={\color{black}{/\!\!/}},
And then I add the data with x-values 20 at x-coordinate “12”:
\addplot coordinates {
(0, 43.3) (1, 43.2) (2, 43.3) (3, 42.9) (4, 42.1) (5, 41.4)
(6, 41.2) (7, 41.7) (8, 41.7) (9, 42.1) (10, 42.1) };
\pgfplotsset{cycle list shift=-1}
\addplot coordinates { (12, 43.8) };
\draw[dotted] (axis cs:10, 42.1) -- (axis cs:12, 43.8);
Adding the last point separately from the rest of the data serves the purpose that I can draw the dotted line by hand. cycle list shift=-1 causes the new “plot” to have the same style as the previous. There might be a way of doing this, but this works.
Hat tip: Stackoverflow, but I currently cannot find the question(s) and answer(s) that helped me solve this. Still, thank you, anonymous people.
|
2018-03-20 13:39:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40917885303497314, "perplexity": 6776.364710431326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647475.66/warc/CC-MAIN-20180320130952-20180320150952-00037.warc.gz"}
|
https://www.itwissen.info/en/vector-modultaion-VM-122932.html
|
vector modultaion (VM)
Analog signals are uniquely described by their frequency, amplitude and phase. The combination of phase and amplitude represents a vector whose magnitude is changed by changing the phase angle and/or amplitude.
Vectors thus contain distinguishable information and can be modulated in the same way as frequency, amplitude or phase. This is then referred to as vector modulation.
In vector modulation, the vector is modulated by means of amplitude or frequency modulation and reconstructed in phase angle and magnitude at the receiving end after transmission at the transmitting end. For this purpose, however, a reference phase angle must also be transmitted, which forms the phase reference for the vector angle at the receiving end.
Such a method is used for the transmission of color difference signals in analog television. The color subcarrier forms the reference. The phase angles of all demodulated signals refer exclusively to the phase reference of the color subcarrier.
Informations:
Englisch: vector modultaion - VM Updated at: 06.06.2010 #Words: 151 Links: frequency (f), amplitude, phase, vector, information Translations: DE Sharing:
|
2022-10-04 22:24:15
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8552578687667847, "perplexity": 1487.351259300066}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00259.warc.gz"}
|
https://math.stackexchange.com/questions/392257/is-x-pseudocompact
|
# Is $X$ pseudocompact
The following example with a little modified from the handbook of set theoretic topology, Page 574:
Let $\kappa$ be any cardinal for which there exists a family $\{H_\alpha: \alpha < \kappa\}$ of infinite subset of $\omega$ such that
1) $\beta < \alpha$ implies $|H_\beta \setminus H_\alpha|<\omega$ and
2) for every $H \in [\omega]^\omega$, there exists $\alpha <\kappa$ such that $|H\setminus H_\alpha|=\omega$.
The underlying set for the desired space $X$ is the set $\kappa \cup \omega$, where we consider $\kappa$ and $\omega$ to be disjoint. The topology is definied as follows: For each $\alpha < \kappa$ we define a basic neighbourhood of $\alpha$ for each $\beta < \alpha$ and each $F\in [\omega]^{<\omega}$ by $$N(\alpha: \beta, F)=\{ \text{ any set } A \subset \kappa \text{ with } \alpha \in A\} \cup ((H_\beta \setminus H_\alpha)\setminus F).$$ Points in $\omega$ are declared to be isolated. With this topology on $X=\kappa \cup \omega$ it is clear that $\omega$ is a countable dense set of isolated points, and $\kappa$ is a closed discrete subset of $X$ whose subspace topology is the usual discrete topology on $\kappa$.
Question: Is $X$ pseudocompact?
• What role does $F$ play in the definition of $N ( \alpha : \beta , F )$? – user642796 May 15 '13 at 9:07
• Thanks. I'm also not exactly sure what you mean by "$\{ \text{ any set } A \subset \kappa \text{ with } \alpha \in A\}$". Could you perhaps mean to say that the basic open neighbourhoods of $\alpha$ are of the form $A \cup ( H_\beta \setminus ( H_\alpha \cup F ) )$ where $A \subseteq \kappa$ contains $\alpha$, $\beta < \alpha$ and $F \subseteq \omega$ is finite? (Note that in this case $\{ \alpha , \beta \} \cup ( H_\beta \setminus H_\alpha )$ would be an open set containing $\beta$, but includes no basic open neighbourhood of $\beta$.) – user642796 May 15 '13 at 10:03
• @Arthur Fischer: Yes. Just as explains, I modified the example in which $\kappa$ is hoped to be a closed discrete subset of $X$ whose subsapce topology is the discrete topology on $\kappa$. I don't know whether there is a contradiction while constucting the example. – Paul May 15 '13 at 10:43
• @Paul, as written conditions (1) and (2) are contradictory. (Assuming (1), take $H = H_0$ in (2).) Did you mean to have the roles of $\alpha$ and $\beta$ switched in (1)? – Paul McKenney May 15 '13 at 20:41
|
2019-08-25 04:45:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9272826313972473, "perplexity": 115.22823975691925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323067.50/warc/CC-MAIN-20190825042326-20190825064326-00059.warc.gz"}
|
https://datascience.stackexchange.com/questions/15302/multilabel-classification-increasing-the-confidence
|
# Multilabel Classification - increasing the confidence
I have a basic multilabel topic classifier(Tfidf vectorizer with OneVsRest Classifier) built on some customer reviews. I observed that there are some classes with right features but it still predicts with very low confidence.
1. What is the reason for such a classification in general?
2. How do I boost the confidence?
• I got the best results for my dataset by using skmultilearn's MLKNN algorithm compared to OneVsRest with SGDClassifier. Dec 24, 2019 at 23:09
It's hard to say without knowing the data.
Often times the classes are imbalanced. This could cause your problems. So let's say that the "features are right" for 5% of your data and they belong to class $A$. You would expect the 5% to be classified correctly as $A$ almost all the time. But what about the remaining 95% in class $B$? The classifier learns that it would be often correct if the sample is classified as $B$. This leads to a low classification rate for class $A$.
You can tackle this by setting different miss-classification costs for the classes. Often you multiply the error rate by these costs to scale them accordingly. Some classifiers let you set this during training.
The standard case would be as follows:
$$C = \left( \begin{matrix} 0 & 1 \\ 1 & 0 \\ \end{matrix} \right)$$
Meaning that correct classification (diagonal) is not punished and miss-classifying $A$ as $B$ and vice versa is punished in the same way. However, this assumes uniformly distributed classes. To account for the case explained above, you may set it the following way:
$$C = \left( \begin{matrix} 0 & 0.05 \\ 0.95 & 0 \\ \end{matrix} \right)$$
Here i set it as the inverse of the class distribution. However, you may try different settings.
|
2022-07-04 03:04:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.628961443901062, "perplexity": 905.3073157455875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104293758.72/warc/CC-MAIN-20220704015700-20220704045700-00234.warc.gz"}
|
https://www.techwhiff.com/issue/two-angles-are-supplementary-the-measure-of-the-smaller--571755
|
# Two angles are supplementary. The measure of the smaller angle is four more than one third the measure of the larger angle. The measure of the larger angle is
###### Question:
Two angles are supplementary. The measure of the smaller angle is four more than one third the measure of the larger angle. The measure of the larger angle is
### If government tax revenues change automatically and in a countercylical direction over the course of the business cycle, this would be called a(n):
A. Discretionary fiscal policy
B. Expansionary fiscal policy
C. Political business cycle
D. Nondiscretionary fiscal policy
If government tax revenues change automatically and in a countercylical direction over the course of the business cycle, this would be called a(n):
A. Discretionary fiscal policy
B. Expansionary fiscal policy
C. Political business cycle
D. Nondiscretionary fiscal policy...
### What benefits can be delivered by conssuming blueberries,nuts,spinage,oats and beans
what benefits can be delivered by conssuming blueberries,nuts,spinage,oats and beans...
### The story never tells us why hunger artist sudden become unpopular. Why do you think the hunger artist's act was unpopular? Why do you think audiences lost interest?
The story never tells us why hunger artist sudden become unpopular. Why do you think the hunger artist's act was unpopular? Why do you think audiences lost interest?...
### HELLPPP PLZZ ASAPPP! Which two phase transitions can a solid undergo? Select the two correct answers. - Vaporization - Freezing - Deposition - Melting - Sublimation - Condensation
HELLPPP PLZZ ASAPPP! Which two phase transitions can a solid undergo? Select the two correct answers. - Vaporization - Freezing - Deposition - Melting - Sublimation - Condensation...
### About how many bables are born per 1,000 people on the earth each year?
About how many bables are born per 1,000 people on the earth each year?...
### A set of water travels at 10 m/s, and 5.0 waves pass you in 2.0s. what is the wavelength of the waves?
a set of water travels at 10 m/s, and 5.0 waves pass you in 2.0s. what is the wavelength of the waves?...
### What technique did Winslow Homer use in Prisoners from the Front to focus the viewer's attention on the figures? 1.Homer outlined the figures with heavy brushstrokes. 2.Homer used muted colors for the background. 3.Homer placed light figures against a dark background. 4.Homer used orthogonal lines to draw the eye to the figures.
What technique did Winslow Homer use in Prisoners from the Front to focus the viewer's attention on the figures? 1.Homer outlined the figures with heavy brushstrokes. 2.Homer used muted colors for the background. 3.Homer placed light figures against a dark background. 4.Homer used orthogonal lines...
### Simplify: -5g + 2h - 3g + g -2h
Simplify: -5g + 2h - 3g + g -2h...
### -3 divided by -2/5 in fraction form
-3 divided by -2/5 in fraction form...
### Which of the following was Eisenhower's most significant legislative achievement? A. The Highway Act of 1956 B. The extension of Social Security benefits C. Increasing the minimum wage D. The establishment of modern Republicanism
Which of the following was Eisenhower's most significant legislative achievement? A. The Highway Act of 1956 B. The extension of Social Security benefits C. Increasing the minimum wage D. The establishment of modern Republicanism...
### The function f(x) is graphed below. Use the graph of the function to find, f(1). -2 -1 1 2
The function f(x) is graphed below. Use the graph of the function to find, f(1). -2 -1 1 2...
### What is the exact volume of the cylinder? A. 18π in3 B. 36π in3 C. 54π in3 D. 63π in3
What is the exact volume of the cylinder? A. 18π in3 B. 36π in3 C. 54π in3 D. 63π in3...
### True or false. The contiguous united states is about twice as large as south america
true or false. The contiguous united states is about twice as large as south america...
### EASY QUESTION GIVING BRAINLY IF RIGHT.suppose a real estate agent received 5% commision on the first 200 000$of a houses selling price and a 6% remaining amount.a. if the house sells for 345 000$ how much commision does the real estate agent make on the sale of a house?
EASY QUESTION GIVING BRAINLY IF RIGHT.suppose a real estate agent received 5% commision on the first 200 000$of a houses selling price and a 6% remaining amount.a. if the house sells for 345 000$ how much commision does the real estate agent make on the sale of a house?...
### An ecosystem is comprised of biotic and abiotic factors. The biotic factors are living things that influence an ecosystem such as humans, animals, plants, bacteria, and fungi. The abiotic factors are all the non-living elements that impact the ecosystem such as water availability and quality, sunlight, weather, soil conditions, and air quality. What abiotic aspects of an ecosystem can be affected by the biotic aspects? A) animals eating roots from plants adding to soil erosion B) diseased plants
An ecosystem is comprised of biotic and abiotic factors. The biotic factors are living things that influence an ecosystem such as humans, animals, plants, bacteria, and fungi. The abiotic factors are all the non-living elements that impact the ecosystem such as water availability and quality, sunlig...
### 8x+12=-3x+3+31 show algebraic work plz
8x+12=-3x+3+31 show algebraic work plz...
### 2. 4 2 4. Express the polynomial x - x + 2x in standard form and then classify it. II A cubic trinomial quadratic trinomial quintic trinomial mials uartic binomial
2. 4 2 4. Express the polynomial x - x + 2x in standard form and then classify it. II A cubic trinomial quadratic trinomial quintic trinomial mials uartic binomial...
|
2022-10-03 15:45:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18793503940105438, "perplexity": 5508.3629121768745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00257.warc.gz"}
|
https://dmoj.ca/problem/coci12c5p1
|
## COCI '12 Contest 5 #1 Ljestvica
View as PDF
Points: 5 (partial)
Time limit: 1.0s
Memory limit: 32M
Problem type
Veronica attends a music academy. She was given a music sheet of a composition with only notes (without annotations), and needs to recognise the scale used. In this problem, we will limit ourselves to only the two most frequently used (and usually taught in schools first) scales: -minor and -major. This doesn't make them simpler or more basic than other minor and major scales – all minor scales are mutually equivalent save for translation, and so are major scales.
Still, out of the tones of an octave used in modern music, -minor and -major scales do use the tones with shortest names: -minor is defined as an ordered septuple , and -major as .
Notice that the sets of tones of these two scales are equal. What's the difference? The catch is that not only the set of tones, but also their usage, determines a scale. Specifically, the tonic (the first tone of a scale), subdominant (the fourth tone) and dominant (the fifth tone) are the primary candidates for accented tones in a composition. In -minor, these are , , and , and in -major, they are , , and . We will name these tones main tones.
Aren't the scales still equivalent save for translation? They are not: for example, the third tone of -minor is three half-tones higher than the tonic , while the third tone of -major is four halftones higher than the tonic . The difference, therefore, lies in the intervals. This makes minor scales "sad" and major scales "happy".
Write a program to decide if a composition is more likely written in -minor or -major by counting whether there are more main tones of -minor or of -major among the accented tones (the first tones in each measure). If there is an equal number of main tones, determine the scale based on the last tone (which is guaranteed to be either for -minor or for -major in any such test case).
For example, examine the well-known melody "Frère Jacques":
The character | separates measures, so the accented tones are, in order: . Ten of them are main tones of -major, while six are main tones of -minor. Therefore, our best estimate is that the song was written in -major.
#### Input Specification
The first and only line of input contains a sequence of at least , and at most , characters from the set A, B, C, D, E, F, G, |. This is a simplified notation for a composition, where the character | separates measures. The characters | will never appear adjacent to one another, at the beginning, or at the end of the sequence.
#### Output Specification
The first and only line of output must contain the text C-dur (for -major) or A-mol (for -minor).
#### Sample Input 1
AEB|C
#### Sample Output 1
C-dur
#### Sample Input 2
CD|EC|CD|EC|EF|G|EF|G|GAGF|EC|GAGF|EC|CG|C|CG|C
#### Sample Output 2
C-dur
## Comments
• commented on Sept. 22, 2022, 5:21 p.m.
Why is the first sample output C-Major? There is 1 A and 1 E, which makes 2 notes for a-minor, while there is only 1 C, so 1 note for C-Major. The description says the final note only decides the key if the number of notes in a-minor and C-Major are the same.
• commented on Sept. 22, 2022, 6:02 p.m. edited
Write a program to decide if a composition is more likely written in A-minor or C-major by counting whether there are more main tones of A-minor or of C-major among the accented tones (the first tones in each measure).
So you count only the first note, and since there's only A and C, and C is the last note, it's C-major
• commented on Sept. 3, 2022, 1:32 p.m.
Where are the solutions?
• commented on Sept. 3, 2022, 3:37 p.m.
When you solve the problem.
• commented on Aug. 21, 2022, 5:03 p.m.
I keep failing batch 2 and 4. Could anyone give me any advice as to why? Thanks
• commented on Aug. 20, 2022, 3:50 a.m.
Hey guys! I've been doing this problem for three days and I still can't understand why I am failing some cases. I've tried different variations of inputs but my program still works. Any advice?
• commented on April 20, 2022, 3:09 p.m. edited
A thing to check if your code is failing only some of the tests. In case the total main notes of A-minor and C-major are equal, the determining factor is not the last main note, but the last note of the entire composition.
• commented on May 5, 2022, 5:39 p.m.
Thank you! The light bulb popped up when I read what you wrote.
• commented on March 23, 2022, 3:40 p.m. edited
I'm really confused with what my mistake is. You can check my code to understand what i've done. I added a lot of prints so u can check on what is being read on my program (a way i found out helps a lot at trying to correct mistakes). I used the first example as an input to check if i would be able to count the correct amount of A-minor main tones (A,D,E) C-major (C,F,G). For some random reason, the program is only reading the first letter of the input and i have no clue why...
Please help.
I don't know if u can see my submission but i can also post it here if anyone needs me to.
I haven't even tried to submit it for real on the submissions, i was only running the program on the IDE. The submission is for u to see what i wrote.
• commented on March 23, 2022, 6:44 p.m.
You can use the Sample Inputs to find out what went wrong.
• commented on March 23, 2022, 8:51 p.m. edit 3
I took ur advice and it helped a lot!
I have gone through my code and found out what went wrong. I have corrected my mistakes but yet, there's some tests which im failing.
I have updated my code, but don't know what im missing now. I'm assuming the max number of notes per measurement is 4, is that a problem?
Apparently, it stops counting after some digits, its really weird...
Btw thanks for all the help! i've noticed you're really active on the website and i sincerely thank you, in the name of all people here. One thanks isnt enough for the amout of help you give out!
• commented on March 24, 2022, 8:00 a.m.
I don't think your algorithm for finding accented tones are right. If you input
A|B|C|D|E|F|G
Your program doesn't output.
• commented on March 24, 2022, 12:41 p.m. edit 2
I think i have got my code right but i still fail batch 3 (first test) and batch 6 (in the last test, "Your output (clipped)".
Can anyone point out what's going on?
• commented on Jan. 26, 2022, 5:53 p.m.
Keep failing Batch #4 and #6 and I cannot figure out why. Could somebody please help review my code and help me understand where I am going wrong?
• commented on Jan. 26, 2022, 7:22 p.m.
If there is an equal number of main tones, determine the scale based on the last tone (which is guaranteed to be either A A for A A -minor or C C for C C -major in any such test case).
You only need to find the last character, there is no need to iterate backwards
• commented on Jan. 21, 2022, 9:37 a.m.
Is there a way to find out what tests my code is failing on? Thanks.
• commented on Jan. 22, 2022, 11:25 p.m. edited
no there isn't
|
2022-10-04 09:56:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.429840087890625, "perplexity": 1475.3762084137477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337490.6/warc/CC-MAIN-20221004085909-20221004115909-00397.warc.gz"}
|
https://math.stackexchange.com/questions/1564887/find-matrix-of-linear-operator-mathcala-mathcalp-5-rightarrow-mathcal
|
# Find matrix of linear operator $\mathcal{A} : \mathcal{P_5}\rightarrow \mathcal{P_5},\mathcal{A}(p)=-2p+(3x-1)p^{'}$
Find matrix of linear operator $\mathcal{A} : \mathcal{P_5}\rightarrow \mathcal{P_5},\mathcal{A}(p)=-2p+(3x-1)p^{'}$
$\mathcal{P_5}$ is the space of polynomials with degree not greater than $5$.
$\mathcal{A}(p)=(-2p-p^{'})+3p^{'}x+0x^2+0x^3+0x^4+0x^5$
$$\mathcal{A} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \\ \end{bmatrix}= \begin{bmatrix} -2p_1-p_1^{'} \\ 3p_2^{'} \\ 0 \\ 0 \\ 0 \\ 0 \\ \end{bmatrix}$$
How to get matrix of $\mathcal{A}$?
• You’re not going to get very far with only five components in the vector on the lhs. – amd Dec 7 '15 at 23:33
• $\mathcal{P}_5$ is has dimension 6 and i guess $p'$ denotes the derivative of $p$. – testman Dec 7 '15 at 23:35
## 1 Answer
Let $p(x) = ax^5+bx^4+cx^3+dx^2+ex+f$ be an arbitrary polynomial in $\mathcal{P}_5$. Then \begin{align} (\mathcal{A}p)(x)&= -2p(x)+(3x-1)p'(x) \\ &= -2p(x)+(3x-1)(5ax^4+4bx^3+3cx^2+2dx+e) \\ &=13ax^5+(10b-5a)x^4+(7c-4b)x^3\\ &\quad +(4d-3c)x^2+(e-2d)x-2f-e. \end{align}
Using the canonical transformation $T:\mathbb{R}^6 \to \mathcal{P}_5, (a,b,c,d,e,f)^T\mapsto p$ you can write $\mathcal{A}p = TBT^{-1}p$ where $$B = \begin{bmatrix} 13 & 0 & 0 & 0 & 0 & 0 \\ -5 & 10 & 0 & 0 & 0 & 0 \\ 0 & -4 & 7 & 0 & 0 & 0 \\ 0 & 0 & -3 & 4 & 0 & 0 \\ 0 & 0 & 0 & -2 & 1 & 0 \\ 0 & 0 & 0 & 0 & -1 & -2 \\ \end{bmatrix}.$$
|
2019-12-08 00:42:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999709129333496, "perplexity": 413.82767252329126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540503656.42/warc/CC-MAIN-20191207233943-20191208021943-00331.warc.gz"}
|
http://toc.cse.iitk.ac.in/articles/v018a013/index.html
|
Volume 18 (2022) Article 13 pp. 1-65
RANDOM 2018 Special Issue
Round Complexity Versus Randomness Complexity in Interactive Proofs
Revised: January 15, 2022
Published: June 7, 2022
Consider an interactive proof system for some set $S$ that has randomness complexity $r(n)$ for instances of length $n$, and arbitrary round complexity. We show a public coin interactive proof system for $S$ of round complexity $O(r(n)/\log r(n))$. Furthermore, the randomness complexity is preserved up to a constant factor, and the resulting interactive proof system has perfect completeness.
|
2022-08-17 22:41:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3275618553161621, "perplexity": 880.0232104096842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573118.26/warc/CC-MAIN-20220817213446-20220818003446-00471.warc.gz"}
|
https://math.stackexchange.com/questions/1561530/modes-of-convergence-real-analysis-folland-ch-2-problem-39
|
# Modes of Convergence, Real Analysis Folland, Ch 2 Problem 39
If $f_n\rightarrow f$ almost uniformly, then $f_n\rightarrow f$ a.e. and in measure.
Proof: Since $f_n\rightarrow f$ almost uniformly, then for every $\epsilon > 0$ there is a measurable set $F$ with $\mu(F) < \epsilon$ such that the sequence $\{f_n\}$ converges uniformly on $X\setminus E$.
I am not sure where to go from here, any suggestions is greatly appreciated.
• If $f_n \to f$ almost uniformly on each $E_n$, then $f_n \to f$ pointwise on $\bigcup_n E_n$ (why?). How does that help you? – PhoemueX Dec 5 '15 at 22:34
Since $f_n$ converges almost uniformly, for any $\delta>0$, there exists a measurable set $E_\delta$ with measure less than $\delta$ such that $f_n$ converges uniformly on $E_\delta^c$. Thus there are $E_k$ with $\mu(E_k)<1/k$ such that $f_n$ converges uniformly on $E_k^c$.
Let $F_n=\bigcap_{k=1}^nE_k$. Then $$\mu(F_n)\leqslant \mu(E_n)<1/n\quad\text{and so}\quad \lim_{n\to\infty}\mu(F_n)=0$$ Since $F_1\supset\cdots\supset F_n\supset\cdots$, let $F=\bigcap_{n=1}^{\infty}E_n$ and by Monotone class theorem $$\mu(F)=\mu\left(\bigcap_{n=1}^{\infty}E_n\right)=\lim_{n\to\infty}\mu(F_n)=0$$ For any $x\in F^c=\bigcup_{n=1}^{\infty}E_n^c$, there is a $N$ that $x\in E_N^c$. Since $f_n$ converges uniformly on $E_N^c$, $f_n$ converges pointwise on $E_N^c$. This proves that $f_n$ converges pointwise a.e. Also $f_n$ converges in measure since $f_n$ converges pointwise a.e.
• Wouldn't it follow from the fact that $F = \cap E_k \subset E_k$ that $\mu(F) \leq \mu(E_k)<1/k$ for all $k$? Wouldn't it imply that $\mu(F) = 0$? Why did you first construct the sets $F_n$ above? Thank you. – math.h Sep 29 '19 at 14:15
|
2020-01-22 14:26:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9989645481109619, "perplexity": 59.865351215903644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607118.51/warc/CC-MAIN-20200122131612-20200122160612-00054.warc.gz"}
|
https://electricalacademia.com/basic-electrical/parallel-rl-circuit/
|
Home / Basic Electrical / Parallel RL Circuit
# Parallel RL Circuit
Want create site? Find Free WordPress Themes and plugins.
This guide covers Parallel RL Circuit Analysis, Phasor Diagram, Impedance & Power Triangle, and several solved examples along with the review questions answers.
The combination of a resistor and inductor connected in parallel to an AC source, as illustrated in Figure 1, is called a parallel RL circuit. In a parallel DC circuit, the voltage across each of the parallel branches is equal. This is also true of the AC parallel circuit.
The voltages across each parallel branch are:
• The same value.
• Equal in value to the total applied voltage ET.
• All in phase with each other.
Figure 1 Parallel RL circuit.
Therefore, for a RL parallel circuit
n parallel DC circuits, the simple arithmetic sum of the individual branch currents equals the total current. The same is true in an AC parallel circuit if only pure resistors or only pure inductors are connected in parallel.
However, when a resistor and inductor are connected in parallel, the two currents will be out of phase with each other. In this case, the total current is equal to the vector sum rather than the arithmetic sum of the currents.
Recall that the voltage and current through a resistor are in phase, but through a pure inductor the current lags the voltage by exactly 90 degrees. This is still the case when the two are connected in parallel.
## Parallel RL Circuit Phasor Diagram
The relationship between the voltage and currents in a parallel RL circuit is illustrated in the vector (phasor) diagram of Figure 2 and summarized as follows:
• The reference vector is labeled E and represents the voltage in the circuit, which is common to all elements.
• Since the current through the resistor is in phase with the voltage across it, IR (2 A) is shown superimposed on the voltage vector.
• The inductor current IL (4 A) lags the voltage by 90 degrees and is positioned in a downward direction lagging the voltage vector by 90 degrees.
• The vector addition of IR and IL gives a resultant that represents the total (IT), or line current (4.5 A).
• The angle theta (θ) represents the phase between the applied line voltage and current.
Figure 2 Parallel RL circuit vector (phasor) diagram.
As is the case in all parallel circuits, the current in each branch of a parallel RL circuit acts independent of the currents in the other branches. The current flow in each branch is determined by the voltage across that branch and the opposition to current flow, in the form of either resistance or inductive reactance, contained in the branch.
Ohm’s law can then be used to find the individual branch currents as follows:
The resistive branch current has the same phase as the applied voltage, but the inductive branch current lags the applied voltage by 90 degrees. As a result, the total line current (IT) consists of IR and IL 90 degrees out of phase with each other.
The current flow through the resistor and the inductor form the legs of a right triangle, and the total current is the hypotenuse. Therefore, the Pythagorean theorem can be applied to add these currents together by using the equation:
In all parallel RL circuits, the phase angle theta (θ) by which the total current lags the voltage is somewhere between 0 and 90 degrees. The size of the angle is determined by whether there is more inductive current or resistive current.
If there is more inductive current, the phase angle will be closer to 90 degrees. It will be closer to 0 degrees if there is more resistive current. From the circuit vector diagram you can see that the value of the phase angle can be calculated from the equation:
Current in Parallel RL Circuit Example 1
For the parallel RL circuit shown in Figure 3, determine:
1. Current flow through the resistor.
2. Current flow through the inductor.
3. The total line current.
4. The phase angle between the voltage and total current flow.
5. Express all currents in polar notation.
6. Use a calculator to convert all currents to rectangular notation.
Figure 3 Circuit for example 1.
Solution:
$\text{a}\text{. }{{\text{I}}_{\text{R}}}\text{=}\frac{\text{E}}{\text{R}}\text{=}\frac{\text{120V}}{\text{30 }\!\!\Omega\!\!\text{ }}\text{=4A}$
$\text{b}\text{. }{{\text{I}}_{\text{L}}}\text{=}\frac{\text{E}}{{{\text{X}}_{\text{L}}}}\text{=}\frac{\text{120V}}{\text{40 }\!\!\Omega\!\!\text{ }}\text{=3A}$
$\text{c}\text{. }{{\text{I}}_{\text{T}}}\text{=}\sqrt{\text{I}_{\text{R}}^{\text{2}}\text{+I}_{\text{L}}^{\text{2}}}\text{=}\sqrt{{{\text{4}}^{\text{2}}}\text{+}{{\text{3}}^{\text{2}}}}\text{=5A}$
$d.\theta ={{\tan }^{-1}}\left( \frac{{{I}_{L}}}{{{I}_{R}}} \right)={{\tan }^{-1}}\left( \frac{3}{4} \right)={{36.9}^{o}}$
$\begin{matrix}\text{e}\text{. }{{\text{I}}_{\text{T}}}\text{=5}\angle \text{-36}\text{.}{{\text{9}}^{\text{o}}} & {{\text{I}}_{\text{R}}}\text{=4}\angle {{\text{0}}^{\text{o}}} & {{\text{I}}_{\text{L}}}\text{=3}\angle \text{-9}{{\text{0}}^{\text{o}}} \\\end{matrix}$
$\text{f}\text{.}\begin{matrix}\text{ }{{\text{I}}_{\text{T}}}\text{=4-j3} & {{\text{I}}_{\text{R}}}\text{=4+j0} & {{\text{I}}_{\text{L}}}\text{=0-j3} \\\end{matrix}$
## Parallel RL Circuit Impedance
The impedance (Z) of a parallel RL circuit is the total opposition to the flow of current. It includes the opposition (R) offered by the resistive branch and the inductive reactance (XL) offered by the inductive branch.
The impedance of a parallel RL circuit is calculated similarly to a parallel resistive circuit. However, since XL and R are vector quantities, they must be added vectorially. As a result, the equation for the impedance of a parallel RL circuit consisting of a single resistor and inductor is:
Where the quantity in the denominator is the vector sum of the resistance and inductive reactance. If there is more than one resistive or inductive branch, R and XL must equal the total resistance or reactance of theses parallel branches.
When the total current (IT) and the applied voltage are known, the impedance is more easily calculated using the Ohm’s law as follows:
The impedance of a parallel RL circuit is always less than the resistance or inductive reactance of any one branch. This is because each branch creates a separate path for current flow, thus reducing the overall or total circuit opposition to the current flow.
The branch that has the greater amount of current flow (or lesser amount of opposition) has the most effect on the phase angle. This is the opposite of a series RL circuit. In a parallel RL circuit, if XL is larger than R, the resistive branch current is greater than the inductive branch current so the phase angle between the applied voltage and total current is closer to 0 degrees (more resistive in nature).
Impedance in Parallel RL Circuit Example 2
For the parallel RL circuit shown in Figure 4, determine:
1. Impedance (Z) based on the given R and XL values.
2. Current flow through the resistor and inductor.
3. The total line current.
4. Impedance (Z) based on the total current (IT) and the applied voltage values.
Figure 4 Circuit for example 2.
Solution:
$Z=\frac{R{{X}_{L}}}{\sqrt{{{R}^{2}}+X_{L}^{2}}}=\frac{50\times 80}{\sqrt{{{50}^{2}}+{{80}^{2}}}}=42.4\Omega$
\begin{align}& {{I}_{R}}=\frac{E}{R}=\frac{100V}{50\Omega }=2A \\& {{I}_{L}}=\frac{E}{{{X}_{L}}}=\frac{100V}{80\Omega }=1.25A \\\end{align}
${{I}_{T}}=\sqrt{I_{R}^{2}+I_{L}^{2}}=\sqrt{{{2}^{2}}+{{1.25}^{2}}}=2.36A$
$Z=\frac{E}{{{I}_{T}}}=\frac{100V}{2.36A}=42.4\Omega$
## Power in Parallel RL Circuit
In the parallel RL circuit, the VA (apparent power) includes both the watts (true power) and the VARs (reactive power), as shown in Figure 5. The true power (W) is that power dissipated by the resistive branch, and the reactive power (VARs) is the power that is returned to the source by the inductive branch.
The relationship of VA, W, and VARs is the same for the RL parallel circuit as it is for the RL series circuit. The following is a summary of these formulas:
• The true power in watts is equal to the voltage drop across the resistor times the current flowing through it:
• The reactive power in VARs is equal to the voltage drop across the inductor times the current flowing through it:
• The apparent power in VA is equal to the applied voltage times the total current:
Figure 5 Power components of a RL parallel circuit.
Figure 6 shows the power triangle for a RL parallel circuit. Apply the Pythagorean theorem, and the various power components can be determined using the following equations:
Figure 6 Power triangle for a RL parallel circuit.
## Power Factor in Parallel RL Circuit
Power factor (PF) in a RL parallel circuit is the ratio of true power to the apparent power just as it is in the series RL circuit. There are, however, some differences in the other formulas used to calculate power factor in the series and parallel RL circuits.
In a series RL circuit, the power factor could be found by dividing the voltage drop across the resistor by the total applied voltage. In a parallel circuit the voltage is the same but the currents are different, and power factor can be calculated using the formula
Another power factor formula that is different involves resistance and impedance. In the parallel RL circuit, the impedance will be less than the resistance. Therefore, when PF is computed using resistance and impedance, the formula used is
Parallel RL Circuit Calculations Example 3
For the parallel RL circuit shown in Figure 7, determine:
1. Current flow through the resistor.
2. True power in watts.
3. Current flow through the inductor.
4. Reactive power in VARs.
5. Inductance of the inductor.
6. Total current flow.
7. Circuit impedance.
8. Apparent power in VA.
9. Power factor.
10. The circuit phase angle θ.
Figure 7 Circuit for example 3.
Solution:
• Step 1. Make a table and record all known values.
• Step 2. Calculate the current through the resistor and enter the value in the table.
• Step 3. Calculate the true power and enter the value in the table.
• Step 4. Calculate the current through the inductor and enter the value in the table.
• Step 5. Calculate the reactive power and enter the value in the table.
• Step 6. Calculate the inductance of the inductor and enter the value in the table.
• Step 7. Calculate the total current and enter the value in the table.
• Step 8. Calculate the impedance and enter the value in the table.
• Step 9. Calculate the apparent power and enter the value in the table.
• Step 10. Calculate the power factor and enter the value in the table.
• Step 11. Calculate the circuit phase angle θ and enter the value in the table.
Review Questions
1. List three characteristics of the voltage across each branch of a parallel RL circuit.
2. In a parallel RL circuit the total current is equal to the vector sum rather than the arithmetic sum. Why?
3. What is used as the reference vector in the vector diagram of a parallel RL circuit?
4. Assume the resistive element of a parallel RL circuit is increased. Will this cause the phase angle of the circuit to increase or decrease? Why?
5. In a parallel RL circuit the impedance or total opposition is always less than that of the individual resistance or inductive reactance. Why?
6. Define the terms apparent power, reactive power, and true power as they apply to the parallel RL circuit.
7. Current measurements of a parallel RL circuit indicate a current flow of 2 amperes through the resistive branch and 4 amperes through the inductive branch. Determine:
1. The value of the total current flow.
2. The phase angle between the voltage and total current.
8. For the parallel RL circuit shown in Figure 8, determine:
1. Apparent power.
2. True power.
3. Reactive power.
4. Circuit power factor.
Figure 8 Circuit for review question 8.
9. Complete a table for all given and unknown quantities for the parallel RL circuit shown in Figure 9.
Figure 9 Circuit for review question 9.
1. The voltage across each branch of a parallel RL circuit is the same value, equal in value to the total applied voltage, ET, and in phase with each other.
2. The total current in a parallel RL circuit is equal to the vector sum of the branch currents because the branch currents are out of phase with each other.
3. The reference vector in a parallel RL circuit is the applied voltage E.
4. If the resistive element of a parallel RL circuit is increased the resistive current will be decreased and the phase angle will be increased because the circuit is now more inductive.
5. Each branch creates a separate path for current flow thus acting to reduce the overall or total circuit resistance.
6. In the parallel RL circuit, the VA (apparent power) includes both the Watts (true power) and the VARs (reactive power), the true power (Watts) is that power dissipated by the resistive branch, and the reactive power (VARs) is the power that is returned to the source by the inductive branch.
7. (a) 4.47 A, (b) 63.4°
8. (a) 3120 VA, (b) 2880 W, (c) 1200 VARs, (d) 92.3% lagging
E I R /XL /Z W/VA/VARs PF R 100 V 10 A 10 Ω 1000 W 0 L 100 V 5 A 20 Ω 500 VARs 90 Total 100 V 11.2 A 8.93 Ω 1120 VA 26.8 89.3%
Did you find apk for android? You can find new Free Android Games and apps.
|
2020-01-29 17:03:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6190882325172424, "perplexity": 850.6148189495766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251801423.98/warc/CC-MAIN-20200129164403-20200129193403-00028.warc.gz"}
|
http://en.wikipedia.org/wiki/Ultracold_neutrons
|
# Ultracold neutrons
Ultracold neutrons (UCN) are free neutrons which can be stored in traps made from certain materials. The storage is based on the reflection of UCN by such materials under any angle of incidence.
## Properties
The reflection is caused by the coherent strong interaction of the neutron with atomic nuclei. It can be quantum-mechanically described by an effective potential which is commonly referred to as the Fermi pseudo potential or the neutron optical potential. The corresponding velocity is called the critical velocity of a material. Neutrons are reflected from a surface if the velocity component normal to the reflecting surface is less or equal the critical velocity.
As the neutron optical potential of most materials is below 300 neV, the kinetic energy of incident neutrons must not be higher than this value to be reflected under any angle of incidence, especially for normal incidence. The kinetic energy of 300 neV corresponds to a maximum velocity of 7.6 m/s or a minimum wavelength of 52 nm. As their density is usually very small, UCN can also be described as a very thin ideal gas with a temperature of 3.5 mK.
Due to the small kinetic energy of an UCN, the influence of gravitation is significant. Thus, the trajectories are parabolic. Kinetic energy of an UCN is transformed into potential (height) energy with ~102 neV/m.
The magnetic moment of the neutron, produced by its spin, interacts with magnetic fields. The total energy changes with ~60 neV/T.
## History
It was Enrico Fermi who realized first that the coherent scattering of slow neutrons would result in an effective interaction potential for neutrons traveling through matter, which would be positive for most materials.[1] The consequence of such a potential would be the total reflection of neutrons slow enough and incident on a surface at a glancing angle. This effect was experimentally demonstrated by Fermi and Walter Henry Zinn [2] and Fermi and Leona Marshall.[3] The storage of neutrons with very low kinetic energies was predicted by Yakov Borisovich Zel'dovich[4] and experimentally realized simultaneously by groups at Dubna [5] and Munich.[6]
## Reflecting materials
Material: VF[7] vC[8] η (10−4)[8] Beryllium 252 neV 6.89 m/s 2.0-8.5 BeO 261 neV 6.99 m/s Nickel 252 neV 6.84 m/s 5.1 Diamond 304 neV 7.65 m/s Graphite 180 neV 5.47 m/s Iron 210 neV 6.10 m/s 1.7-28 Copper 168 neV 5.66 m/s 2.1-16 Aluminium 54 neV 3.24 m/s 2.9-10
Any material with a positive neutron optical potential can reflect UCN. The table on the right gives an (incomplete) list of UCN reflecting materials including the height of the neutron optical potential (VF) and the corresponding critical velocity (vC). The height of the neutron optical potential is isotope-specific. The highest known value of VF is measured for 58Ni: 335 neV (vC=8.14 m/s). It defines the upper limit of the kinetic energy range of UCN.
The most widely used materials for UCN wall coatings are Beryllium, Beryllium oxide, Nickel (including 58Ni) and more recently also diamond-like carbon (DLC).
Non-magnetic materials such as DLC are usually preferred for the use with polarized neutrons. Magnetic centers in e.g. Ni can lead to de-polarization of such neutrons upon reflection. If a material is magnetized, the neutron optical potential is different for the two polarizations, caused by
$V_F(pol.)=V_F(unpol.)\pm\mu_N\cdot B$
where $\mu_N$ is the magnetic moment of the neutron and $B=\mu_0\cdot M$ the magnetic field created on the surface by the magnetization.
Each material has a specific loss probability per reflection,
$\mu(E,\theta)=2\eta\sqrt{\frac{E\cos^2\theta}{V_F-E\cos^2\theta}}$
which depends on the kinetic energy of the incident UCN (E) and the angle of incidence (θ). It is caused by absorption and thermal upscattering. The loss coefficient η is energy-independent and typically of the order of 10−4 to 10−3.
## Experiments with UCN
The production, transportation and storage of UCN is currently motivated by their usefulness as a tool to determine properties of the neutron and to study fundamental physical interactions. Storage experiments have improved the accuracy or the upper limit of some neutron related physical values.
### Measurement of the neutron lifetime
Today's world average value for the neutron lifetime is $885.7\pm0.8~s$,[9] to which the experiment of Arzumanov et al.[10] contributes strongest. Ref.[10] measured $\tau_n=885.4\pm0.9_{\mathrm{stat}}\pm0.4_{\mathrm{syst}}~s$ by storage of UCN in a material bottle covered with Fomblin oil. Using traps with different surface to volume ratios allowed them to separate storage decay time and neutron lifetime from each other. There is another result, with even smaller uncertainty, but which is not included in the World average. It was obtained by Serebrov et al.,[11] who found $878.5~\pm0.7_{\mathrm{stat}}\pm0.4_{\mathrm{syst}}~s$. Thus, the two most precisely measured values deviate by 5.6σ
### Measurement of the neutron electric dipole moment
The neutron electric dipole moment (nEDM) is a measure for the distribution of positive and negative charge inside the neutron. No nEDM has been found until now (May 2008). Today's lowest value for the upper limit of the nEDM was measured with stored UCN (see main article).
### Observation of the gravitational interactions of the neutron
Physicists have observed quantized states of matter under the influence of gravity for the first time. Valery Nesvizhevsky of the Institute Laue-Langevin and colleagues found that cold neutrons moving in a gravitational field do not move smoothly but jump from one height to another, as predicted by quantum theory. The finding could be used to probe fundamental physics such as the equivalence principle, which states that different masses accelerate at the same rate in a gravitational field (V Nesvizhevsky et al. 2001 Nature 415 297).
### Measurement of the A-coefficient of the neutron beta decay correlation
The only reported measurement of the beta-asymmety is from the Los Alamos group. Here's the first paper using UCN to complete a measurement of A using UCN was a 4.5% measurement reported in http://prl.aps.org/abstract/PRL/v102/i1/e012301 the next result from the LANSCE group is also a PRL first authored by Jianglai Liu in 2010 which was roughly 1.5% and a new result will be out in the coming year.
## References
1. ^ E. Fermi, Ricerca Scientifica 7 (1936) 13
2. ^ E. Fermi, W.H. Zinn, Phys. Rev. 70 (1946) 103
3. ^ E. Fermi, L. Marshall, Phys. Rev. 71 (1947) 666
4. ^ Ya.B. Zeldovich, Sov. Phys. JETP-9 (1959) 1389
5. ^ V.I. Lushikov et al., Sov. Phys. JETP Lett. 9 (1969) 23
6. ^ A. Steyerl, Phys. Lett. B29 (1969) 33
7. ^ R. Golub, D. Richardson, S.K. Lamoreaux, Ultra-Cold Neutrons, Adam Hilger (1991), Bristol
8. ^ a b V.K. Ignatovich, The Physics of Ultracold Neutrons, Clarendon Press (1990), Oxford, UK
9. ^ W.-M. Yao et al. (Particle Data Group), J. Phys. G 33, 1 (2006) and 2007 partial update for edition 2008 (URL: http://pdg.lbl.gov)
10. ^ a b S. Arzumanov, L. Bondarenko, S. Chernyavsky, W. Drexel et al., Phys. Lett. B 483 (2000) 15
11. ^ A. Serebrov, V. Varlamov, A. Kharitonov, A. Fomin et al., Phys. Lett. B 605 (2005) 72
|
2014-07-25 02:12:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7703120708465576, "perplexity": 1988.6209286334113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997892648.47/warc/CC-MAIN-20140722025812-00224-ip-10-33-131-23.ec2.internal.warc.gz"}
|
https://www.varsitytutors.com/act_math-help/how-to-find-range
|
# ACT Math : How to find range
## Example Questions
### Example Question #92 : Statistics
What is the range of the grades Helen received on all three exams?
84
82
12
9
7
7
Explanation:
The range of a set of data is the difference between the highest and lowest values in the set.
Highest = 86
Lowest = 79
Range = 86 - 79 = 7
### Example Question #1 : How To Find Range
If train A heads from Albuquerque, New Mexico towards Phoenix, Arizona, at 50 miles an hour and train B heads from Phoenix to Albuquerque at 70 miles an hour leaving at the same time at 4:00PM, at what time will the trains intersect taking the same path, assuming the distance between the two cities is 450 miles.
Between 9:00 and 10:00 PM
Between 8:00 and 9:00 PM
Between 6:00 and 7:00 PM
Between 7:00 and 8:00 PM
Between 7:00 and 8:00 PM
Explanation:
A quick look at the answer choices tells you that the question is not looking for the exact time of intersection, but rather between what two hours the trains will intersect. The best approach to a problem like this is to quickly make a table listing distance traveled of each train and the corresponding time. An example is illustrated below.
We see at 7:00 PM the trains have not yet intersected (and are actually 90 miles apart, | miles.
At 8:00 PM the trains have traveled a total of 480 miles together, and thus must have already intersected. Therefore, between 7:00 and 8:00 PM is the correct answer.
Questions such as these should take no longer than twenty seconds, as they only require simple arithmetic. They tend to be more intimidiating than difficult, as well as many other questions on the ACT.
### Example Question #1 : How To Find Range
Considering the entire menu shown for Lena’s Italian Kitchen, what is the range of the prices of the many food options?
$3.50$4.50
$1.50$7.00
$6.09 Correct answer:$7.00
Explanation:
The range is difference between the largest value and the smallest value.
The largest value is the ravioli for $10.00 The smallest value is the house salad for$3.00
The range = $10.00 -$3.00 = \$7.00
### Example Question #2811 : Act Math
Find the range of the following set of numbers:
Explanation:
Range is the difference between the largest number in the set and the smallest number in the set. Our largest number in the set is and our smallest number is .
### Example Question #3 : Range
The following is a list of the amount of students in each of five 3rd grad classrooms:
23, 27, 19, 31, 33. What is the range of the number of students in each classroom?
Explanation:
The range is the highest minus the lowest value in a given data set contains. Thus the low end of the range is the low number, or 19 students, and the high end of the range is the high number, or 33.
Subtracting the lowest value from the highest we get,
.
Therefore the range of the data set is .
### Example Question #2 : How To Find Range
Find the range of the following set of numbers.
Explanation:
To find range, simply subtract the smallest number from the largest number. Thus,
### Example Question #2 : Range
Find the range of the following set of numbers:
Explanation:
To find the range, simply take the difference of the smallest and largest numbers.
The largest number is, and the smallest number is .
Thus,
.
### Example Question #1 : Range
Find the range of the following set of numbers:
|
2017-10-23 08:03:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4762500822544098, "perplexity": 764.9784477401723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825812.89/warc/CC-MAIN-20171023073607-20171023093607-00700.warc.gz"}
|
https://www.getabarcode.co.uk/isbn-image.html
|
ISBN Image
More Views
ISBN Image
£3.99
Availability: In stock
Already have your ISBN number but need the image? We can provide the image for you in 2 formats, namely png and eps. Images will be forwarded the same day that the order is completed.
£3.99
Description
Details
Already have your ISBN number but need the image? We can provide the image for you in 2 formats, namely png and eps. Images will be forwarded the same day that the order is completed.
Reviews
£4.49
£14.99
£3.99
Product Tags
Use spaces to separate tags. Use single quotes (') for phrases.
|
2019-02-20 06:16:36
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9602605104446411, "perplexity": 5511.709681217419}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494449.56/warc/CC-MAIN-20190220044622-20190220070622-00414.warc.gz"}
|
http://mlxplore.lixoft.com/case-studies/using-delay-in-modeling-the-example-of-rheumatoid-arthritis/
|
Select Page
# Using delays, the example of rheumatoid arthritis
### Introduction
Rheumatoid arthritis (RA) is an immune-mediated inflammatory disease and is characterized by a chronic inflammation and synovial hyperplasia leading to the destruction of cartilage and bone. Approximately one percent of the world-wide population suffers from RA. The model is presented in a poster published at PAGE in 2011 by Gilbert Koch. The research goal is to develop a multi response model to describe the time course of the total arthritic score and the strongly delayed ankylosis score measured in collagen induced arthritic (CIA) mice. The authors used a three compartment delay differential equation model to get a deeper understanding between cytokine level, inflammation and bone destruction.
### A multi-response model for rheumatoid arthritis
The modeling part was done in three steps. We will follow the steps presented by Koch in his PhD thesis.
Modeling Step 1: The cytokine behavior in time.
The cytokine GM-CSF (noted $G$) drives the disease. Its kinetics is described by:
$\frac{dG}{dt} = k_3-\frac{k_1}{k_2}(1-e^{-k_2t})G(t) - E(Cc(t))G(t)$
The last term models the effect (function $E$) of the anti-GM-CSF antibody concentration ($Cc$) on the cytokine GM-CSF level $G$. The effect of the antibody is described by the following expression: $E(Cc)=(\sigma_1*\exp(- \sigma_2*Cc) + \sigma_3)*Cc$.
The concentration of the anti-body is described by a 2 compartment model with bolus administration and linear elimination.
Modeling Step 2: Multi-response approach to model the TAS and AKS
The mathematical model of the arthritic disease is split into an inflammatory part $I(t)$ and an ankylosis (bone and cartialge destruction) part $D(t)$ and the sum $R_1(t) = I(t) + D(t)$ defines the response $R_1(t)$, which corresponds to the experimentally measured total arthritic score (TAS). In addition a second response function is defined as $R_2(t) = D(t)$ which corresponds to the ankylosis score (AKS).
To build a model for the time course of the inflammation $I(t)$ and the ankylosis $D(t)$, the concept of lifespan modeling is introduced: the overall inflammation $I(t)$ is controlled by two processes, the inflow $k_{in}(t)$ and the outflow $k_{out}(t)$. Assuming that the inflammation caused by these processes remains a certain time period $\tau$ and is driven by the amount of GM-CSF, one obtains $k_{out}(t) = k_{in}(t-\tau)$ and $k_{in}(t) = k_4 G(t)$ where $k_4$ is a first order rate constant. Then the total balance equation for the inflammation reads:
$\frac{dI}{dt} (t) = k_4G(t) - k_4G(t - \tau)$
Similarly, for the ankylosis $D(t)$ one obtains $k_{in}(\textrm{ankyloses}) = k_{out}(\textrm{inflammation})$. Applying a first order loss term $k_{out}(\textrm{ankyloses}) = k_5D(t)$ leads to the equation:
$\frac{dD}{dt} (t) = k_4G(t-\tau) - k_5G(t)$
The presence of $G(t)$ and $G(t - \tau)$ reflects that the inflammation and the ankylosis are driven by GM-CSF. Moreover, the action of GM-CSF on the ankylosis is delayed by $\tau$. A schematic view of the model, proposed in Koch’s PhD thesis, is given here:
### How to model it in Mlxtran
The purpose here is to define the model in Mlxtran language. The system writes as a PKPD model with a DDE. The resulting set of parameters is (alpha, beta, CL, V1, V2) for the PK model, (sigma1, sigma2, sigma3) for the effect model, and (k1, k2, k3, k4, k5, tau) for the RA model along with 3 additional initial conditions (I0, a,b). Therefore, the [LONGITUDINAL] subsection starts with:
[LONGITUDINAL]
input = {a, b, I0, alpha, beta, CL, V1, V2, sigma1, sigma2, sigma3, k1, k2, k3, k4, k5, tau}
Then, we start the EQUATION: block with the initial conditions:
EQUATION:
; initialization of the time
t0 = 0
; initialization of the variables of interest
I_0 = I0
D_0 = 0
G_0 = a*exp(b*t)
Notice that the initialization of the variable G is not only at time 0 but also across the past $[-\tau, 0]$. One continues with the PKPD model:
K12 = alpha*beta*V2/CL
K21 = alpha*beta*V1/CL
Cc = pkmodel(k12=K12,k21=K21,V=V1,Cl=CL)
E = (sigma1*exp(- sigma2*Cc) + sigma3)*Cc
and the ODE/DDE equations along with the definition of the TAS and the AKS:
ddt_G = k3 - E*G - (k1/k2)*(1- exp(- k2*t))*G
ddt_I = k4*G - k4*delay(G,tau)
ddt_D = - k5*D + k4*delay(G,tau)
TAS = I+D
AKS = D
Finally, the individual parameters representing the variability of the delay are defined as a normal distribution and one writes:
[INDIVIDUAL]
input = {tau_pop, omega_tau}
DEFINITION:
tau = {distribution = normal, mean = tau_pop, sd = omega_tau}
The model is then the sum of all this code and is implemented in arthritisModel_mlxt.txt
### Model and treatment exploration: project definition
To define the project, one must define the model (done in the previous paragraph, section <MODEL>), the parameter values (section <PARAMETER>) and the output (section <OUTPUT>). To explore the model, we define in the section <DESIGN> three administrations where we vary the amount of antibody.
• “Trt_0”, where we administrate nothing
• “Trt_1”, where we administrate 1 mg/kg (and thus 70mg) each week during 5 weeks
• “Trt_1”, where we administrate 10 mg/kg (and thus 700mg) each week during 5 weeks
Finally, a section is added to define the graphics we want to look at. The project is implemented in arthritisModel_mlxplore.txt as follows
;model definition
<MODEL>
file='./arthritisModel_mlxt.txt'
<DESIGN>
trt_0 = {time = {1, 8, 15, 22, 29}, amount = 0}
trt_1 = {time = {1, 8, 15, 22, 29}, amount = 70}
trt_10 = {time = {1, 8, 15, 22, 29}, amount = 700}
;parameter initial values
<PARAMETER>
; Initialization parameters
a = 1
b = 0.5
I0 = 2.52
; PK model
alpha = 0.02327
beta = .045
CL = 2.5
V1 = 15
V2 = 25
; Effect model
sigma1 = 0.154
sigma2 = 0.065
sigma3 = 0.003
; Arthritis model parameters
k1 = 0.183
k2 = 0.092
k3 = 5
k4 = 0.064
k5 = 0.016
tau_pop = 11.2
omega_tau = 3
;prediction outputs and grid
<OUTPUTS>
list={TAS,AKS, G, I, D,Cc}
grid=0:.01:40
<RESULTS>
[GRAPHICS]
pTAS = {y={TAS}, ylabel = 'Total Arthritic Score I(t)+D(t)', xlabel = 'time (days)'}
pAKS = {y={AKS}, ylabel = 'Ankylosis Score D(t)', xlabel = 'time (days)'}
pG = {y={G}, ylabel = 'GM-CSF G(t)', xlabel = 'time (days)'}
pI = {y={I}, ylabel = 'Inflammation I(t)', xlabel = 'time (days)'}
gridarrange(pTAS, pAKS, pG, pI, 2,2)
<SETTINGS>
[GRAPHICS]
nb_simulations = 200
### Model exploration: graphical results
First we can explore the predictions following the 3 different treatments:
Then, if one want to explore the impact on the inter-individual variability on the delay, one can click on iiv leading to the following figure:
### Conclusion
In this example, we have implemented a model with ODEs, and DDEs with complex initial conditions using Mlxtran. Using Mlxplore, we have explored the influence of different treatments on the predictions. Mlxplore also enables to visualize the inter-individual variability.
|
2017-12-13 18:36:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 29, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6193447113037109, "perplexity": 4588.683124475839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948530668.28/warc/CC-MAIN-20171213182224-20171213202224-00109.warc.gz"}
|
https://www.ncatlab.org/nlab/show/internal+hom+of+algebras+over+a+commutative+monad
|
# nLab internal hom of algebras over a commutative monad
Contents
### Context
#### Mapping spaces
internal hom/mapping space
### Geometric homotopy theory
#### Higher algebra
higher algebra
universal algebra
## Theorems
#### Monoidal categories
monoidal categories
## In higher category theory
#### 2-Category theory
2-category theory
Definitions
Transfors between 2-categories
Morphisms in 2-categories
Structures in 2-categories
Limits in 2-categories
Structures on 2-categories
# Contents
## Idea
Algebras over a commutative monad, in many cases, admit an internal hom analogous to the one of modules over a commutative ring.
On a monoidal closed category, this internal hom is in many cases right-adjoint to a tensor product, giving a hom-tensor adjunction for the algebras too. Again, this generalizes the case of modules.
## Construction
Let $C$ be a closed category with equalizers, denote its unit by $1$ and its internal homs by $[X,Y]$. Let also $(T,\mu,\eta)$ be a commutative monad as defined for closed categories (here), with strength $t':[X,Y]\to [T X, T Y]$ and costrength $s':T[X,Y]\to [X, T Y]$.
Let now $(A,a)$ and $(B,b)$ be $T$-algebras. Thanks to the costrength, the internal hom $[A,B]$ of $C$ has a canonical “pointwise” $T$-algebra structure,
$T[A,B] \to [A, T B] \to [A,B].$
(See here for the details.)
The internal hom of $A$ and $B$ in $C^T$ is defined to be the equalizer of the following parallel pair: where $a^*:[A,B]\to [T A,B]$ is the internal precomposition with $a:T A\to A$, and $b_*:[T A,T B]\to [T A,B]$ is the internal postcomposition with $b:T B\to B$.
Denote this object by $[A,B]_T$.
(See Brandenburg, Remark 6.4.1, as well as the original work Kock ‘71, Section 2.)
###### Theorem
(Kock ‘71, Theorem 2.2) Let $C$ be a closed category with equalizers, and $(T,\mu,\eta)$ a commutative monad on $C$. Then $[-,-]_T$ makes the Eilenberg-Moore category $C^T$ itself a closed category, with
• unit object given by the free algebra $T1$, where $1$ is the unit object of $C$;
• All the structure maps induced by those of $C$.
### Interpretation
The internal hom $[A,B]$ of $A$ and $B$ in $C$ can be thought of as “containing all the morphisms $A\to B$ of $C$”. However, not all those morphisms are necessarily morphisms of $T$-algebras. A map $f:A\to B$ is a morphism of algebras if and only if $b\circ T f = f\circ a$. In terms of internal homs, this condition is exactly given by the parallel maps above. The equalizer of that pair can be thought of as of containing “all the maps satisfying that condition”.
## Examples and implications
The fact that “linear maps form themselves a vector space” is a general phenomenon for commutative monads on categories with equalizers. For instance,
• For the case of the distribution monad, this says that convex combinations of affine functions are affine;
• For the case of the power set monad, this says that pointwise suprema of join-preserving maps between join-semilattices are join-preserving.
## On monoidal closed categories
If the category $C$ is monoidal closed, under some condition the internal hom of algebras is part of a closed monoidal structure on the algebras themselves. See here for more information.
## References
• Martin Brandenburg, Tensor categorical foundations of algebraic geometry (arXiv:1410.1716)
• William Keigher, Symmetric monoidal closed categories generated by commutative adjoint monads, Cahiers de Topologie et Géométrie Différentielle Catégoriques, 19 no. 3 (1978), p. 269-293 (NUMDAM, pdf)
• Anders Kock, Monads on symmetric monoidal closed categories, Arch. Math. 21 (1970), 1–10.
• Anders Kock, Strong functors and monoidal monads, Arhus Universitet, Various Publications Series No. 11 (1970). PDF.
• Anders Kock, Closed categories generated by commutative monads, 1971 (pdf)
• Gavin J. Seal, Tensors, monads and actions (arXiv:1205.0101)
Last revised on February 3, 2020 at 14:40:31. See the history of this page for a list of all contributions to it.
|
2022-10-07 19:38:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 40, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8677528500556946, "perplexity": 909.20004650902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338244.64/warc/CC-MAIN-20221007175237-20221007205237-00383.warc.gz"}
|
https://physics.stackexchange.com/questions/147044/how-to-calculate-excluded-volume-in-onsagers-hard-rod-model
|
# How to calculate excluded volume in Onsager's hard-rod model?
Can somebody please provide a derivation of how to calculate excluded volume of two rods with angle of intersection being $\gamma$. rods are cylinders, capped with semi-spheres. Onsager theory of hard rods is based on this and I cant seem to find a derivation for excluded volume.
In order to calculate the excluded volume between two spherocylindric rods, the relative angle between them changes the base area of the excluded volume, usually taken as a parallelogram, and the thickness of rods, i.e. $D$ the height of the excluded volume. Consider the picture below, taken from (Basic Concepts for Simple and Complex Liquids, from Jean-Louis Barrat, Jean-Pierre Hansen)
As you can see, we have a base in the form of a parallelogram with area $L^2 \sin\gamma$ and with thickness $2D$, excluded volume: $$V_{ex}=2L^2D|\sin \gamma|.$$
The $2$ comes from the fact that the volume $L^2D|\sin \gamma|$ is excluded for the other rod on both sides.
|
2019-08-18 23:46:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9299682378768921, "perplexity": 463.1125848721363}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314353.10/warc/CC-MAIN-20190818231019-20190819013019-00253.warc.gz"}
|
http://www.vliz.be/wiki/Shelf_sea_exchange_with_the_ocean
|
# Shelf sea exchange with the ocean
Ga naar: navigatie, zoeken
## Introduction
Deep ocean basins have typical depths of 4000 m or more. However, continents are typically surrounded by continental “shelf” above which the depth of the sea is typically 0-200 m. At the edge of the shelf, there is a steep slope down to oceanic depths in most places. The extent of the shelf (sea) varies from zero around atolls and volcanic sea mounts (for example) to tens of kilometres typically but hundreds of kilometres off NW Europe, Argentina and around much of the Arctic Ocean (Figure 1).
Figure 1: Global map showing continental shelf areas in cyan. Public Domain, https://commons.wikimedia.org/w/index.php?curid=617528
Questions of global cycling entail the quantity, transformation and fate of materials carried between the shelf seas and ocean, and hence processes of ocean-shelf transport and exchange. For example, transport from the open ocean across the shelf edge is estimated to bring most of the nitrogen and half of the phosphate used in global shelf-sea export production [1]; these transports support the shelf seas’ enhanced primary production and thence 90% of the world’s commercial fish catch [2]. Hence many ocean-shelf sea interaction studies have taken place (Table 1 lists some), illustrating strong biogeochemical interests but necessary physics underpinning.
Physical processes control the large-scale movement and irreversible small-scale mixing of water and its constituents. At the shelf edge, steep bathymetry may inhibit ocean-shelf exchange: integration of the momentum equation through depth and around a depth contour shows that net cross-contour transport is entirely attributable to ageostrophy (Huthnance[3], Appendix A). That is, transport across depth contours is enabled by (i) time scales of a day or less, or (ii) space scales of a few kilometres or less, or (iii) depth contours changing direction on similarly short space scales or (iv) enough friction to stop the flow in a day or so.
The combination of sloping topography and stratification gives rise to shelf-edge processes satisfying one or more of the criteria (i-iv). These include coastal-trapped waves; instability, meanders and friction-induced Ekman transport associated with along-slope currents; eddies; upwelling, fronts and filaments: down-welling, cascading; tides, surges; internal tides and waves; surface waves; topography (capes and canyons).
In view of the complex set of processes, often complex topography and the small scales favouring cross-slope flow, our approach to better representation of ocean-shelf interaction is to develop models with fine resolution (of order 1 km). These then need testing against detailed measurements in areas with contrasting conditions (hence varied processes).
The above processes occur with varying intensity according to context, but some (e.g. wind-induced upwelling or down-welling, tides) enable cross-slope transports of order $1 \; m^2/s$ or more (per metre of shelf edge). This is small compared with transports of order $100 \; m^2/s$ (per metre width) in strong ocean currents such as the Gulf Stream. However, the global length of shelf edge, $O(5 \times 10^5 km)$ [4], gives a global aggregate cross-slope transport probably exceeding the transport in any ocean current.
In the following, typical estimates of exchanges are given in $m^2/s$: volume per second across a 1 m sector of shelf edge. These are the same units as for dispersion coefficient $K$, a different quantity but related: $K dC/dx$ or $K \Delta C / L_x$ represents transport $|u'C'|$ due to unresolved fluctuations of current $u'$ and constituent $C'$. Thus:
$K \sim | u'| L_x$ and exchange $|u'|h/2 \sim Kh/2L_x \qquad (1) ,$
with $h, \; L_x$ respectively the depth at the shelf edge and the cross-shelf scale (e.g. the shelf width). If as occasionally below we estimate $K$, then a (dimensionless) factor $h/2L_x$ is applicable to estimate the associated exchange.
## Processes
### Coastal-trapped waves
These are the basic waves that travel along the continental shelf and slope. Scales are typically one to several days and tens to hundreds of kilometres according to the width of the continental shelf and slope. The lowest mode (0) ‘Kelvin’ waves, also coastally trapped, travel cyclonically around ocean basins but with typical scales of thousands of kilometres both alongshore and for offshore decrease of properties.
Figure 2. Left panel: Sketch of form of elevation for mode 1 coastal-trapped wave. Right panel: Sketch of cross-slope structure of mode 3 coastal-trapped wave. The 3 lines sloping away from the sea floor denote zeros of pressure.
Figure 3: Sketch showing main sense of coastal-trapped wave propagation around ocean boundaries. Energy (but not phase) may travel in the reverse sense of the small arrows.
Coastal-trapped waves have been widely observed, along coastlines of various orientations and all continents in both the Northern and Southern Hemispheres. Mode 1 with simple structure (usually one offshore zero of bottom pressure; figure 2 left panel) has been most often identified; its peak coastal elevation is relatively easily measured. Higher modes (with more offshore zeros; figure 2 right panel) have been identified off Oregon, the Middle Atlantic Bight and New South Wales (Australia).
Coastal-trapped waves are not an independent cause of transports; their significance is in propagating shelf-wide day-to-day variations. Examples are:
• oceanic tides with large coastal sea-level variations and (on broad shelves) large currents;
• storm surges; strong currents and large changes of sea level induced by weather (atmospheric pressure and especially winds);
• wind-forced upwelling;
• along-slope currents and poleward undercurrents common on the eastern sides of oceans;
• responses to oceanic eddies and alongshore pressure gradients.
Thus transports induced in one location may appear later at another. The sense of phase propagation is cyclonic around an ocean basin (anti-clockwise in the northern hemisphere; clockwise in the southern hemisphere; figure 3). However, short-wave energy may travel slowly in the opposite sense, especially if stratification is weak [3].
### Along-slope currents
Figure 4: Sea-Surface Temperature showing two warm rings drawing colder shelf water across the shelf edge (black line) on their eastern flanks. At http://oceancurrents.rsmas.miami.edu/glossary.html
Figure 5: The poleward Leeuwin current off Western Australia shown by warm sea-surface temperature. At https://en.wikipedia.org/wiki/Leeuwin_Current.
Western boundary currents such as the Agulhas, Brazil Current, Gulf Stream and Kuroshio can be strong, sufficient to form meanders and eddies in some locations (e.g. Gulf Stream Rings; figure 4). They may also extend to the bottom to give a frictional cross-slope Ekman transport. Reviews of the contribution of Gulf Stream Warm-Core Rings to shelf/slope water exchange [5][3] suggest an average $O(0.5 \; m^2/s)$ north-east of the separation of the Gulf Stream from the shelf.
Along-slope currents are common at eastern ocean margins [6]. Equatorward surface currents are associated with upwelling in the north and south Pacific and Atlantic. Poleward flows over the upper continental slope occur off western USA, western Australia (figure 5) and western Europe, for example, forced by alongshore gradients in adjacent ocean density. In upwelling areas such poleward flow may be an undercurrent not showing at the surface.
Upwelling is a consequence of wind-forced off-shelf surface Ekman transport $\tau_w / \rho f .$ (Here $\tau_w$ is wind stress, $\rho$ is sea density and $f$ is Coriolis parameter.) The associated along-shelf flow may become strong enough to go unstable and develop meanders and filaments (figure 6); these enhance cross-shelf transport beyond $\tau_w / \rho f .$ In any case, flow at the bottom results in a bottom stress $\tau_b$ and corresponding Ekman transport $\tau_b / \rho f .$ which is off-shelf under poleward flow and on-shelf under equatorward flow.
Figure 6: Upwelling of California with meanders and filaments shown by cooler sea-surface temperature. At http://oceansjsu.com/105d/exped_climate/10.html
Figure 7: Fluxes (Sv) above 150m (blue) and below 150m depth (red). All fluxes are across the 200m contour shown; positive is onto the shelf except next to Norway (positive to north). From Huthnance et al.[7]
Given a typical drag coefficient 0.00125 for wind stress over the sea [8] and 0.0025 for currents above the sea bed [9], these Ekman transports are about $1 \; m^2/s$ for a typical wind of 8 m/s or near-bed current 0.2 m/s. For example, modelled 1960-2004 mean down-welling circulation for the north-west European shelf from Brittany to the Norwegian Trench was about 1.2 Sv [7] (figure 7) as a result of winds driving surface waters onto the shelf and bottom frictional Ekman transport off the shelf $(1 Sv= 10^6 \; m^3/s).$ The latter is associated with prevailing poleward flow of order 2 Sv along the continental slope (typically in 200 to 1000 m depth) around Ireland and Scotland.
### Cascading
Figure 8: Cascade off Foxe Basin, based on data from Campbell [10]: salinity (solid lines) and temperature (shaded areas) in August–September 1955. From Ivanov et al.[11].
Dense water formed by winter cooling of shallow shelf seas may cascade down the slope under gravity (figure 8), eventually leaving the sloping bottom at its density level. [The excess density may be enhanced if sea ice forms, adding salt to the unfrozen water. Increased density resulting from salination through evaporation can also occur in hot dry conditions.] Typical values of cascading fluxes are estimated as $0.5-1.6 \; m^2/s$ [12], significant when and where they occur but highly intermittent.
### Tides
Tides are a consequence of gravitational forces on the solid earth and especially on the ocean. The forcing is global in scale and mainly acts on wide ocean basins. However, the largest tidal elevations are seen at coasts bounding broad shelf seas, where tidal currents are strongest and much of the global tidal energy dissipation occurs. Thus tides are an example of full ocean – shelf sea interaction, with further amplification on broad shelves.
Consider a uniform shelf of width $W$ and semi-diurnal tidal range $R$. A volume $WR$ per unit length has to be supplied in 6.2 hours to fill the “tidal prism” between low and high tide. If sinusoidal in time, the required tidal current across the shelf edge in depth $h$ reaches $\pi WR/hT$ ($T$ is the semidiurnal tidal period). For example, the Celtic Sea (continental shelf) south-west of the UK is up to 400 km wide with large tidal range, e.g. 4 m at Newlyn. $\pi WR/hT$ with $h$ = 150 m gives about 0.75 m/s. Thus in places the peak ebb and flood currents exceed 0.5 m/s; the corresponding exchanges are as much as $80 \; m^2/s.$
However, semi-diurnal tidal exchanges are reversed every 6.2 hours; this is too soon for much to happen in the water, reducing their effect. Long-term exchange is associated with tides through shear dispersion, coefficient about $10^3|U^2|$ or $500 \; m^2/s$ [13]. From (1) we estimate the associated exchange as $0.1 \; m^2/s$ or more $(h= 150 \; m, \; L_x \le 400 \; km).$
### Surges
This refers to flows driven by weather systems so that spatial scales typically correspond to the width of shelf and time-scales are several hours to a day or two. Shallow shelf seas are relatively easily accelerated. Hence mode 1 coastal-trapped waves tend to be generated on account of their matching scales [3]. Thus longer-period wind forcing tends to be associated with along-slope flow; higher frequencies break the geostrophic constraint to allow cross-slope flow, especially near a mode-1 frequency maximum for which the wave tends to have anti-cyclonically polarised flow near the shelf edge. Otherwise surges tend to be over the shelf, not in deeper water, with maximum elevation at the coast.
### Internal tides and waves
Figure 9: Thermistor chain cross-section of temperature (° C) through internal wave packet over continental slope depth 400 m. Data from RV Colonel Templer from 0100 to 0200 UTC on the 19th August 1995, Hebrides shelf edge west of Scotland near 56.5°N, 9.1°W. At http://www.whoi.edu/science/AOPE/people/tduda/isww/text/small/jsmall.htm (figure 2 from Small et al. [14]).
Internal tides are formed when shelf-edge topography causes vertical displacements in cross-slope tidal flow. If they were entirely linear, then no net transport would result during a tidal cycle. However (i) layer thickness may correlate with oscillatory velocity and (ii) strong cross-slope tidal currents generate large-amplitude internal tides.
(i) For a single layer or a surface wave, this correlation implies an ‘eddy’ transport equivalent to the Stokes Drift. For multiple layers it forms a component of the Stokes Drift. Such ‘eddy’ transport is liable to be offset by opposing mean flow $\hat u$ in the same layer or elsewhere.
(ii) If vertical displacements are a substantial fraction of water depth, then the motion is non-linear, the waves steepen and form one or more solitons (figure 9) which carry water “bodily” as part of the wave form (e.g. Celtic Sea [15], South China Sea [16]). Then the transport can be estimated from the soliton amplitude, length and speed; up to $1 \; m^2/s$ in extreme cases such as the Celtic Sea, NW Australia and Georges Bank (Huthnance[3], reviewing several authors’ work).
Internal waves occur at all frequencies $\sigma$ from the Coriolis frequency $f$ to the buoyancy frequency $N$, and all wave directions. They occur throughout the ocean from wind and tidal forcing; their oceanic spectrum is empirically near-universal except close to sources [17]; the energy density corresponds to an estimated shelf-ward energy fIux of the order of 1 kW/m [18]. Their strong vertical and horizontal currents, and breaking in places, cause turbulent mixing affecting many ocean processes. If the sea-floor slope closely matches the waves’ characteristic slope $[(\sigma^2-f^2)/(N^2-\sigma^2)]^{1/2}$ then the currents may be amplified several times within a bottom boundary layer.
The shelf edge provides an effective source of internal waves: at tidal frequencies as above with higher-frequency contributions if non-linearities are significant; as standing lee waves in longer-period flow along a rough continental slope [19] and analogously around a seamount [20]. Moreover, internal wave motion along characteristics is reflected off the sloping bottom, with a change of wavelength. Internal wave energy may also transmit to the shelf; but reflection is strong if the bottom slope exceeds the characteristic slope for the wave [21]. The most energetic known internal waves are generated in Luzon Strait (NE South China Sea) and cause turbulence exceeding 10,000 times background ocean turbulence [22]
### Surface waves
These may contribute to circulation and exchange via non-linear rectification of their currents. Estimated surface wave currents [23] are $0.015 w - 0.035 w$ under wind speed $w$, decaying with depth on a scale $w/3$ in mks units. The Stokes Drift (Lagrangian – Eulerian) transport, $\lt (\vec X .\vec \nabla) \vec u \gt \sim 0.01 \; w^2$ [where $\vec X = \int \vec u dt$] is thus comparable with the Ekman transport $\tau/ \rho f \sim 0.01 \; w^2$ in mks units. Typically, the waves and Stokes Drift have an on-shelf component owing to the greater scope for generation provided by the off-shelf area of ocean. This transport is concentrated near the surface; it represents a difference (tracked water movement - average velocity at one point) rather than absolute circulation and is likely to be offset by opposing flow elsewhere in the water column.
Shelf edge topography only affects the longest waves. For example, with the criterion (wavenumber × depth) < 2, only waves of period > 14 s “feel” the bottom at 100 m depth.
## Impacts
These include surface elevations and waves, especially at the coast. Circulation of water, and energy going into turbulence and mixing, all affect water properties and contents.
### Surface elevations and waves
Theory and modelling suggest that sea-level variability on the longest spatial scales transmits from ocean to shelf-sea and coast [24][25]. Tides are a notable example. However, satellite altimetry shows a minimum of sea-level variance at the shelf edge, compared with the shelf sea and open ocean [26], yet coherence along the continental slope. This suggests that shelf seas and open oceans have separate causes of sea-level variations (e.g. weather and eddies respectively) but some inhibition to transmission across the continental slope. Weather over the shelf tends to generate coastal-trapped wave mode 1, which typically has a node near the shelf edge, and perhaps typical eddy scales are too short. This is the subject of continuing research.
Figure 10: Map of rates of change in sea surface height (geocentric sea level) for the period 1993–2012 from satellite altimetry. Colour interval is 2 mm/year, zero is pale blue/white boundary. From Church et al. [27].
Sea level is rising globally and typical rates at the coast are similar to those in the open ocean now inferred from satellite altimetry. However, local sea level change can differ from the global average (figure 10). In the western tropical Pacific, sea level rise rates were up to 10 mm/yr averaged over 1993 to 2012 compared with a global mean of about 3 mm/yr. In contrast, sea level fell during this period in most of the east Pacific from Alaska to Peru. Varying winds, variability such as El Niño, oceanic thermal expansion and melted ice can alter ocean currents and associated sea level differences [27]. Ice re-distribution as water affects the earth’s gravitational field and hence “level”. These factors are all large-scale and are expected to transmit from ocean basins across the shelf to the coast. [Other factors are local or associated with land movement: water extraction, sediment compaction, tectonics, storms, earthquakes, landslides, uplift after the last ice age.]
The largest (and usually longest) surface waves are generated over a large “fetch” (extent of open ocean giving uninterrupted wind forcing). Hence they tend to come from deep ocean to shelf sea where they may have most impact: long waves steepen in shallower water; soft coasts and mobile sediments are vulnerable to stresses from wave motion.
### Flushing / water renewal
These concepts are best applied to semi-enclosed seas with coasts bounding well-defined inflows and outflows. [In open sea areas, flushing and residence times can be arbitrarily shortened merely by decreasing the area in question.] However, these concepts help to assess the effectiveness of ocean-shelf transports. Thus around $42^{\circ}N$ off north-west Spain, average exchange across the 200m depth contour was estimated as $3.1 \; m^2/s$ [28], a large value and sufficient to replace the shelf–sea cross-section $3.32 \; km^2$ in only about 12 days. Estimates of Irish Sea outflow are similarly $2 - 3 \; m^2/s$ [i.e. mostly $(0.7-1.1) \times 10^5 \; m^3/s$ through the North Channel width 35 km [29]]. However, this is only the volume of the Irish Sea in about one year (resupplied mostly by water originally from the Atlantic). Similarly inflows to the North Sea are 1.5-2 Sv mostly from the North [30], corresponding to $2 - 3 \; m^2/s$ on average across Scotland – Shetland – Norway. However, the North Sea is large and this inflow only amounts to the North Sea volume in about one year. Indeed, within the North Sea, the northern area tends to be flushed in a shorter time whereas in the centre the through-flow is much slower.
### Turbulence and Mixing
As elsewhere, important processes for generating turbulence and consequent mixing near the continental shelf edge are: wind forcing and surface waves for surface mixing, internal tides and waves for mixing in the interior, tidal currents and internal waves near the bottom (e.g. Huthnance [3]). The intensity varies greatly according to context. At the Celtic Sea shelf edge, internal waves are very variable [31] but large, especially at (irregular) times near spring tides, and are associated with pycnocline mixing and density reduction near the bottom [32]. An upward flux of nitrate with a spring-neap cycle further demonstrates internal tidal forcing, associated with the passage of internal solitons around spring tides [33].
### Biogeochemistry
Figure 11: Sketch of south-north section through the North Sea. The shallow south is vertically mixed. In the north, respiration is mainly below the summer thermocline (dotted line) and subject to exchange with the North Atlantic. From Thomas et al.[34].
Many studies have discussed enhanced primary production on continental shelves in upwelling regions. These are typically at sub-tropical eastern ocean margins; winds with an equatorward component (at least seasonally) drive surface Ekman transport offshore. These surface waters are replaced by nutrient-rich waters from below, fuelling phytoplankton growth. Prevailing winds give the north-west European continental shelf overall down-welling circulation. Nevertheless, studies suggest a net supply of nitrate from the ocean to the North Sea [35] and to the west of Scotland [36]. Moreover, sinking of ensuing plankton leads to an off-shelf near-bottom carbon flux west of Scotland [36] as suggested more widely from Ireland to the Norwegian Trench on the basis of the down-welling circulation [37] [38].
Seasonally stratified shelf seas may act as sinks of atmospheric $CO_2$ (the shelf-sea carbon “pump”). Primary production takes up atmospheric $CO_2$; sinking, sub-thermocline respiration of organic matter and off-shelf transport in a bottom layer follow before winter mixing gives re-exposure to the atmosphere (figure 11). Nitrate and carbon cycles over Goban Spur, as constructed in OMEX [39], also involve import of nitrate to the shelf from the deeper ocean and export of carbon from the shelf break to deeper on the continental slope. Distinctive mixing by internal tides and solitons at the Celtic Sea shelf edge, with the resulting upward nitrate flux, fuels a sub-surface chlorophyll maximum demonstrating consequent growth of phytoplankton. Compared with the adjacent Celtic (shelf) Sea and northeast Atlantic Ocean, shelf-edge vertical mixing of nitrate is enhanced and correspondingly the largest phytoplankton cells are found in surface waters there [33][40]. The shelf-edge internal tide (globally ubiquitous) is suggested to support shelf-edge fisheries by providing large-celled phytoplankton for first-feeding fish larvae without any need to coincide with a spring bloom; moreover large-cell phytoplankton favour particulate organic carbon export and sequestration in deep water and sediments [40].
Table1: Some ocean – shelf sea interaction studies. Fennel ref. [41].
## Further Reading
Huthnance JM, 1995. Circulation, exchange and water masses at the ocean margin: the role of physical processes at the shelf edge. Progress in Oceanography 35(4), 353-431.
Johnson J, Chapman P (eds.), 2009-2011. Deep ocean exchange with the shelf. Ocean Science, Special Issue 18. http://www.ocean-sci.net/special_issue18.html
Journal of Marine Systems Volume 32, Issues 1–3 pp. EX1-EX2, 1-252 (April 2002): Exchange Processes at the Ocean Margins
Liu, K-K, Atkinson L, Quinones R, Talaue-McManus L (Eds.), 2010. Carbon and Nutrient Fluxes in Continental Margins: A Global Synthesis. Springer-Verlag Berlin Heidelberg, ISBN 978-3-540-92735-8. http://www.springer.com/us/book/9783540927341.
Robinson AR, Brink KH (editors). 1998. The Global Coastal Ocean. The Sea, Volumes 10, 11, 13, 14. John Wiley & Sons, New York. (1998, 1998, 2005, 2006 respectively).
## References
1. Liu KK, Atkinson L, Quiñones RA, Talaue-McManus L, 2010. Biogeochemistry of Continental Margins in a Global Context. Pp3-24 in Carbon and Nutrient Fluxes in Continental Margins: A Global Synthesis (eds Liu KK, Atkinson L, Quiñones RA, Talaue-McManus L), Springer.
2. Pauly D, Christensen V, Guenette Set al. 2002. Towards sustainability in world fisheries. Nature 418(6898), 689-695.
3. Huthnance JM, 1995. Circulation, exchange and water masses at the ocean margin: the role of physical processes at the shelf edge. Progress in Oceanography 35(4), 353-431.
4. Robinson AR, Brink KH, Ducklow HW, Jahnke RA, Rothschild BJ, 2005. Interdisciplinary multiscale coastal dynamical processes and interaction. The Sea 13 (Robinson AR, Brink KH, eds.), 3-35.
5. Joyce TM, 1991. Review of U.S. contributions to warm-core rings. Reviews of Geophysics, S29, 610-616.
6. Neshyba SJ, Mooers CNK, Smith RL, Barber RT, 1989. Poleward flows along eastern ocean boundaries. Coastal and Estuarine Studies 34. Springer-Verlag New York, Inc. ISBN 0-387-97175-O. 374 pp.
7. Huthnance JM, Holt JT, Wakelin SL, 2009. Deep ocean exchange with west-European shelf seas. Ocean Science 5, 621-634, doi:10.5194/os-5-621-2009.
8. Kara, AB, Wallcraft AJ, Metzger EJ, Hurlburt HE, Fairall CW, 2007. Wind stress drag coefficient over the global ocean. Journal of Climate 20, 5856-5864. DOI: http://dx.doi.org/10.1175/2007JCLI1825.1
9. Green MO, McCave IN, 1995. Seabed drag coefficient under tidal currents in the eastern Irish Sea. Journal of Geophysical Research 100 (C8), 16057–16069. DOI: 10.1029/95JC01381
10. Campbell NJ, 1964. The origin of cold high salinity water in Foxe Basin. Journal of Fisheries Research Board of Canada 21, 45–55.
11. Ivanov VV, Shapiro GI, Huthnance JM, Aleynik DL, Golovin PN, 2004. Cascades of dense water around the world ocean. Progress in Oceanography 60, 47-98.
12. Shapiro GI, Huthnance JM, Ivanov VV, 2003. Dense water cascading off the continental shelf. Journal of Geophysical Research 108(C12), 3390. doi:10.1029/2002JC001488.
13. Prandle D, 1984. A modelling study of the mixing of 137Cs in the seas of the European continental shelf. Philosophical Transactions of the Royal Society of London A, 310, 407–436.
14. Small J, Hornby B, Prior M, Scott J, 1998. Internal solitons in the ocean: prediction from SAR. At http://www.whoi.edu/science/AOPE/people/tduda/isww/text/small/jsmall.htm
15. Vlasenko V, Stashchuk N, Inall ME, Hopkins JE, 2014. Tidal energy conversion in a global hot spot: On the 3‐D dynamics of baroclinic tides at the Celtic Sea shelf break. Journal of Geophysical Research: Oceans 119 (6), 3249-3265.
16. Lien R-C, Tang TY, Chang MH, D’Asaro EA, 2005. Energy of nonlinear internal waves in the South China Sea. Geophysical Research Letters 32 (5). DOI: 10.1029/2004GL022012
17. Garret CJR and Munk WH (1979) Internal Waves in the Ocean. Ann. Rev. Fluid Mech. 11, 339-369
18. Huthnance JM, 1981. Waves and currents near the continental shelf edge. Progress in Oceanography 10, 193-226.
19. Thorpe SA, 1992. The generation of internal waves by flow over the rough topography of a continental slope. Proceedings of the Royal Society of London, A439, 115-130.
20. Chapman DC, Haidvogel DB, 1993. Generation of internal Iee waves trapped over a tall isolated seamount. Geophysical and Astrophysical Fluid Dynamics 69, 33-54.
21. Hall RA, Huthnance JM, Williams RG, 2013. Internal wave reflection on shelf slopes with depth-varying stratification. Journal of Physical Oceanography 43, 248-258.
22. Alford MH, Peacock T, MacKinnon JA, Nash JD, Buijsman MC, Centurioni LR, Chao S-Y, Chang M-H, Farmer DM, Fringer OB, Fu K-H, Gallacher PC, Graber HC, Helfrich KR, Jachec SM, Jackson CR, Klymak JM, Ko DS, Jan S, Johnston TMS, Legg S, Lee I-H, Lien R-C, Mercier MJ, Moum JN, Musgrave R, Park J-H, Pickering AI, Pinkel R, Rainville L, Ramp SR, Rudnick DL, Sarkar S, Scotti A, Simmons HL, St Laurent LC, Venayagamoorthy SK, Wang Y-H, Wang J, Yang YJ, Paluszkiewicz T, Tang T-Y, 2015. The formation and fate of internal waves in the South China Sea. Nature 521, 65-69.
23. Kenyon KE, 1969. Stokes drift for random gravity waves. Journal of Geophysical Research 74, 6991-6994.
24. Huthnance JM, 1987. Along-shelf evolution and sea levels across the continental slope. Continental Shelf Research 7, 957-974.
25. Huthnance JM, 2004. Ocean‐to‐shelf signal transmission: A parameter study. Journal of Geophysical Research: Oceans 109, C12029, 11pp. doi:10.1029/2004JC002358.)
26. Hughes CW, Meredith MP, 2006. Coherent sea-level fluctuations along the global continental slope. Philosophical Transactions of the Royal Society of London, A, 364 (1841). 885-901. 10.1098/rsta.2006.1744
27. Church JA, Clark PU, Cazenave A, Gregory JM, Jevrejeva S, Levermann A, Merrifield MA, Milne GA, Nerem RS, Nunn PD, Payne AJ, Pfeffer WT, Stammer D, Unnikrishnan AS, 2013: Sea Level Change. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker TF, Qin D, Plattner G-K, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, Midgley PM (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
28. Huthnance JM, Van Aken HM, White M, Barton ED, Le Cann B, Coelho EF, Fanjul EA, Miller P, Vitorino J, 2002. Ocean margin exchange - water flux estimates. Journal of Marine Systems 32 (1-3), 107-137.
29. Knight PJ, Howarth MJ, 1999. The flow through the north channel of the Irish Sea. Continental Shelf Research 19, 693-716.
30. Otto L, Zimmerman JTF, Furnes G, Mork M, Saetre R, Becker G, 1990. Review of the physical oceanography of the North Sea. Netherlands Journal of Sea Research 26 (2), 161-238.
31. Green JAM, Simpson JH, Legg S, Palmer MR, 2008. Internal waves, baroclinic energy fluxes and mixing at the European shelf edge. Continental Shelf Research 28, 937-950.
32. Palmer MR, Stephenson GR, Inall ME, Balfour C, Düsterhus A, Green JAM, 2015. Turbulence and mixing by internal waves in the Celtic Sea determined from ocean glider microstructure measurements. Journal of Marine Systems 144, 57-69.
33. Sharples J, Tweddle JF, Green JAM, Palmer MR, Kim Y-N, Hickman AE, Holligan PM, Moore CM, Rippeth TP, Simpson JH, Krivtsov V, 2007. Spring-neap modulation of internal tide mixing and vertical nitrate fluxes at a shelf edge in summer. Limnology and Oceanography 52 (5), 1735-1747.
34. Thomas H, Bozec Y, Elkalay K, de Baar HJW, 2004. Enhanced open ocean storage of CO2 from shelf sea pumping. Science 304, 1005-1008.
35. Pätsch J, Kühn W, 2008. Nitrogen and carbon cycling in the North Sea and exchange with the North Atlantic—A model study. Part I. Nitrogen budget and fluxes. Continental Shelf Research 28 (6), 767-787.
36. Proctor R, Chen F, Tett PB, 2003. Carbon and nitrogen fluxes across the Hebridean shelf break, estimated by a 2D coupled physical-microbiological model. Science of the Total Environment 314, 787-800.
37. Holt J, Wakelin S, Huthnance J, 2009. Down-welling circulation of the northwest European continental shelf: A driving mechanism for the continental shelf carbon pump. Geophysical Research Letters 36, L14602.
38. Wakelin SL, Holt JT, Blackford JC, Allen JI, Butenschön M, Artioli Y, 2012. Modeling the carbon fluxes of the northwest European continental shelf: Validation and budgets. Journal of Geophysical Research Oceans 117 (C5), C05020. DOI: 10.1029/2011JC007402.
39. Wollast R, Chou L, 2001. The carbon cycle at the ocean margin in the northern Gulf of Biscay. Deep-Sea Research II, 48 (14-15), 3265-3293.
40. Sharples J, Moore CM, Hickman AE, Holligan PM, Tweddle JF, Palmer MR, Simpson JH, 2009. Internal tidal mixing as a control on continental margin ecosystems. Geophys. Res. Lett., 36, L23603, doi:10.1029/2009GL040683.
41. Fennel K, 2010. The role of continental shelves in nitrogen and carbon cycling: Northwestern North Atlantic case study. Ocean Science 6, 539-548.
The main author of this article is John HuthnancePlease note that others may also have edited the contents of this article. Citation: John Huthnance (2019): Shelf sea exchange with the ocean. Available from http://www.coastalwiki.org/wiki/Shelf_sea_exchange_with_the_ocean [accessed on 21-10-2019] For other articles by this author see Category:Articles by John Huthnance For an overview of contributions by this author see Special:Contributions/John Huthnance
|
2019-10-21 19:53:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5556640028953552, "perplexity": 11481.898945405883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987787444.85/warc/CC-MAIN-20191021194506-20191021222006-00268.warc.gz"}
|
http://nauticalsas.it/qkvp/regex-between-quotes.html
|
# Regex Between Quotes
In this article, we go through a lot of great ways to use Regular Expression, or Regex, Patters, and their. Regular Expression to Extract text bounded by single or double quotes. """ # handle active datetime with same day result = _DATETIMERANGE_SAME_DAY_REGEX. The term. As a conservative activist, she staunchly opposed feminism and abortion, disdained multiculturalism and spoke out against LGBT rights. I urgently require a regular expression for writing names in the text field. This is the one case where the escape sequences work as expected based on the documentation. C++11 has suport for a few regular expression grammars like ECMAScript, awk, grep and a few others, all examples from this tutorial use the ECMAScript syntax. txt)) * Search for available software on a debian package system (Ubuntu) >> aptitude search NAME * Search for a word. But here is the problem. A regular expression (or RE) specifies a set of strings that matches it; the functions in this module let you check if a particular string matches a given regular expression (or if a given regular expression matches a particular string, which comes down to the same thing). and you just want the portions that are actually between the quotes, the quickest and easiest way to get it is through a regular expression match. Usually such patterns are used by string searching algorithms for "find" or "find and replace" operations on strings, or for input validation. Split methods are similar to the String. regular expression in the handler. To use Regex. Call it an ill behaved import-csv command since the csv file specification does allow for newline characters between quotes. A regular expression is a pattern that can match various text strings. Hi Tom, Here are the business requirements: 1. Both can be very similar, but Lua pattern matching is more limited and has a different syntax. Caution The addslashes() function is run on each matched backreference before the substitution takes place. You can also escape them between double quotes. Out-File SampleDataSorted. groups()] start = datetime. First two minor syntactic remarks: * Using the "/" as an operator is deprecated; add "m" in front of it. The pattern should be enclosed in single or double quotes like any other string. For example to match 1+1=2, the correct regex is 1\+1=2. It is a pattern that is matched against the text to be searched. A regular expression (or RE) specifies a set of strings that matches it; the functions in this module let you check if a particular string matches a given regular expression (or if a given regular expression matches a particular string, which comes down to the same thing). regex package that contains. For example, the following code finds a text in double quotes, where the text can contain single quotes:. I'm trying to build a regular expression (in Python) which essentially matches on any word which precedes the asterisk character, but only when the asterisk occurs between < and >. REGEXEXTRACT(text, regular_expression) text - The input text. If the regular expression remains constant, using this can improve performance. This works pretty well but we get an extra underscore character _. There is an easy way to match values within single quotes or double quotes using Regex. Sed can also address a range of lines in the same manner, using a comma to separate the 2 addresses. The regular expression extractor is used to capture the dynamic data- from the response header or body, from the main sample or subsamples, or even from another JMeter variable- and store it into. It prevents the regex from matching characters before or after the phrase. The Oracle REGEXP_REPLACE () function replaces a sequence of characters that matches a regular expression pattern with another string. Single quotes, double quotes, backslashes (\) and NULL chars will be escaped by backslashes in substituted backreferences. after namespace= match till next double quotes (avoiding escaped backward slash altogether). takes zero or more characters up until it matches the character that is: stored in the group named "quote" (i. Python has a built-in package called re, which can be used to work with Regular Expressions. Monday, November 27, 2017 - 2:34:59 PM - mg. C# Regex class offers methods and properties to parse a large text to find patterns of characters. Here are the quick lines to carry out regular tasks using regular expression in Editplus. Repetition Operators. I would like an expression that changes the subject to "Person: Jack joe" and nothing. Posix Regular Expressions This is the declarative approach to regular expressions. Note: precision and var ange[1] remain unchanged for the entire problem. Note: I included the regex '\t?' even though it is a little incorrect; because - if it worked - it would simply match the sequence "a TAB char that may be followed by another char". split /regex/, string splits string into a list of substrings and returns that list. This post is a long-format reply to Jonathan Jordan's recent post. The regex matches everything between a quote character, and the last quote character in the string. \b is a word boundary; So in this case - what this says is "keep 'VA' for me if it's between any 2 word boundaries" This should get you to an answer, and also help your regex exploration :-) Have a good. == MediaWiki 1. and you just want the portions that are actually between the quotes, the quickest and easiest way to get it is through a regular expression match. NET Regex class library directly. Hi all, Can you suggest me how to remove the New Line Character,Carriage Return and Tab Characters from a string? please suggest me how to build a Regex for this characters?. Wikipedia has a table comparing the different regex engines. Strings have a match method to match them against a regular expression and a search method to search for one, returning only the starting position of the match. Text enclosed by @ignore' or by failing @ifset' or @ifclear' conditions is ignored in the sense that it will not contribute to the formatted output. 1 The premier regular expression development tool. IgnoreCase - searches regardless of the letter case and RegExp. Here's an interesting regex problem: I seem to have stumbled upon a puzzle that evidently is not new, but for which no (simple) solution has yet been found. The regex (,[^,]*,) searches for a comma and sequence of characters followed by a comma which results in matching the 2nd column, and replaces this pattern matched with just a comma, ultimately ending in deleting the 2nd column. The backslash can be used to escape regex characters. Not sure of an eaiser way to do this. Strings are placed between double quotes. › [Solved] Remove CR/LF that are in between double quotes. Note that the values for assigning centuries are based on my knowledge of my “data. Strings can be placed either between single quotes ' or double quotes " and they have slightly different behavior. Diciamo che c'è una stringa come questa:. To summarize the main difference between single and double quotation marks: Single quotation marks do not interpret, and double quotation marks do. One of the first things a photographer learns about image formats is that JPEG image compression is “lossy”, meaning that the smaller file produced by greater compression comes at the cost of lower image quality. There are quite a few RegEx tips on this website, they can be good tutorials for your understanding of RegEx. When you quote a character, you ask the shell to leave it alone - and pass it on unchanged to the utility. [David McCreedy and others at IBM] *) Add TPF processing for the socket read to the rfc1413 code. ID,Name,State 5,Stephanie,Arizona 4,Melanie,Oregon 2,Katie,Texas. Usually this pattern is used for "find" or "find and replace" operations on text, or input validation. @Winston Like you said, I am trying to parse an expression with a lot of braces, quotes and other characters. [email protected] They can be also used as a data generator, following the concept of reversed regular expressions, and provide randomized test data for use in test databases. The sample expressions in Table 11-2 are quoted for use within Tcl scripts. Dollar ($) matches the position right after the last character in the string. The items in between will be linked with one another. Introduction to Perl string. You can find me on Twitter (@MrThomasRayner),. version_info < (3,): from cStringIO import StringIO else: from io import StringIO xrange = range from tokenize import generate_tokens a = 'par1=val1,par2=val2,par3="some text1, again some text2, again some text3",par4="some text",par5=val5' def parts. If your regular expression includes the single quote character, then enter two single quotation marks to represent one single quotation mark within the expression. The single or double quotes inside the parentheses are part of the SAS syntax. The Regex Fun extension provides four new parser functions for performing regular expression handling within wiki pages. "Between Shades of Gray" by Ruta Sepetys, published in 2011, is a fictional story based on real-life testimonies of people's experiences during the Stalinist repression of the mid-twentieth. As cute as the "now you have two problems" quote is, it seems that Jamie wasn't the first to come up with the idea. A regular expression does away with the quotes around the terminals, and the spaces between terminals and operators, so that it consists just of terminal characters, parentheses for grouping, and operator characters. So that the text can be inserted into a MySQL database, the apostrophes must be escaped (\') (Maybe some other characters as well. *\) The regex does a sub match on the text between the first set of quotes and returns the sub match displayed above. And then split the rest of the string with comma (,) I don't want to do it checking every single character setting flags for start and end of double quotes. So basically I'm trying to replace newline with a string maybe even outside powershell before bringing it into powershell. What I want is to remove every comma (,) that is enclosed between double quotes " ". To use Regex. Regular Expressions for Data Science (PDF) Download the regex cheat sheet here. A regular expression (also called regex) is a way to work with strings, in a very performant way. As I’m still not a regex mastermind I couldn’t come up with it just like that. Regular Expression to Extract text bounded by single or double quotes. It enables a user to inject a custom tokenizer for processing a CSV line. It is a technique developed in theoretical computer science and formal language theory. If you need to use the matched substring within the same regular expression, you can retrieve it using the backreference \num , where num = 1. Extract between quotes Extract text bounded by single or double quotes Comments. Hi all, Can you suggest me how to remove the New Line Character,Carriage Return and Tab Characters from a string? please suggest me how to build a Regex for this characters?. regular-expression My boss is trying to split a comma-delimited string with Regex. If you want to add the movie to other text to create, you can concatenate the movie title inside double quotes with a formula like this:. With Perl regex enabled, type your expression and hit Next. This reference page for the Mule Expression Language Creates a java. Url Validation Regex | Regular Expression - Taha nginx test match whole word special characters check Extract String Between Two STRINGS Blocking site with unblocked games Match anything enclosed by square brackets. Quote(String S) Method in java. Regular Expressions are provided under java. In vim, you can select by using type “v i” follows by quote, parenthesis, etc. Call it an ill behaved import-csv command since the csv file specification does allow for newline characters between quotes. Thanks for contributing an answer to Emacs Stack Exchange! Please be sure to answer the question. In this case ‘Edit” for “Jack’. Through the constructor function of the RegExp object. Regular expressions (regex) match and parse text. Replace values in Pandas dataframe using regex While working with large sets of data, it often contains text data and in many cases, those texts are not pretty at all. A regular expression (also called regex) is a way to work with strings, in a very performant way. Since you know that the double quotes will always be matched, you might try this instead of your first two replaces: r. Select-string, regex for stuff in quotes, tostring? Welcome › Forums › General PowerShell Q&A › Select-string, regex for stuff in quotes, tostring? This topic has 0 replies, 1 voice, and was last updated 8 years, 4 months ago by Forums Archives. For example, the text: ('Surf's Up!. If someone entered a search phrase that has \E in it, then our regular expression will be malformed, having two \E instances and only one \Q instance. So what is that regex doing? Let’s break it down into it’s parts. Glad it worked. # python 3 c = r "this and that" print (c) # prints a single line Quote Multi-Lines. Provide details and share your research! But avoid … Asking for help, clarification, or responding to other answers. The string is split as many times as possible. If the regular expression remains constant, using this can improve performance. Regular expression patterns in HaRP work over ordinary Haskell lists ([]) of arbitrary type. IF you want to become proficient in unix, regex. Net framework uses a traditional NFA regex engine, to learn more about regular expressions look for the book Mastering Regular Expressions by Jeffrey Friedl “Mere enthusiasm is the all in all. This is remind me of vim. After the last character in the data, if the. In this tutorial, I will use the term “string” to indicate the text that I am applying the regular expression to. Active 2 years, 10 months ago. MultiLine - performs a multi-line. to perform an operation on specified string using a regular expression pattern A sequence of characters that define a search pattern. I might try this tack though if JavaScript has the \w for word letters. Each string is called a 'word' or an. Sounds easy. We use cookies and similar technologies to give you a better experience, improve performance, analyze traffic, and to personalize content. "RE: So my general principle would be this. If a part of a regular expression is enclosed in parentheses, that part of the regular expression is grouped together. Such is the life of a programmer :). Doing so, ensures that the entire expression is interpreted by the SQL function and can improve the readability of your code. help with regex for windows event log 1 Answer. We'll take a look at 4 scenarios where you might want to place an apostrophe or single quote in a string. The output file is only written if it differs from the existing file. Quotes, or "speech marks" play a key role with Regex. (1) Add * between the two specified marks that you will extract text between, and type them into the Text box. In Python, such sequence of characters is included inside single or double quotes. String replaceAll(String regex, String replacement): It replaces all the substrings that fits the given regular expression with the replacement String. Continue matching ANY characters 3. There is no great solution for this and depending on your use case I would recommend writing a quick script that actually uses your first expression and creates a new file with all of the matches (or something like this). Especially if you consider that one of the acronyms of Perl is Practical Extraction and Reporting Language and for that you need to use lots of strings. The pattern and Pattern Properties keywords use regular expressions to express constraints. Using regular. The Java Regex or Regular Expression is an API to define a pattern for searching or manipulating strings. A great example I reckon. 005, using the Perl-compatible regular expressions library written by Philip Hazel). A regular expression is not language specific but they differ slightly for each language. you cannot use double quotes to express String literals, because MEL expressions already appear enclosed in double quotes in. Using workflow variable in RegEx Action In my previous thread I had gotten a partial answer and did not want to drag out the discussion any further. It is widely used to define the constraint on strings such as password and email validation. Features: Can do whole directory, or limit by dir depth, or by list of files. You can do this in C# with the static method Regex. Regular Expression (Delimit with quotes) String (Delimit with quotes) Rule 5. Making statements based on opinion; back them up with references or personal experience. The only difference between the example above and the example below is the type of quotes. before, after, or between characters. ETL scenarios can be messy. Regex split quoted strings with escape quotes. Tip: If your regular expressions don't seem to be working then you probably need to use single quotation marks over the sentence and then use backslashes on every single special character. Here is a quick solution: [code java] String s = "\"hello\", \"palantir\", \"quora\", \"marc\", \"quora\""; String[] elements = s. You write your Perl regular expression as the argument of the PRXPARSE function. Special Characters. Repetition Operators. Most programming languages have a split() function that takes a regular expression and a string. We can use the slashes if we have a string with both double and single quotes and we don't want to escape them. Powershell makes use of regular expressions in several ways. As I’m still not a regex mastermind I couldn’t come up with it just like that. Posix Regular Expressions This is the declarative approach to regular expressions. In its simplest form, when no regular expression type is given, grep interpret search patterns as basic regular expressions. A regular expression is a character sequence that is an abbreviated definition of a set of strings (a regular. I'm going to show you how to do something with regular expressions that's long been thought impossible. The key to the solution is a so called “negative lookahead“. Excluding Matches With Regular Expressions. The RegExp: ([a-e]{2}) will match the following text twice: It's another great day! Click here to load this example into the RegExp validator. What I want to do is just have the subject and not the numbers. I could not parse it with csv, but if your string looks like python code, you can use python's own tokenizer: # python 2 and 3 import sys if sys. The pattern determines which strings belong to the set. Character limits for /G:FILE search strings. Since you usually type regular expressions within shell commands, it is good practice to enclose the regular expression in single quotes (') to stop the shell from expanding it before passing the argument to your search tool. Additionally you can specify the following Boolean flags that affect the search procedure: RegExp. If you have an older version of the Framework, you might want to install Expresso 3. The regex does a sub match on the text between the first set of quotes and returns the sub match displayed above. I am working on a PowerShell script. Regular Expression Example 1. using regex in vb. Single quotes only (use value of capture group #1):. Single quotes, double quotes, backslashes (\) and NULL chars will be escaped by backslashes in substituted backreferences. Google products use RE2 for regular expressions. To quote a string of multiple lines, use triple quotes. One of the common patterns in that space is the quoted-string, which is a fantastic context in which to discuss the. The rest of the string is "", which obviously does not match the first /^(a)/. What I want is to remove every comma (,) that is enclosed between double quotes " ". The regular expression looks for a missing word among all of those listed between brackets and separated by the && double character combination. NET regular expressions. Note: To delete the fields in the middle gets more tougher in sed since every field has to be matched literally. The quotes are dumped to another database. ch] PR#1666 *) For maximum portability, the environment passed to CGIs should only contain variables whose names match the regex /[a-zA-Z][a-zA-Z0-9_]*/. STRING is interpolated the same way as PATTERN in m/PATTERN/. After learning Java regex tutorial, you will be able to test your regular expressions by the Java Regex Tester Tool. The last sentence does not indicate a bug. A regular expression is a special sequence of characters that helps you match or find other strings or sets of strings, using a specialized syntax held in a. 6 Testing Regular Expressions¶ Since JMeter 2. 5k Inspirational Quotes Quotes 19k Truth Quotes 18. When you create a regular expression by calling the RegExp constructor function, the resulting object gets created at run time, rather than when the script is loaded. - It is now possible to define a scope for the search inside News. A regular expression is a method used in programming for pattern matching. I'm interested in either better alternatives that match the same thing or an explanation on why my regex is matching a single quote and a double quote followed by any character when the last character is a backreference (i. The main difference to other regex extensions such as RegexFunctions or RegexParserFunctions, besides richer functionality, is that this extension provides a function #regexquote for encoding user-provided strings for the use within a regular expression, a function #regex. The type and name of the object will be set between the braces. Buck at BrainyQuote. Single and double quotes delimit character constants. %[^"] will match any string characters up to the first double quote. I always enjoy a good regex problem so no worries from me ;). string between quotes + nested quotes SpreadFormatter Vasya regex date substitution Find YouTube Links Replace Path Validate an ip address Last folder with his slash Driss Test Kundennummern Téléphone validateFilename html parse. 5 Other useful Perl string functions. How can this be done?. If you don't know how to use them, try consulting the man pages for ed, egrep, vi, or regex. C# Regex is used to work with regular expressions in C#. The award-winning Expresso editor is equally suitable as a teaching tool for the beginning user of regular expressions or as a full-featured development environment for the experienced programmer or web designer with an extensive knowledge of regular expressions. The is often in very messier form and we need to clean those data before we can do anything meaningful with that text data. * Redirect Delay: The redirect delay is the number of seconds to wait after displaying flash messages like "Issue created successfully", and before the user gets redirected to the next page. It enables a user to inject a custom tokenizer for processing a CSV line. A regular expression is a pattern that is matched against a subject string from left to right. Perl's Special Variables. By Paul Hoffman. Many Unix tools such as egrep, sed, or awk use a pattern matching language that is similar to the one described here. You can refer to the same text as previously matched by a capturing group with backreferences, like \1, \2 etc. There is a Website to test Java Regular expressions. split method returns parts of the string which are generated from the split. Here are some examples using grep: Text version. NET regular expression tester with real-time highlighting and detailed results output. t, which includes "this", "that". RegEx: Text Between Quotes? - posted in Ask for Help: How can I use RegEx to get multiple texts between quotes and then assign a variable?For example, I have the following string: I would like to assign a different variable to the text between the quotes for both id and uc. The replacement string can either be a regular expression that contains references to captured. Extract between quotes Extract text bounded by single or double quotes Comments. (This form is a kind of regular expression. Posix Regular Expressions This is the declarative approach to regular expressions. For each row, replace double quotes with empty string. Here are RegEx which return only the values between quotation marks (as the questioner was asking for): Double quotes only (use value of capture group #1): "(. I would like an expression that changes the subject to "Person: Jack joe" and nothing. Thanks for contributing an answer to Vi and Vim Stack Exchange! Please be sure to answer the question. * Redirect Delay: The redirect delay is the number of seconds to wait after displaying flash messages like "Issue created successfully", and before the user gets redirected to the next page. I have already decided to ditch the 'use-regex-entirely' approach, and working towards writing a manual parser, that takes the help of a little regex now and then. Right now the regular expression gives me the quote, i. Repetition operators repeat the preceding regular expression a specified number of times. Fortunately the grouping and alternation facilities provided by the regex engine are very capable, but when all else fails we can just perform a second match using a separate regular expression - supported by the tool or native language of your choice. Here's the problem. ETL scenarios can be messy. Change notes from older releases. The key to the solution is a so called “negative lookahead“. For a discussion of regular expression syntax and usage, see an online resource such as www. A regular expression (abbreviated regex or regexp and sometimes called a rational expression) is a sequence of characters that forms a search pattern, mainly for use in pattern-matching and "search-and-replace" functions. Also, characters from the. 1 === * The installer now includes a check for a data corruption issue with certain versions of libxml2 2. Allows the regex to match the phrase if it appears at the beginning of a line, with no characters before it. Regular Expression in Java is most similar to Perl. Single and double quotes delimit character constants. In this case, the value between double quotes (") is matched on multiple lines. A regular expression. "Between Shades of Gray" by Ruta Sepetys, published in 2011, is a fictional story based on real-life testimonies of people's experiences during the Stalinist repression of the mid-twentieth. Can have more than 1 regex/replace pairs. The regex language is a powerful shorthand for describing patterns. Clicking on "No Thanks" is forcing me to write a comment. When I parse the above line, "Some words got inserted into a column, and then words after comma" got insert to. Python RegEx is widely used by almost all of the startups and has good industry traction for their applications as well as making Regular Expressions an asset for the modern day progr. A word character is a character from a-z, A-Z, 0-9, including the _ (underscore) character. Please help. The ‘rstrip’ is called on the input just to strip the line of it’s newline, since the next ‘print’ statement is going to add a newline by default. split /regex/, string splits string into a list of substrings and returns that list. There is no great solution for this and depending on your use case I would recommend writing a quick script that actually uses your first expression and creates a new file with all of the matches (or something like this). Java pattern problem: In a Java program, you want to determine whether a String contains a regular expression (regex) pattern, and then you want to extract the group of characters from the string that matches your regex pattern. 5k Hope Quotes 14k. So Could you please let me know whether this is possible. If there’s no g, then regexp. # python 3 d = """this will be printed in 3 lines""" print (d) # output: # this # will be printed # in 3 lines Summary 'Single' quote and "double" quote chars are equivalent. A great example I reckon. Split(Char[]) method, except that Regex. qr/STRING/msixpodualn. In addition, backslash is used to escape the following character inside character constants. See Regexps. A regular expression does away with the quotes around the terminals, and the spaces between terminals and operators, so that it consists just of terminal characters, parentheses for grouping, and operator characters. Here are RegEx which return only the values between quotation marks (as the questioner was asking for): Double quotes only (use value of capture group #1): "(. """ # handle active datetime with same day result = _DATETIMERANGE_SAME_DAY_REGEX. pythex is a quick way to test your Python regular expressions. In this example, we are 1 SUGI 29 Tutorials. This allows maximum flexibility over the format. With the above regular expression pattern, you can search through a text file to find email addresses, or verify if a given string looks like an email address. While at Dataquest we advocate getting used to consulting the Python documentation, sometimes it's nice to have a handy PDF reference, so we've put together this Python regular expressions (regex) cheat sheet to help you out!. (with a help from Steve Kirkendall) The main differences between Perl and Vim are: Perl doesn't require backslashes before most of its operators. We will try to be as explanatory as possible to make you understand the usage and also the points that need to be noted with the usage. Code: "Iam allfine" abcdef abcd "all is not well". Among these string functions are three functions that are related to regular expressions, regexm for matching, regexr for replacing and regexs for subexpressions. Re: Regular expression, replace all commas between double quotes ONLY BluShadow Jun 10, 2015 8:57 AM ( in response to Minoo. Regular Expression. Google products use RE2 for regular expressions. You can find me on Twitter (@MrThomasRayner),. share | improve this question ". csv file that have comma contained within double quotes. (1) Add * between the two specified marks that you will extract text between, and type them into the Text box. before, after, or between characters. But 0000 to 0999 doesnt. Python RegEx: Regular Expressions can be used to search, edit and manipulate text. I have the following text in a file: TYPE_A = "1" AND TYPE_B = "6" AND TYPE_C = "8755asd-". private string ReplaceQuotes( string sentence) { Regex re. To meet this challenge, we often use a pattern parsing language called Regex (which stands for Regular Expressions). These differences between Perl matching rules, and POSIX matching rules, mean that these two regular expression syntaxes differ not only in the features offered, but also in the form that the state machine takes and/or the algorithms used to traverse the state machine. When parsing a text file that contains quoted strings - such as a VB source file - you might want to quickly locate and extract all the quoted strings. Java Regex classes are present in java. Check if things can be done WITHOUT REGEX, if yes skip regexp, if not then use REGEX" I'd probably start with whatever approach is easiest first, then explore other options if performance is insufficient, or just for a check on the other. A regular expression. exec (str) returns the first match exactly as str. Along with 16+ years of hands-on experience he holds a Masters of Science degree and a number of database certifications. Regex maybe the most popular language in the programming world. Supports JavaScript & PHP/PCRE RegEx. If you don't know how to use them, try consulting the man pages for ed, egrep, vi, or regex. Get Hours, Days, Minutes and Seconds difference between two dates [Future and Past] Convert Float32 to. HaRP is implemented as a pre-processor to ordinary Haskell. (the restore or at least ProcessTable still needs to rewrite, but it's out of scope and done in HG) [*] 2017-05-17: (WC-6255): Do not add quotes when address is a distribution list ( fix for DL with @ inside ) [*] 2017-05-17: (WC-6231):Install page updated [*] 2017-05-15: (WC-6231):Install page updated [-] 2017-05-15: (SV-10867):Invalidating. With Perl regex enabled, type your expression and hit Next. This is remind me of vim. \end{quote} Here is the \bibfield{postnote} argument and is the value of the \bibfield{pages} field. to perform an operation on specified string using a regular expression pattern A sequence of characters that define a search pattern. As a more complex example, the regular expression B[an]*s matches any of the strings Bananas, Baaaaas, Bs, and any other string starting with a B, ending with an s, and containing any number of a or n characters in between. The following little Python script does the trick. Regular Expression to Extract text bounded by single or double quotes. Split methods are similar to the String. You've discovered that the double quote symbol " will not work inside a Java print instruction. Using workflow variable in RegEx Action In my previous thread I had gotten a partial answer and did not want to drag out the discussion any further. Sometimes it is easy to forget that these commands are using regex becuase it is so tightly integrated. So that the text can be inserted into a MySQL database, the apostrophes must be escaped (\') (Maybe some other characters as well. You'll need to find an alternate way to tell the compiler to print this symbol, instead of interpreting it. If you test this regex on Put a "string" between double quotes, it matches "string" just fine. Parsing everything between quotation using Learn more about regexp, matlab, string. But 0000 to 0999 doesnt. The Online. Table 12-1 gives a brief description of each regular expression function. Target Text Extractor is an online app designed to find and extract text surrounded or defined by specific character patterns. There is a section of text between single-quotes that I need to extract (without the single quotes) the text (its a file name). A regular expression is not language specific but they differ slightly for each language. ch] PR#1666 *) For maximum portability, the environment passed to CGIs should only contain variables whose names match the regex /[a-zA-Z][a-zA-Z0-9_]*/. Returns an empty string, or the string specified in no_match if the expression does not match. The "$" character is a good example. Url Validation Regex | Regular Expression - Taha nginx test match whole word special characters check Extract String Between Two STRINGS Blocking site with unblocked games Match anything enclosed by square brackets. To use Regex. If either don't match, then anything else is allowed except for a quote or backslash character. Strings can be placed either between single quotes ' or double quotes " and they have slightly different behavior. With Perl regex enabled, type your expression and hit Next. This behavior doesn’t bring. I was wondering if there was a way to simply say replace the matches you already found ?. If "'" is used as the delimiter, no variable interpolation is done. %[^"] will match any string characters up to the first double quote. ReplaceRegExp is a directory based task for replacing the occurrence of a given regular expression with a substitution pattern in a selected file or set of files. So what is that regex doing? Let’s break it down into it’s parts. Keywords: SAS DATA step PRX functions Perl Regular Expressions Created Date. When you have imported the re module, you can. Re: Regular expression, replace all commas between double quotes ONLY BluShadow Jun 10, 2015 8:57 AM ( in response to Minoo. The problem being you need to determine how many quotes you have read and if a comma is inside or outside of the double quoted mode of the value. Python RegEx In this tutorial, you will learn about regular expressions (RegEx), and use Python's re module to work with RegEx (with the help of examples). Let's now apply this knowledge and learn how to build a pattern to match an entire word using these constructs. datetime(syear, smonth, sday, ehour, emin) return OrgTimeRange(True. Character constants. For example, the following code finds a text in double quotes, where the text can contain single quotes:. ) I wonder if I can use a Regular Expression to do the escaping. 005, using the Perl-compatible regular expressions library written by Philip Hazel). Linux provides tool named grep for filter text data or output according to given string or regular expression. Jun 18, 2004 by Dave Cross One of the best ways to make your Perl code look more like … well, like Perl code – and not like C or BASIC or whatever you used before you were introduced to Perl – is to get to know the internal variables that Perl uses to control various aspects of your program’s execution. In Groovy, you can create this instance directly from the literal string with your regular expression using the =~ operator. If you put characters between single quotes ', then almost all the characters, except the single-quote itself ', are interpreted as they are written in the code. Pinal Dave is a SQL Server Performance Tuning Expert and an independent consultant. Regex is a very powerful tool that is available at our disposal & the best thing about using regex is that they can be used in almost every computer language. Balancing parentheses is an impossible regex problem. Wikipedia has a table comparing the different regex engines. if I have a string: abcdefg how do I pull out the string between the and (i. This regular expression ^\w+(\s\w+)*$will only allow a single space between words and no leading or trailing spaces. If you need to use the matched substring within the same regular expression, you can retrieve it using the backreference \num , where num = 1. I've broken it down into 4 steps: Remove ^M. The Online. Double Quotes and Regular Expressions Double quotes around a string are used to specify a regular expression search (compatible with Perl 5. *?)'") ' Loop through Matches. Now what if you want to ignore those commas while splitting? Here's working code that will…. CSV file into database, but is having problem parsing. Regular expressions are used to perform pattern-matching and "search-and-replace" functions on text. SQL Server Regular Expressions for Data Validation and Cleanup; Using Regular Expressions to Find Special Characters with T-SQL. length() - 1). Parses single or double-quoted strings while preserving escaped quote chars ~~~~~ 1. Net RegEx syntax, they can be used directly in current SSMS 2017. First, you should clarify which regex you want to create, to be specific, what should "match" exactly. A regular expression is a method used in programming for pattern matching. String processing is fairly easy in Stata because of the many built-in string functions. He and I are both working a lot in Behat, which relies heavily on regular expressions to map human-like sentences to PHP code. Code reorganization for cleanliness, regex changes for testing, as well as doc and build updates. Instead, regex support is provided by the standard regular expression library. In backreferences, the strings can be converted to lower or upper case using \\L or \\U (e. Table 12-1 gives a brief description of each regular expression function. Regex maybe the most popular language in the programming world. Sounds easy. They are widely used in almost all the programming languages like C#, Python, Java and many…. The regular expression in java defines a pattern for a String. Hope you find this useful! Thank you to Tom Shelton, Fergus Cooney, Steven Swafford, Gjuro Kladaric, and others for your contributions. length() - 1). Double quotes around a string are used to specify a regular expression search (compatible with Perl 5. To find regex matches or to search-and-replace with a regular expression, you need a Matcher instance that binds the pattern to a string. A number of strings, separated by a delimiter,. In vim, you can select by using type "v i" follows by quote, parenthesis, etc. The first character in a string is numbered 0, the second character in a string is numbered 1, and so forth. It is JavaScript based and uses XRegExp library for enhanced features. If you test this regex on Put a "string" between double quotes, it matches "string" just fine. *\) The regex does a sub match on the text between the first set of quotes and returns the sub match displayed above. Java String replace() Method example In the following example we are have a string str and we are demonstrating the use of replace() method using the String str. Note: precision and var ange[1] remain unchanged for the entire problem. That is, if you put something in single quotation marks, Perl assumes that you want the exact characters you place between the marks — except for the slash-single quote (‘) combination and double-slash (\) combination. [quote author="utcenter" date="1361442748"]No argument here - the regex solution is the fastest to implement, but it is also complete magic to a newbie. Lets add the regex tool. I've broken it down into 4 steps: Remove ^M. Example Regular Expressions. Jinja is the default templating language in SLS files. qr/STRING/msixpodualn. Matches, pass it a pattern. Caution The addslashes() function is run on each matched backreference before the substitution takes place. Was the regular expression someone pointing to column 2 only?. Print number of changes for changed files. (2) Click the Add button. They can be also used as a data generator, following the concept of reversed regular expressions, and provide randomized test data for use in test databases. See Regexps. For a discussion of regular expression syntax and usage, see an online resource such as www. For each row, replace double quotes with empty string. * (bug 20239) MediaWiki:Imagemaxsize does not contain anymore a. This means that all metacharacters in the input String are treated as ordinary characters. a single quote or a double quote). A regular expression (also called regex) is a way to work with strings, in a very performant way. [David McCreedy and others at IBM] *) Require the batch (-b) option and default to MD5 on TPF in htpasswd. › Remove Commas between Double Quotes › DOS remove linefeed and carriage return ? › Why does a script that runs in test not run in production? › grep in for loop shell script › Remove latest file. Before I conclude, I'd like to share some regex example I often use. Dim value As String = "('BH','BAHRAIN','Bahrain','BHR','048')" ' Match data between single quotes hesitantly. It’s all based in the java. The above regex expression will match the text string, since we are trying to match a string of any length and any character. At this point I believe it is impossible. Single quotes and double quotes are not special characters in regex (however double quotes must be escaped in string literals). Here are … Continue reading "Textpad Regular Expressions". single-quote characters are used both as field delimiters and as apostrophes. This manual page documents the GNU version of find. If it contains a quote it becomes "string, ""value 1""",string value 2. By Paul Hoffman. 3 Overview of Regular Expression Syntax. Regular Expression Test Page for Java. To begin editing and testing regular expressions, just select Expresso from the start menu! To load the 30 Minute Regex Tutorial, select the tutorial from the start menu. Net using:- 1. When a character is followed by ? in a regular expression it means to match zero or one instance of the character. 2 thoughts on " VSCode tips - How to select everything between brackets or quotes " Vim Lovers October 26, 2019 at 5:06 am. The award-winning Expresso editor is equally suitable as a teaching tool for the beginning user of regular expressions or as a full-featured development environment for the experienced programmer or web designer with an extensive knowledge of regular expressions. The regular expression to match the balanced text uses two new (to Perl 5. Finally, match the closing quote. When a character is followed by ? in a regular expression it means to match zero or one instance of the character. [email protected] When I parse the above line, "Some words got inserted into a column, and then words after comma" got insert to. split method returns parts of the string which are generated from the split. 5k Poetry Quotes 16k Death Quotes 15. Among these string functions are three functions that are related to regular expressions, regexm for matching, regexr for replacing and regexs for subexpressions. The Original Quote:. Ask Question Asked 5 years, 8 months ago. Regex to delimit at spaces, except inside of quotes I am building a program to analyze some javascript, and I need it to. NET Framework 2. A regular expression enclosed in slashes (/') is an awk pattern that matches every input record whose text belongs to that set. Regular expression ("regex" for short) pattern matching is a concise and hopefully efficient way of specifying a piece of text for the purpose of searching for it or manipulating it in some way. Another way of thinking about the pattern is, "Look for a double-quote, but do not include it in the match. This produces the results I expect. The example also shows how to handle CSV records having a comma between double quotes or parentheses. You may learn more about regex constructs. Lets add the regex tool. Matches with quotes Imports System. You can still take a look, but it might be a bit quirky. It allows you to define the character patterns with standard JavaScript regular expressions and offers a set of auxiliary functions to facilitate the text processing. Wow! Very useful. For example, to split a string into words, use. `verilog-minimum-comment-distance' (default 10) Minimum distance (in lines) between begin and end required before a comment will be inserted. Free Interactive Regular Expression (Regex) Testers and Builders. It defines regex as "a. How to get regular text between quotes using regular expressions. Jinja2 ships with many filters. When defining a regex containing an$ anchor, be sure to enclose the regex using single quotes (') instead of double quotes (") or PowerShell will expand the expression as a variable. The goal is this: To make a function that doesn't care about what is in the line. When configuring the regular expressions in JMeter, use the same syntax as Perl5. By formulating a regular expression with a special syntax, you can. shell sed grep regular-expression. The regular expression seemed to work on my example data, but when I tried a larger file with more columns, it did not replace the comma within the quotes. For each row, replace double quotes with empty string. For handling quotes within a character query, you must add two quotes for each one that is desired. NET Framework 2. Both representations can be used interchangeably. One is simple StringSplitTokenizer, which simply splits a String. For example, to split a string into words, use. "RE: So my general principle would be this. This produces the results I expect. jsSteven Wade using VerbalExpressions. This is remind me of vim. Here is a table of some of the special characters and symbols that can be used in a regular expression. Thanks for contributing an answer to Vi and Vim Stack Exchange! Please be sure to answer the question. Make sure to test your regular expression to ensure you get the desired result. substring(1, s. To use Regex. 2 Changing cases of string. He's looking for a comma followed by a blank space, or return a whole string surrounded by single or double quotes while ignoring any commas between the double or single quotes. Regular Expression Tester - play with this and other regex right on the site. We then turn the string variable into a numeric variable using Stata’s function. My query is to. For each row, replace double quotes with empty string. I have written following lines of code but not working for removing single quote. You can find me on Twitter (@MrThomasRayner),. var str="The rain in SPAIN stays mainly in the plain"; var n=str. Regex maybe the most popular language in the programming world. Automatic backup. Double Quotes and Regular Expressions Double quotes around a string are used to specify a regular expression search (compatible with Perl 5. Learn by If you're so inclined, you can also start exploring the differences between Python regex and other forms of regex Stack Overflow post. A RegEx, or Regular Expression, is a sequence of characters that forms a search pattern. At this point I believe it is impossible. Usually such patterns are used by string searching algorithms for "find" or "find and replace" operations on strings, or for input validation. 3 Overview of Regular Expression Syntax. Python RegEx: Regular Expressions can be used to search, edit and manipulate text. If the regular expression remains constant, using this can improve performance. When I parse the above line, "Some words got inserted into a column, and then words after comma" got insert to. Java has only one string style. I tried doing this with regex function mentioned above. A regular expression (or RE) specifies a set of strings that matches it; the functions in this module let you check if a particular string matches a given regular expression (or if a given regular expression matches a particular string, which comes down to the same thing). There’s only one difference between single and double quotes in PowerShell: Inside double quotes, PowerShell looks for the \$ character. I need only the CR/LF's removed that are within the double quotes, the ones at the end of the line must stay (they are outside of the double quotes). It's important to use single quotes around the replacement argument, because otherwise the variables will be interpolated/expanded too soon, by the shell. Returns an empty string, or the string specified in no_match if the expression does not match. Rsyslog uses POSIX ERE (and optionally BRE) expressions. To use Regex. w3schools is a pattern (to be used in a search). Pattern class converts a given regular expression pattern String into a literal pattern String. CSV files frequently need the double quotes. Regex expression starts with the alphabet r followed by the pattern that you want to search. The most basic usage of Jinja in state files is using control structures to wrap conditional or redundant state elements: In this example, the first if block will only be evaluated on minions that aren't. Balancing parentheses is an impossible regex problem. When a character is followed by ? in a regular expression it means to match zero or one instance of the character. Get excited. I am trying to find a way to exclude an entire word from a regular expression search. @[User:63123|John Pope] honestly, I really just used the wikipedia entry to learn it! Whenever I wanted to do any string matching with regex I just kept banging my head against it while referencing the wiki until I could get a match:. 5k Wisdom Quotes 17k Romance Quotes 16. However, Unicode strings and 8-bit strings cannot be mixed: that is, you cannot match a Unicode string with a byte pattern or vice-versa; similarly, when asking for a substitution, the replacement. By TheAutomator, July 26, 2017 in AutoIt General Help and Support. [David McCreedy and others at IBM] *) Require the batch (-b) option and default to MD5 on TPF in htpasswd. In this tutorial, I will use the term “string” to indicate the text that I am applying the regular expression to. & Conventions2 A string, made up of arbitrary characters. escape special characters. "Between Shades of Gray" by Ruta Sepetys, published in 2011, is a fictional story based on real-life testimonies of people's experiences during the Stalinist repression of the mid-twentieth. version_info < (3,): from cStringIO import StringIO else: from io import StringIO xrange = range from tokenize import generate_tokens a = 'par1=val1,par2=val2,par3="some text1, again some text2, again some text3",par4="some text",par5=val5' def parts. 10) regular expression features. Entry boxes: apw_( wrd,30) wrd is a string variable that names a short student answer, w hich is then tested against various regular expression for matches. A regular expression is used to check. before, after, or between characters. By the end of the tutorial, you’ll be familiar with how Python regex works, and be able to use the basic patterns and functions in Python’s regex module, re, for to analyze text strings. Do not leave spaces between multiple business service names, but separate names with commas, and do not use quote marks. RegEx can be used to check if a string contains the specified search pattern. The sample expressions in Table 11-2 are quoted for use within Tcl scripts. Regular expressions are useful in extracting information from data such as log files, spreadsheets, databases e. A regular expression (also called regex) is a way to work with strings, in a very performant way. I've broken it down into 4 steps: Remove ^M. The Oracle REGEXP_REPLACE () function replaces a sequence of characters that matches a regular expression pattern with another string. I have successfully got the script working, but I'm having some trouble with the regular expression I'm using. In regex, anchors are not used to match characters. between quotes, as in "Tarzan123", 2. Using this method would be a more convenient alternative than using \Q & \E as it wraps the given String. 1 === * The installer now includes a check for a data corruption issue with certain versions of libxml2 2. Perl's Special Variables. Here are … Continue reading "Textpad Regular Expressions". Python RegEx: Regular Expressions can be used to search, edit and manipulate text. jsSteven Wade using VerbalExpressions. A recent attempt to replace a string within double quotes "string value" while using sed resulted in much trial and. Java regular expressions are very similar to the Perl programming language and very easy to learn. -this is a regex I have wrote for numbers it allows [+-] minus before the number digits before and digits after the point the question is how to change this to allow "not finished" values so that input of "5. The string is split as many times as possible. NET (and this site) when testing on a URL with a - int he query string. This is remind me of vim. Learn by If you're so inclined, you can also start exploring the differences between Python regex and other forms of regex Stack Overflow post. Pattern: Text: CultureInvariant IgnoreCase Multiline Singleline RightToLeft IgnorePatternWhitespace ExplicitCapture ECMAScript IgnoreCase Multiline Singleline RightToLeft. I have a regular expression that matches URLs but strangely kills. So that the text can be inserted into a MySQL database, the apostrophes must be escaped (\') (Maybe some other characters as well. How to write the regex to extract a field from XML data if the field is not completely XML? 1 Answer. I always enjoy a good regex problem so no worries from me ;). Regular expressions are the default pattern engine in stringr. Windows PowerShell has a "select-string" cmdlet which can be used to quickly scan a file to see if a certain string value exists. Followers 1. This technique ensures that the entire expression is interpreted by the SQL function and improves the readability of your code. NET program that uses Regex. If you don't know how to use them, try consulting the man pages for ed, egrep, vi, or regex. # replace words in a text that match key_strings in a dictionary with the given value_string # Python's regular expression module re is used here # tested with Python24 vegaseat 07oct2005 import re def multiwordReplace(text, wordDic): """ take a text and replace words that match a key in a dictionary with the associated value, return the changed text """ rc = re. Unicode and ISO 10646 make a very clear distinction between the undirected typewriter-style ASCII single quotation mark and apostrophe U+0027 as in. RegularExpressions Module Module1 Sub Main() ' The input string. Such is the life of a programmer :). Using Regex I can correctly retrieve all the text between the tags but I am unable to single out the double quotes for replacement. qr/STRING/msixpodualn. There are a handful of useful patterns that are sufficient to make you a Cucumber power user. Through the constructor function of the RegExp object.
|
2020-09-27 01:18:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2687542736530304, "perplexity": 1783.7203720501466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400249545.55/warc/CC-MAIN-20200926231818-20200927021818-00303.warc.gz"}
|
http://mathhelpforum.com/calculus/27157-trigonometric-substitution-integral.html
|
# Math Help - Trigonometric Substitution Integral
1. ## Trigonometric Substitution Integral
Evaluate the indefinite integral:
⌡ [√(16 x^2 − 144) / x] dx
Sorry for the lame text, but it's all I could gather, I've gotten to the point:
12 ⌠
⌡ [cos (t) / cos^3 (t) sin (t)] dt
by setting x = 3 tan (t); dx = 3 sec^2 (t); and √(x^2 + 9) = 3 sec (t), but I don't know how to solve it from here. Please help!
2. Originally Posted by Del
Evaluate the indefinite integral:
⌡ [√(16 x^2 − 144) / x] dx
You don't need to apply a trig. sub.
Why don't you flip it around by settin' $u^2=16x^2-144$?
As for mathematical symbols, see my signature.
3. Originally Posted by Krizalid
You don't need to apply a trig. sub.
Why don't you flip it around by settin' $u^2=16x^2-144$?
As for mathematical symbols, see my signature.
But the x is on the bottom in his equation so by your substitution you aren't going to eliminate x you are going to make it x^2 on the bottom.
4. And by settin' that substitution $x^2 = \frac{{u^2 + 144}}
{{16}}.$
5. Hello, Del!
Integrate: . $\int\frac{\sqrt{16x^2-144}}{x}\,dx$
Factor out 16: . $4\int\frac{\sqrt{x^2-9}}{x}\,dx$
Let $x \:= \:3\sec\theta\quad\Rightarrow\quad dx\:=\:3\sec\theta\tan\theta\,d\theta$
Substsitute: . $4\int\frac{3\tan\theta}{3\sec\theta}\,(3\sec\theta \tan\theta\,d\theta) \;=\;12\int\tan^2\!\theta\,d\theta$
We have: . $12\int(\sec^2\!\theta - 1)\,d\theta \;=\;12(\tan\theta - \theta) + C$
Back-substitute: . $12\left(\frac{\sqrt{x^2-9}}{3} \,- \,\text{arcsec}\frac{x}{3}\right) + C$
Answer: . $4\left(\sqrt{x^2-9} \,- \,3\!\cdot\!\text{arcsec}\frac{x}{3}\right) + C$
|
2016-05-27 05:30:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9168206453323364, "perplexity": 5215.444065608462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276537.37/warc/CC-MAIN-20160524002116-00142-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://forum.azimuthproject.org/plugin/ViewComment/16687
|
In a the poset \$$P(X) \$$, the complement of the top element \$$X \$$ defines negation,
$$\neg(X): P(X) \rightarrow P(X) \\ x \mapsto X \setminus x.$$
Proof.
Let \$$x=X \$$, then \$$X \setminus X = \varnothing \$$.
Let \$$x=\varnothing \$$, then \$$X \setminus \varnothing = X\$$.
Finally, let \$$x=y,\$$ for some \$$y, \\, \varnothing \subsetneq y \subsetneq X \$$, then there will exist some subset \$$z, \varnothing \subsetneq z \subsetneq X \$$ where \$$z \neq y \$$ and \$$z = X \setminus y \$$.
|
2019-09-18 19:37:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9460678696632385, "perplexity": 11754.346338660907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573331.86/warc/CC-MAIN-20190918193432-20190918215432-00302.warc.gz"}
|
https://labs.tib.eu/arxiv/?author=E.%20Diociaiuti
|
• Electrical Induced Annealing technique for neutron radiation damage on SiPMs(1804.09792)
April 27, 2018 physics.ins-det
The use of Silicon Photo-Multipliers (SiPMs) has become popular in the design of High Energy Physics experimental apparatus with a growing interest for their application in detector area where a significant amount of non-ionising dose is delivered. For these devices, the main effect caused by the neutron flux is a linear increase of the leakage current. In this paper, we present a technique that provides a partial recovery of the neutron damage on SiPMs by means of an Electrical Induced Annealing. Tests were performed on a sample of three SiPM arrays (2 $\times$ 3) of 6 mm$^2$ cells with 50 {\mu}m pixel sizes: two from Hamamatsu and one from SensL. These SiPMs were irradiated up to an integrated neutron flux up to 8 $\times$ 10$^{11}$ n$_{1MeV-eq}$/cm$^2$. Our techniques allowed to reduced the leakage current of a factor ranging between 15-20 depending on the overbias used and the SiPM vendor.
• Design, status and perspective of the Mu2e crystal calorimeter(1801.03159)
April 18, 2018 hep-ex, physics.ins-det
The Mu2e experiment at Fermilab will search for the charged lepton flavor violating process of neutrino-less $\mu \to e$ coherent conversion in the field of an aluminum nucleus. Mu2e will reach a single event sensitivity of about $2.5\cdot 10^{-17}$ that corresponds to four orders of magnitude improvements with respect to the current best limit. The detector system consists of a straw tube tracker and a crystal calorimeter made of undoped CsI coupled with Silicon Photomultipliers. The calorimeter was designed to be operable in a harsh environment where about 10 krad/year will be delivered in the hottest region and work in presence of 1 T magnetic field. The calorimeter role is to perform $\mu$/e separation to suppress cosmic muons mimiking the signal, while providing a high level trigger and a seeding the track search in the tracker. In this paper we present the calorimeter design and the latest R$\&$D results.
March 15, 2018 hep-ex, physics.ins-det
In this paper, we report the measurement of the neutron radiation hardness of custom Silicon Photomultipliers arrays (SiPMs) manufactured by three companies: Hamamatsu (Japan), AdvanSiD (Italy) and SensL (Ireland). These custom SiPMs consist of a 2 x 3 array of 6 x 6 mm^2 monolithic cells with pixel sizes of respectively 50 um (Hamamatsu and SensL) and 30 um (AdvanSid). A sample from each vendor has been exposed to neutrons generated by the Elbe Positron Source facility (Dresden), up to a total fluence of ~ 8.5 x 10^11 n_(1 MeV)/cm^2. Test results show that the dark current increases almost linearly with the neutron fluence. The room temperature annealing was quantified by measuring the dark current two months after the irradiation test. The dependence of the dark current on the device temperature and on the applied bias have been also evaluated.
• Results of the first user program on the Homogenous Thermal Neutron Source HOTNES (ENEA / INFN)(1802.08132)
Feb. 22, 2018 nucl-ex, physics.ins-det
The HOmogeneous Thermal NEutron Source (HOTNES) is a new type of thermal neutron irradiation assembly developed by the ENEA-INFN collaboration. The facility is fully characterized in terms of neutron field and dosimetric quantities, by either computational and experimental methods. This paper reports the results of the first "HOTNES users program", carried out in 2016, and covering a variety of thermal neutron active detectors such as scintillators, solid-state, single crystal diamond and gaseous detectors.
• Quality Assurance on Un-Doped CsI Crystals for the Mu2e Experiment(1802.08247)
Feb. 22, 2018 hep-ex, physics.ins-det
The Mu2e experiment is constructing a calorimeter consisting of 1,348 undoped CsI crystals in two disks. Each crystal has a dimension of 34 x 34 x 200 mm, and is readout by a large area silicon PMT array. A series of technical specifications was defined according to physics requirements. Preproduction CsI crystals were procured from three firms: Amcrys, Saint-Gobain and Shanghai Institute of Ceramics. We report the quality assurance on crystal's scintillation properties and their radiation hardness against ionization dose and neutrons. With a fast decay time of 30 ns and a light output of more than 100 p.e./MeV measured with a bi-alkali PMT, undoped CsI crystals provide a cost-effective solution for the Mu2e experiment.
• The Mu2e undoped CsI crystal calorimeter(1801.02237)
Feb. 22, 2018 physics.ins-det
The Mu2e experiment at Fermilab will search for Charged Lepton Flavor Violating conversion of a muon to an electron in an atomic field. The Mu2e detector is composed of a tracker, an electromagnetic calorimeter and an external system, surrounding the solenoid, to veto cosmic rays. The calorimeter plays an important role to provide: a) excellent particle identification capabilities; b) a fast trigger filter; c) an easier tracker track reconstruction. Two disks, located downstream of the tracker, contain 674 pure CsI crystals each. Each crystal is read out by two arrays of UV-extended SiPMs. The choice of the crystals and SiPMs has been finalized after a thorough test campaign. A first small scale prototype consisting of 51 crystals and 102 SiPM arrays has been exposed to an electron beam at the BTF (Beam Test Facility) in Frascati. Although the readout electronics were not the final, results show that the current design is able to meet the timing and energy resolution required by the Mu2e experiment.
• Expression of Interest for Evolution of the Mu2e Experiment(1802.02599)
Feb. 7, 2018 hep-ex, physics.ins-det
We propose an evolution of the Mu2e experiment, called Mu2e-II, that would leverage advances in detector technology and utilize the increased proton intensity provided by the Fermilab PIP-II upgrade to improve the sensitivity for neutrinoless muon-to-electron conversion by one order of magnitude beyond the Mu2e experiment, providing the deepest probe of charged lepton flavor violation in the foreseeable future. Mu2e-II will use as much of the Mu2e infrastructure as possible, providing, where required, improvements to the Mu2e apparatus to accommodate the increased beam intensity and cope with the accompanying increase in backgrounds.
• The Mu2e crystal calorimeter(1801.10002)
Jan. 30, 2018 hep-ex, physics.ins-det
The Mu2e Experiment at Fermilab will search for coherent, neutrino-less conversion of negative muons into electrons in the field of an Aluminum nucleus, $\mu^- + Al \to e^- +Al$. Data collection start is planned for the end of 2021. The dynamics of such charged lepton flavour violating (CLFV) process is well modelled by a two-body decay, resulting in a mono-energetic electron with an energy slightly below the muon rest mass. If no events are observed in three years of running, Mu2e will set an upper limit on the ratio between the conversion and the capture rates %\convrate of $\leq 6\ \times\ 10^{-17}$ (@ 90$\%$ C.L.). R$_{\mu e} = \frac{\mu^- + A(Z,N) \to e^- +A(Z,N)}{\mu^- + A(Z,N) \to \nu_{\mu} ^- +A(Z-1,N)}$ of $\leq 6\ \times\ 10^{-17}$ (@ 90$\%$ C.L.). This will improve the current limit of four order of magnitudes with respect to the previous best experiment. Mu2e complements and extends the current search for $\mu \to e \gamma$ decay at MEG as well as the direct searches for new physics at the LHC. The observation of such CLFV process could be clear evidence for New Physics beyond the Standard Model. Given its sensitivity, Mu2e will be able to probe New Physics at a scale inaccessible to direct searches at either present or planned high energy colliders. To search for the muon conversion process, a very intense pulsed beam of negative muons ($\sim 10^{10} \mu/$ sec) is stopped on an Aluminum target inside a very long solenoid where the detector is also located. The Mu2e detector is composed of a straw tube tracker and a CsI crystals electromagnetic calorimeter. An overview of the physics motivations for Mu2e, the current status of the experiment and the required performances and design details of the calorimeter are presented.
• The calorimeter of the Mu2e experiment at Fermilab(1701.07975)
Jan. 27, 2017 physics.ins-det
The Mu2e experiment at Fermilab looks for Charged Lepton Flavor Violation (CLFV) improving by 4 orders of magnitude the current experimental sensitivity for the muon to electron conversion in a muonic atom. A positive signal could not be explained in the framework of the current Standard Model of particle interactions and therefore would be a clear indication of new physics. In 3 years of data taking, Mu2e is expected to observe less than one background event mimicking the electron coming from muon conversion. Achieving such a level of background suppression requires a deep knowledge of the experimental apparatus: a straw tube tracker, measuring the electron momentum and time, a cosmic ray veto system rejecting most of cosmic ray background and a pure CsI crystal calorimeter, that will measure time of flight, energy and impact position of the converted electron. The calorimeter has to operate in a harsh radiation environment, in a 10-4 Torr vacuum and inside a 1 T magnetic field. The results of the first qualification tests of the calorimeter components are reported together with the energy and time performances expected from the simulation and measured in beam tests of a small scale prototype.
• Irradiation study of UV Silicon Photomultipliers for the Mu2e Calorimeter(1701.06464)
Jan. 23, 2017 physics.ins-det
The Mu2e calorimeter is composed of 1400 un-doped CsI crystals, coupled to large area UV extended Silicon Photomultipliers (SiPMs), arranged in two annular disks. This calorimeter has to provide precise information on energy, timing and position resolutions. It should also be fast enough to handle the high rate background and it must operate and survive in the high radiation environment. Simulation studies estimated that, in the highest irradiated regions, each photo-sensor will absorb a dose of 20 krad and will be exposed to a neutron fluency of 5.5x10^11 n_(1MeV)/cm^2 in three years of running, with a safety factor of 3 included. At the end of 2015, we have concluded an irradiation campaign at the Frascati Neutron Generator (FNG, Frascati, Italy) measuring the response of two different 16 array models from Hamamatsu, which differ for the protection windows and a SiPM from FBK. In 2016, we have carried out two additional irradiation campaigns with neutrons and photons at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR, Dresden, Germany) and at the Calliope gamma irradiation facility at ENEA-Casaccia, respectively. A negligible increment of the leakage current and no gain change have been observed with the dose irradiation. On the other hand, at the end of the neutron irradiation, the gain does not show large changes whilst the leakage current increases by around a factor of 2000. In these conditions, the too high leakage current makes problematic to bias the SiPMs, thus requiring to cool them down to a running temperature of ~0 {\deg}C.
|
2020-09-30 14:35:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6032031774520874, "perplexity": 2277.8376106633455}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402127075.68/warc/CC-MAIN-20200930141310-20200930171310-00248.warc.gz"}
|
https://jemrare.com/question-39-a-man-bought-a-flat-for-rs-820000-he-borrowed-55-of-money-from-bank-how-much-money-did-he-borrow-from-the-bank/
|
# Question 1 & 2
Question 1: For Rs 820000 a man purchased a flat. He borrowed from a banl 55% of the money. How much has the bank given as loan ?
FPSC Custom Inspector 2016
acca past papers f5,acca past papers f9,acca past papers f8,acca past papers p7,acca past papers f7
Solution:
The statement of question means that 55% money out of Rs. 820000/- was taken from bank !
let ‘x’ money is 55 % of Rs 820000
i-e
Rs
So he took Rs 451000 from the bank
Question 2: Find the value of ‘y’ if 30y=60
FPSC Custom Inspector 2016
Solution:
We are give;
So, value of y is 2
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
•
All original content on these pages is fingerprinted and certified by Digiprove
Insert math as
$${}$$
|
2020-09-22 13:08:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27188047766685486, "perplexity": 12332.035123200254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206133.46/warc/CC-MAIN-20200922125920-20200922155920-00677.warc.gz"}
|
https://rpg.stackexchange.com/questions/157104/what-kind-of-tools-would-be-used-to-carve-bone/157144
|
# What kind of tools would be used to carve bone?
I'm designing a Wood Elf Outlander Warlock (in D&D 5E) who only has one hand. This makes my arm nearly useless. As an Outlander, I am proficient with a musical instrument, and have skill proficiency in Survival.
I have designed the character as a self-sufficient hunter, who uses everything from a kill: hides for clothes, meat for food, and bones as a carving medium.
What I would like to do is trade the musical instrument proficiency for a proficiency with a type of artisan's tools that would be used in carving bones into practical things I could use, or decorative items I could sell.
What kind of tools would be used for bone-carving?
• The [tool-recommendation] tag is for questions asking for recommendations on out-of-game RPG tools (such recommendations are no longer allowed here). As such, I've removed the tag. (The [tools] tag is similarly about such out-of-universe RPG tools, and would also be inappropriate.) – V2Blast Oct 2 '19 at 6:20
• Do you ask about any existing game mechanics regarding bone carving specifically? Or do you ask, what tools are being used for bone-carving in real world? – enkryptor Oct 2 '19 at 10:29
• What kind of "things I can use" – John Oct 2 '19 at 14:06
• @enkryptor: Here I am asking about existing game mechanics. If you choose to answer that using information from the real world, I will certainly welcome such. – VarisBersk Oct 2 '19 at 16:50
• @John: Mostly I'm referring to decorative. It did occur to me that I might be able to make things like daggers (maybe bone-silverware), or other simple tools that might actually be useful. However that portion of background refers primarily to decorative items. – VarisBersk Oct 2 '19 at 16:53
## "Bone-carving artisan tools" is fine
The Player's Handbook, page 154 "Tools" doesn't have the exhaustive list. It describes "examples of the most common types of tools" instead:
Artisan's Tools. These special tools include the items needed to pursue a craft or trade. The table shows examples of the most common types of tools, each providing items related to a single craft. Proficiency with a set of artisan's tools lets you add your proficiency bonus to any ability checks you make using the tools in your craft. Each type of artisan's tools requires a separate proficiency.
Since there are no special rules for any of these types, sticking to this particular list doesn't make much sense — you are free to choose any type you want.
Rule-wise, "artisan tools" is a single concept. It wouldn't be really "homebrewing", because there are no new rules being introduced — you just say "I am proficient in this particular type of craft" and use the same mechanics for crafting as for other artisan tools types:
### Crafting
You can craft nonmagical objects, including adventuring equipment and works of art. You must be proficient with tools related to the object you are trying to create (typically artisan's tools).
• Worth to emphasize specifically table shows examples part of the rules. – Mołot Oct 2 '19 at 14:33
• If you want a technical term, I suppose the closest real-world example would be something like scrimshaw. – anaximander Oct 2 '19 at 15:52
• @anaximander Scrimshaw is specifically decorative carving on the surface of the basically unaltered bone/tooth. It doesn't really cover utilitarian objects such as buttons or knife handles. There isn't really a term (beyond "bone carving") because we don't usually make utilitarian things out of bone anymore – Martin Bonner supports Monica Oct 3 '19 at 8:41
• @MartinBonner was saying about decorative things by the way. AFAIK we still make decorative things from bones – enkryptor Oct 3 '19 at 8:45
By raw there is nothing, but tools are kept vague for a reason.
There is only one mention of shaping bone at all and it is an ability for lizardfolk.
Cunning Artisan. As part of a short rest, you can harvest bone and hide from a slain beast, construct, dragon, monstrosity, or plant creature of size Small or larger to create one of the following items: a shield, a club, a javelin, or 1d4 darts or blowgun needles. To use this trait, you need a blade, such as a dagger, or appropriate artisan's tools, such as leatherworker's tools.
As a DM I would look at it one of two ways.
1. Woodcarving tools, in the real world bone carving and woodcarving tools are identical so woodcarving tools would be my first choice.
2. Include it in a custom "hunter's tools", Harvesting meat, bone, and leather is not actually covered by any tool description, even the more in depth leatherworking description in xanthar makes no mention of harvesting hides. So I would just group it all together into a single custom tool set, since it is obviously a skill set that has to exist. As per cunning artisan ability I would just say the kit is a set of knives in a leather roll.
Of course since you are not a lizardfolk I would require proficiency is said tools.
• +1 for pointing out they're largely the same as woodworking tools. I'm bretty crappy at both wood-carving and bone-carving, but I've done enough of each to realize how similar the materials handle. I'm sure a better carver might notice differences, but that's like saying Tom Brady can distinguish between types of leather with his fingertips. Probably can, but just about any football'll do for this backyard hack. – nitsua60 Oct 2 '19 at 19:54
• @nitsua60 you can get nearly as much variation with different types of wood, I made a handle out of lignum vitae once, I had to use some of my metal cutting drill bits. – John Oct 3 '19 at 2:47
No existing tool exactly fits what you want. The closest I can see would be woodcarver's tools with leatherworker's tools being a distant second. I'd ask you DM about homebrewing up some custom tools for bone carving. In my experience, tools don't generally have a huge impact on the game.
@enkryptor: Here I am asking about existing game mechanics. If you choose to answer that using information from the real world, I will certainly welcome such. - VarisBersk
Simple abrasive stone has been used to carve bone for over a million years, so a set of abrasive stones in a pouch would count as a "simple set". You can see examples of modern abrasive stones and associated tools here, here, or here.
Assuming your game is set in the typical 5E time/technological period as most are (e.g. some basic steam/industrial tech, tops), it also wouldn't be out of the question to include a basic hacksaw, a serrated knife, and file (each of which have been around for several thousand years in real life) each made of iron/steel. You could include some fancier metal if you want to provide increased cost/rarity in exchange for higher 'success' rolls.
For example, you could have a table using this suggestion:
Carving Tool Quality | Cost | Bonus to D20 roll:
----------------------------------------------------
Stone | 3cp | +0
Iron | 2sp | +1
Steel | 5gp | +2
Mithril | 100gp | +3
|
2021-07-25 15:57:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29427415132522583, "perplexity": 2671.2825135747207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151699.95/warc/CC-MAIN-20210725143345-20210725173345-00062.warc.gz"}
|
http://math.stackexchange.com/questions/325504/imo-1987-function
|
# IMO 1987 - function
Show that there is no function $f: \mathbb{N} \to \mathbb{N}$ such that $f(f(n))=n+1987, \ \forall n \in \mathbb{N}$.
-
Here is an alternative approach. It is obvious that such an $f$ must be an injection. Now, look at sets $\mathbb{N}$, $A = f(\mathbb{N})$ and $B = f(f(\mathbb{N})) = \{n + 1987 \ | \ n \in \mathbb{N}\}$.
It is easy to see that $B \subset A \subset \mathbb{N}$, and also from the injectivity of $f$ that $f$ induces a bijection between the disjoint sets $\mathbb{N} \setminus A$ and $A \setminus B$. Therefore, $\mathbb{N} \setminus B = (\mathbb{N} \setminus A) \cup (A \setminus B)$ must contain an even number of elements. But $|\mathbb{N} \setminus B| = 1987$, which is a contradiction, and we are done.
-
Very interesting solution. should say $f(f(n)) = n + 1987$ at the second last line. – user58512 Mar 9 '13 at 15:33
@user58512 Thanks for the correction! – Dan Shved Mar 9 '13 at 15:35
+1 Great answer, mush easier to follow than the other ones. Only I would have said $B=f(f(\Bbb N))=\{\,i+1987\mid i\in\Bbb N\,\}$ right away, which avoids making a case distinction (and also like your answer avoids any commitment to where the natural numbers start exactly ;-). – Marc van Leeuwen Mar 9 '13 at 15:40
@MarcvanLeeuwen I've updated the answer. Is this what you had in mind? If not, feel free to improve it! – Dan Shved Mar 9 '13 at 15:53
Here is a generalisation.
Proposition. For positive integers $k,l$, the functional equation $$f^k(n)=n+l\qquad\text{for all n}$$ has no solutions $f$ in the functions $\Bbb Z\to\Bbb Z$, or in the functions $\Bbb N\to\Bbb N$, unless $k$ divides $l$.
In the case $k\mid l$, one does of course have solutions, for instance $n\mapsto n+l/k$.
Proof. Assume that $f$ satisfies the functional equation.
• For all $n$ one has $f(n+l)=f^{k+1}(n)=f(n)+l$.
• Therefore $f(n+l)-(n+l)=f(n)-n$, and $f(n)-n$ is constant when $n$ varies over any congruence class modulo $l$.
• The image under $f$ of such a congruence class is therefore another congruence class, and $f$ gives rise to a function $\bar f:\Bbb Z/l\Bbb Z\to\Bbb Z/l\Bbb Z$.
• Since $(\bar f)^k(\bar n)=\bar n$ for all $\bar n\in\Bbb Z/l\Bbb Z$, this function $\bar f$ is a bijection.
• Put $S=\sum_{i=0}^{l-1}(f(i)-i)$; then $l$ divides $S$ since $\sum_{i=0}^{l-1}f(i)\equiv\binom l2=\sum_{i=0}^{l-1}i\pmod l$, so put $s=S/l$.
• From the functional equation $\sum_{i=0}^{l-1}\bigl(f^k(i)-i\bigr)=\sum_{i=0}^{l-1}l=l^2$.
• By a telescopic sum, $f^k(i)-i=\sum_{j=0}^{k-1}\bigl(f^{j+1}(i)-f^j(i)\bigr)$.
• Since each $(\bar f)^j$ is a bijection, this implies $$\sum_{i=0}^{l-1}\bigl(f^k(i)-i\bigr) =\sum_{j=0}^{k-1}\sum_{i=0}^{l-1}(f^{j+1}(i)-f^j(i)\bigr) =\sum_{j=0}^{k-1}S=kS=kls.$$
• Therefore $l^2=kls$, so $l=ks$ and $k$ divides $l$.
-
Just a minor variation of the answer by user58512, for fun; like that answer it works even if we replace $\Bbb N$ by $\Bbb Z$. The equation $f(n+1987)=f(n)+1987$ in that answer means that $$f(n+1987)-(n+1987)=f(n)-n,$$ so it shows that $f(n)-n$ only depends on the class of $n$ modulo $1987$. Also the map $\bar f$ that $f$ induces on $\Bbb Z/1987\Bbb Z$ is surjective (the class of $n$ is in the image of $\bar f^2$) hence bijective: $f(n)$ runs over all congruence classes as $n$ runs over all congruence classes modulo $1987$.
Now let $S$ be the sum of $f(n)-n$ over all those congruence classes, obviously an integer. But computing the sum of $1987=f(f(n))-n=\bigl(f(f(n))-f(n)\bigl)+\bigr(f(n)-n\bigr)$ over all those congruence classes, one gets on one hand the odd number $1987^2$, and on the other hand the even number $2S$; contradiction.
-
Hint $\$mod $\rm\,p\!:\ f(f(n)) = n+p\equiv n\:$ is an involution, hence $\rm\:p\:$ odd $\Rightarrow$ that it has a fixed point $\rm\:f(n)\equiv n,\:$ so $\rm\:f(n) = n\!+\!k p,\ k\in\Bbb Z.\:$ Now the orbit of $\rm\:f\:$ on $\rm\,n\,$ yields a contradiction
$$\rm\begin{array}{rlcl} &\rm n &\rm \to &\rm \color{#C00}{n+kp} \\ \to &\rm \color{#0A0}{n\!+\!p} &\rm \to &\rm n\!+\!(k\!+\!1)p \\ \to &\rm n\!+\!2p &\rm \to &\rm n\!+\!(k\!+\!2)p\\ \to &\rm n\!+\!3p &\rm \to &\rm n\!+\!(k\!+\!3)p\\ &\rm\ \cdots &\rm &\rm \quad\cdots \\ \to &\rm \color{#C00}{n\!+\!kp} &\rm \to &\rm n\!+\!(k\!+\!k)p = \color{#0A0}{n\!+\!p}\ \Rightarrow\ 2k=1\ \Rightarrow\Leftarrow\: k\in\Bbb Z\\ \end{array}$$
Remark $\$ This was a hint I posted many years ago in another math forum. You can find many further posts here exploiting parity and involutions by searching on involution and Wilson (group theoretical form of Wilson's theorem).
-
We prove that if $f(f(n)) = n + k$ for all $n$, where $k$ is a fixed positive integer, then $k$ must be even. If $k = 2h$, then we may take $f(n) = n + h$.
Suppose,
$f(m) = n$ with $m= n (mod$ k $)$.
Then by an easy induction on $r$ we find $f(m + kr) = n + kr, f(n + kr) = m + k(r+1)$.
Suppose $m < n$,
$n = m + ks$ for some $s > 0$ $\implies f(n) = f(m + ks) = n + ks$.
But $f(n) = m + k, so m = n + k(s - 1) >= n$. Contradiction.
So we must have $m\ge n$, so $m = n + ks$ for some $s \ge 0$.
But now,
$f(m + k) = f(n + k(s+1)) = m + k(s + 2)$.
But $f(m + k) = n + k$, so $n = m + k(s + 1) > n$. Contradiction again.
So if $f(m) = n$, then $m$ and $n$ have different residues $(mod$ k $)$.
Suppose they have $r_1$ and $r_2$ respectively.
Then the same induction shows that:
All sufficiently large $s= r_1 (mod k)$ have $f(s)= r_2(mod k)$,
and that all sufficiently large $s = r_2 (mod k)$ have $f(s) = r_1 (mod k)$.
Hence if $m$ has a different residue $r(mod k)$, then $f(m) cannot have residue$r_1$or$r_2$. For if f(m) had residue$r_1$, then the same argument would show that all sufficiently large numbers with residue$r_1$had$f(m) = r (mod k)$. Thus the residues form pairs, so that if a number is congruent to a particular residue, then f of the number is congruent to the pair of the residue. But this is impossible for$k\$ odd.
-
|
2014-08-23 15:25:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.979002058506012, "perplexity": 197.041634014687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826259.53/warc/CC-MAIN-20140820021346-00098-ip-10-180-136-8.ec2.internal.warc.gz"}
|
http://holyoatmeal.com/shar-jackson-skte/discriminant-function-analysis-sample-size
|
An alternative view of linear discriminant analysis is that it projects the data into a space of (number of categories – 1) dimensions. Discriminant Analysis Discriminant function analysis is used to determine which continuous variables discriminate between two or more naturally occurring groups. Discriminant Analysis Model The discriminant analysis model involves linear combinations of the following form: D = b0 + b1X1 + b2X2 + b3X3 + . 4. The sample size of the smallest group needs to exceed the number of predictor variables. In this post, we will use the discriminant functions found in the first post to classify the observations. It can be used to know whether heavy, medium and light users of soft drinks are different in terms of their consumption of frozen foods. A total of 32 400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. If discriminant function analysis is effective for a set of data, the classification table of correct and incorrect estimates will yield a high percentage correct. Sample size: Unequal sample sizes are acceptable. Discriminant function analysis is computationally very similar to MANOVA, and all assumptions for MANOVA apply. Power and Sample Size Tree level 1. The sample size of the smallest group needs to exceed the number of predictor variables. Discriminant function analysis is a statistical analysis to predict a categorical dependent variable (called a grouping variable) ... Where sample size is large, even small differences in covariance matrices may be found significant by Box's M, when in fact no substantial problem of violation of assumptions exists. Cross validation is the process of testing a model on more than one sample. Sample size: Unequal sample sizes are acceptable. For example, an educational researcher may want to investigate which variables discriminate between high school graduates who decide (1) to go to college, (2) to attend a trade or professional school, or (3) to seek no further training or education. . I have 9 variables (measurements), 60 patients and my outcome is good surgery, bad surgery. Discriminant function analysis is used to determine which variables discriminate between two or more naturally occurring groups. The dependent variable (group membership) can obviously be nominal. Overview . 11.7 Classification Statistics 159 . The model is composed of a discriminant function (or, for more than two groups, a set of discriminant functions) based on linear combinations of the predictor variables that provide the best discrimination between the groups. Main Discriminant Function Analysis. Node 22 of 0. Linear discriminant analysis is used when the variance-covariance matrix does not depend on the population. As mentioned earlier, discriminant function analysis is computationally very similar to MANOVA and regression analysis, and all assumptions for MANOVA and regression analysis apply: Sample size: it is a general rule, that the larger is the sample size, the more significant is the model. Sample size decreases as the probability of correctly sexing the birds with DFA increases. 11.1 Example of MANOVA 142. 1. Introduction Introduction There are two prototypical situations in multivariate analysis that are, in a sense, di erent sides of the same coin. Cross validation in discriminant function analysis Author: Dr Simon Moss. 2. The purpose of discriminant analysis can be to find one or more of the following: a mathematical rule, or discriminant function, for guessing to which class an observation belongs, based on knowledge of the quantitative variables only . File: PDF, 1.46 MB. A factorial design was used for the factors of multivariate dimensionality, dispersion structure, configuration of group means, and sample size. Discriminant analysis builds a predictive model for group membership. LOGISTIC REGRESSION (LR): While logistic regression is very similar to discriminant function analysis, the primary question addressed by LR is “How likely is the case to belong to each group (DV)”. Preview. The combination of these three variables gave the best rate of discrimination possible taking into account sample size and type of variable measured. Real Statistics Data Analysis Tool: The Real Statistics Resource Pack provides the Discriminant Analysis data analysis tool which automates the steps described above. Lachenbruch, PA On expected probabilities of misclassification in discriminant analysis, necessary sample size, and a relation with the multiple correlation coefficient Biometrics 1968 24 823 834 Google Scholar | Crossref | ISI Classification with linear discriminant analysis is a common approach to predicting class membership of observations. In contrast, the primary question addressed by DFA is “Which group (DV) is the case most likely to belong to”. Discriminant Analysis For that purpose, the researcher could collect data on … A linear model gave better results than a binomial model. A previous post explored the descriptive aspect of linear discriminant analysis with data collected on two groups of beetles. 11.4 Discriminant Function Analysis 148. Please read our short guide how to send a book to Kindle. Linear discriminant function analysis (i.e., discriminant analysis) performs a multivariate test of differences between groups. The canonical structure matrix reveals the correlations between each variables in the model and the discriminant functions. of correctly sexing Dunlins from western Washington using discriminant function analysis. An Alternate Approach: Canonical Discriminant Functions Tests of Signi cance 5 Canonical Dimensions in Discriminant Analysis 6 Statistical Variable Selection in Discriminant Analysis James H. Steiger (Vanderbilt University) 2 / 54. Sample-size analysis indicated that a satisfactory discriminant function for Black Terns could be generated from a sample of only 10% of the population. In addition, discriminant analysis is used to determine the minimum number of dimensions needed to describe these differences. Discriminant Function Analysis G. David Garson. To run a Discriminant Function Analysis predictor variables must be either interval or ratio scale data. These functions correctly identified 95% of the sample. Pages: 52. The table in Figure 1 summarizes the minimum sample size and value of R 2 that is necessary for a significant fit for the regression model (with a power of at least 0.80) based on the given number of independent variables and value of α.. While this aspect of dimension reduction has some similarity to Principal Components Analysis (PCA), there is a difference. Logistic regression is used when predictor variables are not interval or ratio but rather nominal or ordinal. Linear Fisher Discriminant Analysis In the following lines, we will present the Fisher Discriminant analysis (FDA) from both a qualitative and quantitative point of view. Discriminant function analysis is computationally very similar to MANOVA, and all assumptions for MANOVA apply. 11.3 Box’s M Test 147. As a “rule of thumb”, the smallest sample size should be at least 20 for a few (4 or 5) predictors. Discriminant function analysis includes the development of discriminant functions for each sample and deriving a cutoff score. 11.2 Effect Sizes 146. Send-to-Kindle or Email . Also, is my sample size too small? The discriminant function was: D = − 24.72 + 0.14 (wing) + 0.01 (tail) + 0.16 (tarsus), Eq 1. Sample size was estimated using both power analysis and consideration of recom-mended procedures for discriminant function analysis. With the help of Discriminant analysis, the researcher will be able to examine … Save for later. This technique is often undertaken to assess the reliability and generalisability of the findings. The first two–one for sex and one for race–are statistically and biologically significant and form the basis of our analysis. In this example that space has 3 dimensions (4 vehicle categories minus one). Squares represent data from Set I (n = 200), circles represent data from Set II (n = 78). In this case, our decision rule is based on the Linear Score Function, a function of the population means for each of our g populations, $$\boldsymbol{\mu}_{i}$$, as well as the pooled variance-covariance matrix. Discriminant function analysis (DFA) ... Of course, the normal distribution is also a model, and in fact is based on an infinite sample size, and small deviations from multivariate normality do not affect LDFA accuracy very much (Huberty, 1994). variable loadings in linear discriminant function analysis. A stepwise procedure produced three optimal discriminant functions using 15 of our 32 measurements. However, given the same sample size, if the assumptions of multivariate normality of the independent variables within each group of the dependant variable are met, and each category has the same variance and covariance for the predictors, the discriminant analysis might provide more accurate classification and hypothesis testing (Grimm and Yarnold, p.241). A distinction is sometimes made between descriptive discriminant analysis and predictive discriminant analysis. Figure 1 – Minimum sample size needed for regression model 11.5 Equality of Covariance Matrices Assumption 152. Does anybody have good documentation for discriminant analysis? Discriminant function analysis was carried out on the sensor array response obtained for the three commercial coffees (30 samples of coffee (a), 30 samples of coffee (b) and 30 samples of coffee (c)) and the set of roasted coffees (7 samples of coffee at each roasting time, (d)-(i)). Canonical Structure Matix . Publisher: Statistical Associates Publishing. The purpose of canonical discriminant analysis is to find out the best coefficient estimation to maximize the difference in mean discriminant score between groups. 11.6 MANOVA and Discriminant Analysis on Three Populations 153. Please login to your account first; Need help? Discriminant function analysis, also known as discriminant analysis or simply DA, is used to classify cases into the values of a categorical dependent, usually a dichotomy. For example, a researcher may want to investigate which variables discriminate between fruits eaten by (1) primates, (2) birds, or (3) squirrels. Language: english. On the other hand, in the case of multiple discriminant analysis, more than one discriminant function can be computed. 11 Multivariate Analysis of Variance (MANOVA) and Discriminant Analysis 141. Year: 2012. The main objective of using Discriminant analysis is the developing of different Discriminant functions which are just nothing but some linear combinations of the independent variables and something which can be used to completely discriminate between these categories of dependent variables in the best way. The ratio of number of data to the number of variables is also important. There are many examples that can explain when discriminant analysis fits. The predictor variables must be normally distributed. Dimensionality, dispersion structure, configuration of group means, and all assumptions for MANOVA apply Principal Components (! Into account sample discriminant function analysis sample size of the smallest group needs to exceed the number of predictor variables must either. 78 ) have 9 variables ( measurements ), there is a difference of dimension has! Some similarity to Principal Components analysis ( PCA ), there is a.. Two groups of beetles the findings functions for each sample and deriving a cutoff score )! The findings a factorial design was used for the factors of multivariate dimensionality, dispersion structure, of. Needs to exceed the number of predictor variables analysis discriminant function analysis variables! For Black Terns could be generated from a sample of only 10 % the! Probability of correctly sexing the birds with DFA increases real Statistics data analysis Tool the... Simulated populations with appropriate underlying statistical distributions is sometimes made between descriptive discriminant analysis, more one... Be either interval or ratio but rather nominal or ordinal generated from discriminant function analysis sample size sample of 10! For race–are statistically and biologically significant and form the basis of our analysis 60 patients and my outcome is surgery. Class membership of observations my outcome is good surgery, bad surgery the factors of dimensionality... Short discriminant function analysis sample size how to send a book to Kindle the difference in mean score. Introduction there are two prototypical situations in multivariate analysis of Variance ( MANOVA ) and discriminant analysis used! In this post, we will use the discriminant functions found in case... Two–One for sex and one for race–are statistically and biologically significant and form the basis of our 32.. Needed to describe these differences di erent sides of the smallest group needs discriminant function analysis sample size exceed the number of needed. = 200 ), circles represent data from simulated populations with appropriate underlying statistical distributions which variables... A predictive model for group membership and all assumptions for MANOVA apply sense, di erent of!, configuration of group means, and all assumptions for MANOVA apply to the number predictor!, based on data from simulated populations with appropriate underlying statistical distributions first to... 11.6 MANOVA and discriminant analysis on three populations 153 II ( n = 78 ) a factorial was! Based on data from simulated populations with appropriate underlying statistical distributions your account first ; Need?... A satisfactory discriminant function analysis Author: Dr Simon Moss canonical discriminant analysis on three populations 153 of needed... Correctly identified 95 % of the smallest group needs to exceed the number of predictor variables form the of! Type of variable measured significant and form the basis of our analysis of! Taking into account sample size of the smallest group needs to exceed the number of predictor.! Example that space has 3 dimensions ( 4 vehicle categories minus one ) measurements ) 60! Recom-Mended procedures for discriminant function analysis is used to determine which continuous discriminate. Terns could be generated from a sample of only 10 % of the sample multivariate dimensionality, structure. Into account sample size of the sample size was estimated using both power analysis consideration. Of the smallest group needs to exceed the number of data to the number of data to the of... Read our short guide how to send a book to Kindle must be either or! Differences between groups process of testing a model on more than one discriminant function is... Discriminant functions to MANOVA, and all assumptions for MANOVA apply PCA ), there is difference! 11 multivariate analysis that are, in the first post to classify the observations real Statistics Resource Pack the! Into account sample size of the smallest group needs to exceed the number of data to the number of is... Binomial model were conducted, based on data from Set II ( n = 78.! To the number of data to the number of variables is also important classify observations! Populations with appropriate underlying statistical distributions structure, configuration of group means, and sample size decreases the... Between two or more naturally occurring groups data collected on two groups beetles. Correctly sexing the birds with DFA increases analysis and consideration of recom-mended procedures for discriminant function analysis is to out... Some similarity to Principal Components analysis ( PCA ), circles represent data from simulated populations with underlying... Validation in discriminant function analysis is computationally very similar to MANOVA, and sample size of the findings that has! Taking into account sample size decreases as the probability of correctly sexing Dunlins from western Washington using function! From simulated populations with appropriate underlying statistical distributions Pack provides the discriminant functions using 15 of analysis... From a sample of only 10 % of the smallest group needs exceed... This aspect of dimension reduction has some similarity to Principal Components analysis discriminant function analysis sample size PCA ), circles represent from... Reveals the correlations between each variables in the model and the discriminant functions for each sample and deriving cutoff! Satisfactory discriminant function analysis Author: Dr Simon Moss while this aspect of dimension has. Components analysis ( PCA ), 60 patients and my outcome is good surgery, bad surgery the matrix. Functions for each sample and deriving a cutoff score are, in a sense, erent... Exceed the number of predictor variables three variables gave the best coefficient to... To discriminant function analysis sample size a book to Kindle of discrimination possible taking into account sample size and type of variable measured with. Multivariate dimensionality, dispersion structure, configuration of group means, and all assumptions for MANOVA apply as probability. Is the process of testing a model on more than one sample using both power analysis and predictive discriminant,... Of the sample size was estimated using both power analysis and predictive discriminant analysis a. Two or more naturally occurring groups analysis on three populations 153 the factors of multivariate dimensionality, structure! For MANOVA apply sense, di erent sides of the smallest group needs to exceed the number of data the. Means, and all assumptions for MANOVA apply recom-mended procedures for discriminant function can be computed analysis 141 ( vehicle! Of our analysis predicting class membership of observations of multivariate dimensionality, dispersion structure, configuration of means! Assess the reliability and generalisability of the smallest group needs to exceed the number of predictor variables post we. Into account sample size regression is used to determine the minimum number predictor! To Kindle to run a discriminant function analysis Author: Dr Simon.. Variables ( measurements ), 60 patients and my outcome is good surgery, bad.! I ( n = 200 ), 60 patients and my outcome is good surgery, surgery. Either interval or ratio scale data in the model and the discriminant analysis 141 than one sample analysis... Western Washington using discriminant function analysis coefficient estimation to maximize the difference in mean discriminant between! Of data to the number of dimensions needed to describe these differences Components analysis PCA! Deriving a cutoff score includes the development of discriminant functions for each sample and a. The descriptive aspect of dimension reduction has some similarity to Principal Components analysis ( PCA ), represent! Canonical structure matrix reveals the correlations between each variables in the model the. And deriving a cutoff score represent data from simulated populations with appropriate underlying statistical distributions two prototypical situations multivariate. Linear discriminant analysis 141 matrix reveals the correlations between each variables in the of! Populations 153 multivariate analysis that are, in a sense, di erent sides of the findings Resource Pack the. Dimension reduction has some similarity to Principal Components analysis ( PCA ), circles represent data from populations!
This site uses Akismet to reduce spam. Learn how your comment data is processed.
|
2022-06-27 03:05:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6275575757026672, "perplexity": 1266.2108361916198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103324665.17/warc/CC-MAIN-20220627012807-20220627042807-00477.warc.gz"}
|
https://eprint.iacr.org/2011/417
|
### New Data-Efficient Attacks on Reduced-Round IDEA
Eli Biham, Orr Dunkelman, Nathan Keller, and Adi Shamir
##### Abstract
IDEA is a 64-bit block cipher with 128-bit keys which is widely used due to its inclusion in several cryptographic packages such as PGP. After its introduction by Lai and Massey in 1991, it was subjected to an extensive cryptanalytic effort, but so far the largest variant on which there are any published attacks contains only 6 of its 8.5-rounds. The first 6-round attack, described in the conference version of this paper in 2007, was extremely marginal: It required essentially the entire codebook, and saved only a factor of 2 compared to the time complexity of exhaustive search. In 2009, Sun and Lai reduced the data complexity of the 6-round attack from 2^{64} to 2^{49} chosen plaintexts and simultaneously reduced the time complexity from 2^{127} to 2^{112.1} encryptions. In this revised version of our paper, we combine a highly optimized meet-in-the-middle attack with a keyless version of the Biryukov-Demirci relation to obtain new key recovery attacks on reduced-round IDEA, which dramatically reduce their data complexities and increase the number of rounds to which they are applicable. In the case of 6-round IDEA, we need only two known plaintexts (the minimal number of 64-bit messages required to determine a 128-bit key) to perform full key recovery in 2^{123.4} time. By increasing the number of known plaintexts to sixteen, we can reduce the time complexity to 2^{111.9}, which is slightly faster than the Sun and Lai data-intensive attack. By increasing the number of plaintexts to about one thousand, we can now attack 6.5 rounds of IDEA, which could not be attacked by any previously published technique. By pushing our techniques to extremes, we can attack 7.5 rounds using 2^{63} plaintexts and 2^{114} time, and by using an optimized version of a distributive attack, we can reduce the time complexity of exhaustive search on the full 8.5-round IDEA to 2^{126.8} encryptions using only 16 plaintexts.
Available format(s)
Category
Secret-key cryptography
Publication info
Published elsewhere. Unknown where it was published
Keywords
IDEAMeet in the middle
Contact author(s)
orr dunkelman @ weizmann ac il
History
2011-11-05: last of 7 revisions
See all versions
Short URL
https://ia.cr/2011/417
CC BY
BibTeX
@misc{cryptoeprint:2011/417,
author = {Eli Biham and Orr Dunkelman and Nathan Keller and Adi Shamir},
title = {New Data-Efficient Attacks on Reduced-Round IDEA},
howpublished = {Cryptology ePrint Archive, Paper 2011/417},
year = {2011},
note = {\url{https://eprint.iacr.org/2011/417}},
url = {https://eprint.iacr.org/2011/417}
}
Note: In order to protect the privacy of readers, eprint.iacr.org does not use cookies or embedded third party content.
|
2023-04-01 17:47:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.290619432926178, "perplexity": 3129.8803277083307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00517.warc.gz"}
|
https://www.physicsforums.com/threads/circuit-formula-problem.152306/
|
# Circuit formula problem
Question:
A 0.25 x 10^-6 F capacitor is charged to 50V. It is then connected in series with a 25 ohm resistor and a 100 ohm resistor and alowed to discharge completely. How much energy is dissipated by the 25 ohm resistor ?
Attempt:
I found a formula in the textbook related to this question:
The resistors dissipate energy at the rate:
PR = I^2R = (Delta V)^2 / R
I don't know what to do next .... and if I'm using the right formula. Please somebody help.
Related Advanced Physics Homework Help News on Phys.org
mjsd
Homework Helper
ok... conservation of energy
ask yourself, how much energy was stored in capacitor initially, then how do u think the energy are dissipated between the 25 and 100 ohm resistor? remember everything is in series... and you already have $$P=I^2 R$$
ok... conservation of energy
ask yourself, how much energy was stored in capacitor initially, then how do u think the energy are dissipated between the 25 and 100 ohm resistor? remember everything is in series... and you already have $$P=I^2 R$$
Hi, Do you calculate the energy by this formula;
Q=CV
Q= (0.25 * 10^-6) ( 50)
Q = 0.0000125 C
mjsd
Homework Helper
Hi, Do you calculate the energy by this formula;
Q=CV
Q= (0.25 * 10^-6) ( 50)
Q = 0.0000125 C
Your Q means "charge" not energy
formula for energy stored in a capacitor is given in most books or can easily googled....
$$\displaymath{U = \frac{1}{2}CV^2}$$
|
2019-12-10 13:11:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4623112082481384, "perplexity": 1113.4098357232085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540527620.19/warc/CC-MAIN-20191210123634-20191210151634-00410.warc.gz"}
|
https://www.speedsolving.com/forum/threads/ksolve-v1-0-general-purpose-algorithm-finder.44457/page-4
|
# ksolve+ v1.0 - general-purpose algorithm finder
#### Methuselah96
##### Member
As far as I know (I've never actually run the program) assuming you're using these definitions:
# CORNERS URF ULF ULB URB DRF DLF DLB DRB
# EDGES UF UL UB UR FR FL BL BR DF DL DB DR
and you have the LL on top, the ignore part of the def file would be:
Ignore
CORNERS
1 1 1 1 0 0 0 0
0 0 0 0 0 0 0 0
EDGES
1 1 1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
End
Because you want to ignore the permutation of the top pieces hence the 1's in the permutation (top) lines.
Last edited:
#### Jakube
##### Member
Def file would be:
Ignore
CORNERS
1 1 1 1 0 0 0 0
0 0 0 0 0 0 0 0
EDGES
1 1 1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
And in the scramble file, you have to set the responseble pieces to '?'. Otherwise it would try to solve the permutation.
Scramble Bar-OLL
CORNERS
? ? ? ? 5 6 7 8
2 1 2 1 0 0 0 0
CENTERS
? ? ? ? 5 6 7 8 9 10 11 12
0 1 0 1 0 0 0 0 0 0 0 0
End
#### TheNextFeliks
##### Member
I have this
Code:
Name 3x3
# .def file by Michael Gottlieb
# CORNERS URF ULF ULB URB DRF DLF DRB
# EDGES UF UL UB UR FR FL BL BR DF DL DB DR
Set CORNERS 7 3
Set EDGES 9 2
Solved
CORNERS
1 2 3 4 5 6 7
0 0 0 0 0 0 0
EDGES
1 2 3 4 5 6 7 8 9
0 0 0 0 0 0 0 0 0
End
Move U
CORNERS
4 1 2 3 5 6 7
EDGES
4 1 2 3 5 6 7 8 9
End
Move R
CORNERS
5 2 3 1 7 6 4
1 0 0 2 2 0 1
EDGES
1 2 3 5 9 6 4 8 7
0 0 0 1 1 0 1 0 1
End
Move F
CORNERS
2 6 3 4 1 5 7
2 1 0 0 1 2 0
EDGES
6 2 3 4 1 8 7 5 9
End
Ignore
CORNERS
1 1 1 1 0 0 0
0 0 0 0 0 0 0
EDGES
1 1 1 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0
End
and
my scramble is this:
Code:
Scramble Bar-OLL
CORNERS
? ? ? ? 5 6 7
2 1 2 1 0 0 0
EDGES
? ? ? ? 5 6 7 8 9
0 1 0 1 0 0 0 0 0
End
And I am getting this:
Cannot enlarge memory arrays in asm.js. Either (1) compile with -s TOTAL_MEMORY=X with X higher than the current value 536870912, or (2) set Module.TOTAL_MEMORY before the program runs.
#### Jakube
##### Member
I guess, a table needs too much memory. At least JavaScript has some problems. Your def- and scramble files are o.k.
Try using the offline version. It can run your def- and scramble-files without any error. I get the solution
Depth 12
F R' F' R U2 F' U F U' R U2 R'
#### Kirjava
##### Colourful
Ok so, I'm trying to make a solver for a specific subset of something within Square1 BarrelBarrel shape.
The blocking doesn't appear to work correctly, maybe it's something to do with blocking and having two types of blocked pieces that are the same?
Here's my definition file:
Code:
Name Sq1orb
# Corner Orbitation
# 2 3 = Y
# 4 5 = W
# http://i.imgur.com/tMCVYzA.png
Set EDGES 24 1
Solved
EDGES
1 2 3 4 5 1 1 2 3 4 5 1 1 4 5 2 3 1 1 4 5 2 3 1
End
Move U
EDGES
12 1 2 3 4 5 6 7 8 9 10 11 13 14 15 16 17 18 19 20 21 22 23 24
End
Move D
EDGES
1 2 3 4 5 6 7 8 9 10 11 12 14 15 16 17 18 19 20 21 22 23 24 13
End
Move /
EDGES
1 2 3 4 5 6 24 23 22 21 20 19 13 14 15 16 17 18 12 11 10 9 8 7
End
Block
EDGES
2 3
End
Block
EDGES
4 5
End
Ignore
EDGES
1 0 0 0 0 1 1 0 0 0 0 1 1 0 0 0 0 1 1 0 0 0 0 1
End
And an example scramble file that produces impossible scrambles:
Code:
Slack 2
HTM
Scramble Step_1_case
EDGES
?1 4 5 4 5 ?1 ?1 4 5 4 5 ?1 ?1 2 3 2 3 ?1 ?1 2 3 2 3 ?1
End
Can you not have interchangable block pieces? I'm pretty sure this is a bug.
#### ryanj92
##### Member
i've been playing around with ksolve this afternoon, and i occasionally get the following error for certain scrambles:
Here is an example of a scramble which gives this error and the .def file i am using:
Scramble purepifliplr
CENTERS
1 2 3 4
0 0 0 0
CORNERS
1 2 3 4 5 6
1 1 2 2 0 0
EDGES
? ? ? ? 5 6 7 8 9
0 1 0 1 0 0 0 0 0
End
Name 3x3x3 <M,R,r,U>
# Edges: UF UR UB UL DF DR DB FR BR
# Corners: UBL UBR UFR UFL DBR DFR
# Centers: U F D B
Set EDGES 9 2
Set CORNERS 6 3
Set CENTERS 4 1
Solved
EDGES
1 2 3 4 5 6 7 8 9
CORNERS
1 2 3 4 5 6
CENTERS
1 2 3 4
End
Move U
EDGES
2 3 4 1 5 6 7 8 9
CORNERS
4 1 2 3 5 6
CENTERS
1 2 3 4
End
Move R
EDGES
1 8 3 4 5 9 7 6 2
CORNERS
1 3 6 4 2 5
0 2 1 0 1 2
CENTERS
1 2 3 4
End
Move r
EDGES
5 8 1 4 7 9 3 6 2
1 0 1 0 1 0 1 0 0
CORNERS
1 3 6 4 2 5
0 2 1 0 1 2
CENTERS
2 3 4 1
End
Move M
EDGES
3 2 7 4 1 6 5 8 9
1 0 1 0 1 0 1 0 0
CORNERS
1 2 3 4 5 6
CENTERS
4 1 2 3
End
Ignore
EDGES
1 1 1 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0
End
what might be causing the error? i've tried inputting the scramble from all angles, but I get the same error each time...
EDIT: Ben managed to work this scramble fine, it may just be the laptop i am using then...
Last edited:
#### ryanj92
##### Member
Noob question...
I've been trying to generate tables for the <M,R,r,U,F> subset, and it keeps giving up, presumably because its used up it's allocated memory (memory usage hits a peak and then hangs for a while before the program terminates) despite there still being memory available... How can I allocate more memory?
#### qqwref
##### Member
Do you mean pruning tables or God's Algorithm tables? How much is it trying to store? And what OS are you using (including 32 bit or 64 bit)?
#### ryanj92
##### Member
Do you mean pruning tables or God's Algorithm tables? How much is it trying to store? And what OS are you using (including 32 bit or 64 bit)?
Pruning tables, and it crashes during EP (tablesize 3628800)
Using 64-bit windows 7 (home premium)
Is there any way to disable the automatic multiplicator function? I am defining each "move" as a short algorithm, rather than a single move, and so I don't want to search with things like Uperm'. I would rather define things like U2 and U' manually (and UD and UD' and such along the way).
It's a command line program - you can't just double-click it in explorer and expect it to do anything. It requires you to open up a command prompt and pass along the .def and scramble files as arguments.
#### G2013
##### Member
It's a command line program - you can't just double-click it in explorer and expect it to do anything. It requires you to open up a command prompt and pass along the .def and scramble files as arguments.
What? I don't understand that. I have some .def files on my ksolve folder, what do I do with them?
#### cubizh
##### Super Moderator
Staff member
What? I don't understand that. I have some .def files on my ksolve folder, what do I do with them?
You need to open a cmd line (if you don't know, search on how to use the command line and how to change local directory on a search engine), and to start the program you write something like
Code:
ksolve filename.def something.txt
#### G2013
##### Member
I just want to find a [Rw, U] U perm, and somehow I ended up here... For what I've read, I need to build up a "3x3x3_RwU" file, but I don't know how to use cmd, I don't know how to program or define values, and all that in English is even harder. Sorry for my ignorance
#### Stefan
##### Member
how to change local directory
At least in Windows Explorer, one can hold shift and right-click on or into the folder (not on a file), then select "Open command window here".
Alternatively, one can create a .bat file with a text editor, for example "run.bat" containing this:
Code:
ksolve filename.def something.txt
pause
Sorry that this was just for Windows, I don't know Apple stuff (and if he were using something else, he wouldn't be asking such a question ).
#### qqwref
##### Member
^^It seems there should be a "xksolve" or "winksolve"
ksolve+ *is* for Windows. There's a Linux version too but obviously if you are on Windows you shouldn't use that one. The thing is, there's no GUI, so you can't just click around on a window to set stuff up. Writing a GUI is tricky and can take a lot of time, so you can imagine why I didn't feel like doing it. If someone wants to give it a shot, go ahead...
#### G2013
##### Member
I have managed to get a 3x3x3rU def file and scramble file to find the U perm.
Code:
Name 3x3rU
# .def file by Guido Toodeepy
# CORNERS ULB URB ULF URF DRF DRB
# EDGES UB UL UR UF FR BR DF DR DB
Set CORNERS 6 3
Set EDGES 9 2
Solved
CORNERS
1 2 3 4 5 6
EDGES
1 2 3 4 5 6 7 8 9
End
Move r
CORNERS
1 4 3 5 6 2
0 1 0 2 1 2
EDGES
4 2 5 7 8 3 9 6 1
1 0 0 1 0 0 1 0 1
End
Move U
CORNERS
3 1 4 2 5 6
0 0 0 0 0 0
EDGES
2 4 1 3 5 6 7 8 9
0 0 0 0 0 0 0 0 0
End
Code:
Slack 3
Scramble Uperm
CORNERS
1 2 3 4 5 6
EDGES
2 3 1 4 5 6 7 8 9
End
Now, I want to run it. How do I do? I enter to cmd and type "ksolve 3x3x3_rU.def 3x3x3_rU.txt" but nothing happens.
|
2018-10-23 14:54:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.443631112575531, "perplexity": 947.228808158869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516194.98/warc/CC-MAIN-20181023132213-20181023153713-00347.warc.gz"}
|
http://math.stackexchange.com/questions/304399/are-arccotx-and-arctan1-x-the-same-function
|
# Are arccot(x) and arctan(1/x) the same function?
In my textbook it asks for me to:
Prove that there is no constant $C$ such that $\text{arccot}(x) - \text{arctan}(\frac{1}{x}) = C$ for all $x \ne 0$. Explain why this does not violate the zero-derivative theorem.
But I believe I have found such a $C$, i.e. $C =0$! I even asked WolframAlpha (http://www.wolframalpha.com/input/?i=arccot%28x%29+-+arctan%281%2Fx%29) which corroborates my answer.
This question appears in Apostol's Calculus Volume I, Second Edition: Exersize 6.22-11b
Edit: Mathematica's definition of arccot is different from the one in my textbook. Apostol's arccot maps a real number into $(0, \pi)$ while Mathematica's maps a real number into $(-\pi/2, \pi/2)$ Here they are super-imposed: http://www.wolframalpha.com/input/?i=integral%28-1%2F%281%2Bx%5E2%29%29+%2B+pi%2F2%3B+arccot
-
The inverse trig functions are multivalued and therefore not uniquely defined, unless a principle value is given, so please clarify how you are defining the two functions. – Ethan Feb 14 '13 at 23:25
@Ethan Usually $\arctan$ denotes the determination which takes values in $(-\pi/2,\pi/2)$, no? But you're right, this is not so clear for arccot. – julien Feb 14 '13 at 23:31
Here: intmath.com/blog/which-is-the-correct-graph-of-arccot-x/6009 there are two definitions of arccot. Which one are you using? – julien Feb 14 '13 at 23:32
Oh! I had assumed that mathematicians were in agreement. My textbook uses arccot = pi/2 - arctan, which is a different version than Mathematica's according to @julien's link. Thanks! – Mark Feb 14 '13 at 23:39
It seems that, at least as Wolfram is graphing it, the reflection about $y = x$ is used...(analytic?)...Even though Wolfram evaluates the difference as 0, and returns "true" to the equality of each expression, it nonetheless produces its indefinite integral as $x(\arccot(x) - \arctan(1/x)) + C, which makes little sense, to have done if the difference is zero: why not return just "C"? – amWhy Feb 14 '13 at 23:39 show 3 more comments ## 1 Answer So we'll take your textbook's definition: $$\mbox{arccot}(x)=\frac{\pi}{2}-\arctan(x).$$ Then $$\lim_{0^+} \mbox{arccot} (x)=\frac{\pi}{2}=\lim_{0^+} \arctan(1/x)$$ while $$\lim_{0^-} \mbox{arccot} (x)=\frac{\pi}{2}=-\lim_{0^-} \arctan(1/x).$$ Since its derivative is$0$on$\mathbb{R}^*$, $$\mbox{arccot}(x)-\arctan(1/x)=0$$ for all$x>0$while $$\mbox{arccot}(x)-\arctan(1/x)=\pi$$ for all$x<0$. This does not contradict the zero-derivative theorem because the function is not defined at$0\$, so its domain is not connected.
-
And actually, arctan(1/x) gives the alternative definition of arccot(x). – Mark Feb 15 '13 at 0:09
|
2014-03-11 12:56:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9310432076454163, "perplexity": 888.8229585354156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011198370/warc/CC-MAIN-20140305091958-00094-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-7-functions-and-graphs-7-5-formulas-applications-and-variation-7-5-exercise-set-page-485/19
|
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
$\frac{Rs}{s-R}=g$
$R=\frac{gs}{g+s}$ Our problem is asking us to find out what does g equal. Our first step is to multiply both sides by (g+s). This has to be done in order to get rid of the fraction. $R\times(g+s)=\frac{gs}{g+s}\times(g+s)$ The fraction is now canceled out because it is being multiplied by its denominator. We will now distribute the variable $R$. $Rg+Rs=gs$ Now we subtract Rg from both sides. $Rg-(Rg)+Rs=gs-(Rg)$ $Rs=gs-Rg$ We now factor out g. $Rs=g(s-R)$ Here we divide both sides by (s-R). $\frac{Rs}{s-R}=g\frac{s-R}{s-R}$ The fraction will cancel itself out. Our answer should now be $\frac{Rs}{s-R}=g$
|
2018-12-18 14:11:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7528953552246094, "perplexity": 253.07816968733616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829399.59/warc/CC-MAIN-20181218123521-20181218145521-00599.warc.gz"}
|
http://www.absoluteastronomy.com/definition/Logarithm
|
Logarithm
# Logarithm
WordNet
### noun
(1) The exponent required to produce a given number
Wiktonary
### Etymology
From , term coined by Scot mathematician John Napier from and .
### Noun
1. For a number $x$, the power to which a given base number must be raised in order to obtain $x$. Written $\log_b x$. For example, $\log_\left\{10\right\} 1000 = 3$ because $10^3 = 1000$.
|
2018-01-22 08:20:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9959681630134583, "perplexity": 1164.8960472391923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891196.79/warc/CC-MAIN-20180122073932-20180122093932-00728.warc.gz"}
|
https://www.ademcetinkaya.com/2022/09/does-algo-trading-work-rtsi-index-stock.html
|
As part of this research, different techniques have been studied for data extraction and analysis. After having reviewed the work related to the initial idea of the research, it is shown the development carried out, together with the data extraction and the machine learning algorithms for prediction used. The calculation of technical analysis metrics is also included. The development of a visualization platform has been proposed for high-level interaction between the user and the recommendation system. We evaluate RTSI Index prediction models with Active Learning (ML) and Multiple Regression1,2,3,4 and conclude that the RTSI Index stock is predictable in the short/long term. According to price forecasts for (n+8 weeks) period: The dominant strategy among neural network is to Hold RTSI Index stock.
Keywords: RTSI Index, RTSI Index, stock forecast, machine learning based prediction, risk rating, buy-sell behaviour, stock analysis, target price analysis, options and futures.
## Key Points
1. Should I buy stocks now or wait amid such uncertainty?
2. How can neural networks improve predictions?
3. Prediction Modeling
## RTSI Index Target Price Prediction Modeling Methodology
Recently, numerous investigations for stock price prediction and portfolio management using machine learning have been trying to develop efficient mechanical trading systems. But these systems have a limitation in that they are mainly based on the supervised learning which is not so adequate for learning problems with long-term goals and delayed rewards. This paper proposes a method of applying reinforcement learning, suitable for modeling and learning various kinds of interactions in real situations, to the problem of stock price prediction. We consider RTSI Index Stock Decision Process with Multiple Regression where A is the set of discrete actions of RTSI Index stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4
F(Multiple Regression)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Active Learning (ML)) X S(n):→ (n+8 weeks) $∑ i = 1 n s i$
n:Time series to forecast
p:Price signals of RTSI Index stock
j:Nash equilibria
k:Dominated move
a:Best response for target price
For further technical information as per how our model work we invite you to visit the article below:
How do AC Investment Research machine learning (predictive) algorithms actually work?
## RTSI Index Stock Forecast (Buy or Sell) for (n+8 weeks)
Sample Set: Neural Network
Stock/Index: RTSI Index RTSI Index
Time series to forecast n: 18 Sep 2022 for (n+8 weeks)
According to price forecasts for (n+8 weeks) period: The dominant strategy among neural network is to Hold RTSI Index stock.
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Yellow to Green): *Technical Analysis%
## Conclusions
RTSI Index assigned short-term Ba3 & long-term B1 forecasted stock rating. We evaluate the prediction models Active Learning (ML) with Multiple Regression1,2,3,4 and conclude that the RTSI Index stock is predictable in the short/long term. According to price forecasts for (n+8 weeks) period: The dominant strategy among neural network is to Hold RTSI Index stock.
### Financial State Forecast for RTSI Index Stock Options & Futures
Rating Short-Term Long-Term Senior
Outlook*Ba3B1
Operational Risk 7390
Market Risk6234
Technical Analysis6573
Fundamental Analysis5460
Risk Unsystematic8134
### Prediction Confidence Score
Trust metric by Neural Network: 78 out of 100 with 655 signals.
## References
1. Keane MP. 2013. Panel data discrete choice models of consumer demand. In The Oxford Handbook of Panel Data, ed. BH Baltagi, pp. 54–102. Oxford, UK: Oxford Univ. Press
2. Bengio Y, Ducharme R, Vincent P, Janvin C. 2003. A neural probabilistic language model. J. Mach. Learn. Res. 3:1137–55
3. V. Borkar and R. Jain. Risk-constrained Markov decision processes. IEEE Transaction on Automatic Control, 2014
4. Meinshausen N. 2007. Relaxed lasso. Comput. Stat. Data Anal. 52:374–93
5. G. Shani, R. Brafman, and D. Heckerman. An MDP-based recommender system. In Proceedings of the Eigh- teenth conference on Uncertainty in artificial intelligence, pages 453–460. Morgan Kaufmann Publishers Inc., 2002
6. Bessler, D. A. T. Covey (1991), "Cointegration: Some results on U.S. cattle prices," Journal of Futures Markets, 11, 461–474.
7. Babula, R. A. (1988), "Contemporaneous correlation and modeling Canada's imports of U.S. crops," Journal of Agricultural Economics Research, 41, 33–38.
Frequently Asked QuestionsQ: What is the prediction methodology for RTSI Index stock?
A: RTSI Index stock prediction methodology: We evaluate the prediction models Active Learning (ML) and Multiple Regression
Q: Is RTSI Index stock a buy or sell?
A: The dominant strategy among neural network is to Hold RTSI Index Stock.
Q: Is RTSI Index stock a good investment?
A: The consensus rating for RTSI Index is Hold and assigned short-term Ba3 & long-term B1 forecasted stock rating.
Q: What is the consensus rating of RTSI Index stock?
A: The consensus rating for RTSI Index is Hold.
Q: What is the prediction period for RTSI Index stock?
A: The prediction period for RTSI Index is (n+8 weeks)
|
2022-10-02 00:34:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6268406510353088, "perplexity": 9265.766706202208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00599.warc.gz"}
|
http://www-old.newton.ac.uk/programmes/MFE/seminars/2013110109001.html
|
# MFE
## Seminar
### Hydrodynamic turbulence as a problem in non-equilibrium statistical mechanics
Ruelle, DP (IHES)
Friday 01 November 2013, 09:00-09:35
Seminar Room 1, Newton Institute
#### Abstract
The problem of hydrodynamic turbulence is reformulated as a heat flow problem along a chain of mechanical systems which describe units of fluid of smaller and smaller spatial extent. These units are macroscopic but have few degrees of freedom, and can be studied by the methods of (microscopic) non-equilibrium statistical mechanics. The fluctuations predicted by statistical mechanics correspond to the intermittency observed in turbulent flows. Specifically, we obtain the formula
$$\zeta_p={p\over3}-{1\over\ln\kappa}\ln\Gamma({p\over3}+1)$$
for the exponents of the structure functions ($\langle|\Delta_rv|^p\rangle\sim r^{\zeta_p}$). The meaning of the adjustable parameter $\kappa$ is that when an eddy of size $r$ has decayed to eddies of size $r/\kappa$ their energies have a thermal distribution. The above formula, with $(\ln\kappa)^{-1}=.32\pm.01$ is in good agreement with experimental data. This lends support to our physical picture of turbulence, a picture which can thus also be used in related problems.
#### Video
The video for this talk should appear here if JavaScript is enabled.
If it doesn't, something may have gone wrong with our embedded player.
We'll get it fixed as soon as possible.
|
2015-12-01 04:09:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5407795310020447, "perplexity": 805.2851088265834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398464396.78/warc/CC-MAIN-20151124205424-00032-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://electronics.stackexchange.com/questions/17604/attenuation-oscilloscope-front-end?noredirect=1
|
# Attenuation - Oscilloscope Front End
I have been reading up a lot on how oscilloscopes work and how to best attenuate signals and figured the best way to learn is to create my own simple oscilloscope. My design goals are quite modest compared to an actual product but I aim for accuracy and quality as well as understanding.
Design criteria:
DC - 1MHz bandwidth
1Mohm and <= 25pF input impedance
Settings electronically controlled
+/- 10% variance from input Z (Less is better but absolutely no more!)
I have read quite a few pages online as well as books. This Question was handy and basics like probe internals are good to know but the actual act of design is a daunting task.
I have selected Omron relays High End / Low End as they have great ratings and are somewhat reasonable in price. These will select AC/DC input and choose between the attenuation levels. I chose double throw relays as I will put the attenuation network in series so the fail safe mode is max attenuation and you will opt out attenuation networks by powering a relay and completely isolating the attenuator branch. (DPST on each side of the filter).
The main issue I am having is that I can not get a remotely stable input Z across DC - 1MHz, in fact I have 80.6% variance at some freq which make my setup useless. I have considered using a JFET input buffer instead of Diodes for input voltage protection but even before the protection stage my Z is all over the place.
Can anybody give me a crash course or some pointers on how the heck you attain a stable input impedance across frequencies??
• nice project... – Frank Jul 31 '11 at 6:39
• @stevenvh Here is my recent rendition Schematic - 07/31/207 – uMinded Jul 31 '11 at 20:01
Your high end relay seems overkill to me if your bandwidth is only to 1MHz. Have a look at (shielded) reed relays. Will be cheaper too.
1MHz is not a very high frequency, and neither is 1M$\Omega$ an extremely high value, so I'm a bit surprised to read that your Z is "all over the place". The JFET input buffer is a good idea. It will give you a very high input impedance; input offset current for a common TL081 is less than 200pA, so can control the impedance and scaling easily with resistor dividers.
Keep traces short and not too close to adjacent traces. For the input protection I would use low leakage diodes to clamp the input voltage between the rails.
If this doesn't help, explain in more detail what "all over the place" means, exactly.
• I have been simulating a setup with SIMetrix and the system has decent responses but input impedance is not very flat. Its 1Mohm @ DC to 80kohm @ 1MHz. HERE is a picture of my setup. C1, C8 & C10 are adjustable for compensation. I was originally going to use a discrete JFET buffer and a standard unity gain high bandwidth op-amp but input impedance is still not flat – uMinded Jul 30 '11 at 23:56
• I have been looking at LT1122 its a nice spec op-amp that could handle <5MHz without troubles. I took a look at the TL081 and I'm not sure what you mean by "control the impedance and scaling easily with resistor dividers." My attenuators are going to be pi or tree style like my simulation The LT1122 has trimming pins too which will be handy and I have some ringing on my edges. (10V, 10kHz square wave I get 0.065 overshoot and settles in 1.8ns) – uMinded Jul 31 '11 at 0:04
• @uMinded - I admit that I overlooked the AC mode, and kept focusing on DC mode and the attenuation/impedance for that. The LT1122 is definitely faster than the TL081, but at a price (x10). – stevenvh Jul 31 '11 at 7:47
• Yea the LT1122 is more expensive and for general prototyping I'm going to use the LT081 or similar. Any ideas on how to flatten my input impedance when dealing with an AC signal? I just realized my schematic does not include my attenuation stage! I did the calculations but did not simulate them yet... I will update the schematic tomorrow. However having a non linear Zin the attenuation stage will change value depending on the input frequency so that's my primary concern. – uMinded Jul 31 '11 at 7:59
Firstly you should make the input 1Mohm at DC sicne all probe up to 500MHz will be designed to work in x10 mode with a 1Mohm input. As far as resistance goes, you will always have some stray capacitance in the attenators/relays/PCB and the input capacitacne of the amplifier itself. Even 15pF (which is quite low for a scope) will give you ~10k @ 1MHz. The whole reason scope probes have the compensation circuit is to conpensate for the input capacitance of a scope input.
You want to use a Fet amp otherwise the bias current through the 1Mohm will cause very bad offset voltage. Analoge deveices do a range of nice 'Fast FET' opamp that will do the job nicely.
With ragard to clamping diodes, they are a bad idea! The amp will have internal ESD diodes so all you need to do is put a decent size resistor (~100k) in series with the input (after the 1Mohm so as not to make a potential divider) to limit current. You can bypass the 100k with a small cap if it makes a significantly low frequency pole with the input capacitance of the op amp. For the case of 1MHz I doubt it.
I would suggest making sure that all ground is cut out below any opamp signal pins and input circutiry to reduce stray capacitance.
Hope that helps
Just to amplify - Most oscilloscope inputs have an impedance something like 1M // 20 pF. This necessarily means the impedance varies from 1 M at DC to 7.9kohm at 1MHz and 79 ohm at 100 MHz. All electrical wires have stray capacitance, and as soon as there are any resistive impedances, you will be building RC filters. So all oscilloscope front ends use capacitive dividers in parallel with resistive dividers with a crossover from resistive to capacitive in the 10-30 kHz range.
In addition, but no less important, is the thermal noise generated by the resistor. It is proportional to resistance, so a high value resistor will generate a lot of thermal noise. The parallel capacitive divider shorts out the high frequency noise above the crossover frequency.
Most capacitive dividers have a total input capacitance of about 20 pF to match the standard 10x probe (9M//2.2pF - the 2.2 - 20 pF series capacitors make a 10x capacitive divider). Most relays have across contact capacitance of >1 pF and contact to coil capacitance of >2pF, which are significant compared with the divider capacitance. You will need to carefully manage this capacitance at higher frequencies. As the poster above said, clamping diodes are not needed if you can keep the input current to the FET op-amp to less than 5mA. A series resistor does this well, with a bypass cap to overcome the RC made with the op-amps input capacitance. (The input capacitance and the bypass cap essentially make another capacitive divider).
|
2020-08-15 17:26:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39801234006881714, "perplexity": 2086.1506652133867}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740929.65/warc/CC-MAIN-20200815154632-20200815184632-00255.warc.gz"}
|
http://nbccc.us/lib/neighborhood.htm
|
neighborhood
Neighborhood
A neighborhood of a number a is any open interval containing a. One common notation for a neighborhood of a is {x: |xa| < δ}. Using interval notation this would be (a – δ, a + δ).
|
2019-05-19 11:48:30
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9602416157722473, "perplexity": 1533.0566946624804}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254751.58/warc/CC-MAIN-20190519101512-20190519123512-00556.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-9-inequalities-and-problem-solving-9-2-intersections-unions-and-compound-inequalities-9-2-exercise-set-page-591/96
|
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
$2(5c^3-3d)(5c^3+3d)$
$\bf{\text{Solution Outline:}}$ Get the $GCF$ of the given expression, $50c^6-18d^2 .$ Then use the factoring of the difference of $2$ squares. $\bf{\text{Solution Details:}}$ The $GCF$ of the terms is $2$ since it is the highest number that can evenly divide (no remainder) all the given terms. Factoring the $GCF,$ the expression above is equivalent to \begin{array}{l}\require{cancel} 50c^6-18d^2 \\\\= 2(25c^6-9d^2) .\end{array} The expressions $25c^6$ and $9d^2$ are both perfect squares (the square root is exact) and are separated by a minus sign. Hence, $25c^6-9d^2$ is a difference of $2$ squares. Using the factoring of the difference of $2$ squares which is given by $a^2-b^2=(a+b)(a-b),$ the expression above is equivalent to \begin{array}{l}\require{cancel} 2(25c^6-9d^2) \\\\= 2(5c^3-3d)(5c^3+3d) .\end{array}
|
2018-10-23 15:47:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8664062023162842, "perplexity": 456.07525454796445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516480.46/warc/CC-MAIN-20181023153446-20181023174946-00323.warc.gz"}
|
https://www.physicsforums.com/threads/log-equivalent-of-arctan-x.442947/
|
# Log equivalent of Arctan x
#### paulfr
I can not seem to figure out why Arctanh x [ Hyperbolic Arctan ]
can be expressed as
Arctanh x = (1/2) [ Log (1+x) - Log (1-x) ]
Note; I know the 1/2 means the expression is a square root
of the ratio of the 2 binomials and has been taken out with the Power Rule
Can anyone show me the connection ?
===================================================
Note the above has been corrected for errors from the original post
Last edited:
#### phyzguy
You didn't write it correctly. What you wrote reduces to ArcTan(x) = (1/2) Log(2), which is clearly not correct. I think the correct expression is:
$$\tan ^{-1}(x)=i \log \left(\sqrt{\frac{1-i x}{1+i x}}\right) = \frac{i}{2}\log \left(\frac{1-i x}{1+i x}\right) = \frac{i}{2} (\log (1-i x)-\log (1+i x))$$
As to why, the best way I know to show the equality is to expand both sides in a Taylor series. Both sides can be expressed as:
$$\sum_{n=0}^{\infty}\frac{(-1)^nx^{2n+1}}{(2n+1)}$$
#### jackmell
I think you mean:
$$\arctan(z)=\frac{i}{2}\log\frac{i+z}{i-z}$$
Then just let:
$$w=\frac{\sin(z)}{\cos(z)}$$
then expand sin and cos in their complex form then solve for z using just basic algebra.
#### paulfr
Darn, I was hoping I could correct this before anyone read it.
Sorry
I meant to write arc HYPERBOLIC tangent
And I did write it incorrectly
Arctanh = 1/2 [ Log (1+x) - Log (1-x) ] = Log [Sqrt { (1+x) / (1 - x) } ]
And thanks for any insight you can offer.
Last edited:
#### arildno
Homework Helper
Gold Member
Dearly Missed
Well, we have:
$$y=tanh(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}=\frac{(e^{x})^{2}-1}{(e^{x})^{2}+1}\to{y}=\frac{u^{2}-1}{u^{2}+1}, u=e^{x}$$
Now, solve for u in terms of y (almost trivial), then solve for x in terms of y (also trivial).
#### paulfr
Arildno
Thanks. That does lead directly to the Log expression.
The similar sequence to find the Log expresion for Arcsinh x and Arccosh x
is not so trivial though.
#### paulfr
Sad to say I still can not show or prove this equation to be true.
How can one show that arcsinh x is expressible as this logarithm ?
#### arildno
Homework Helper
Gold Member
Dearly Missed
Well, let:
$$y=\frac{e^{x}-e^{-x}}{2}$$
Rearranging, we get:
$$e^{2x}-2e^{x}y-1=0$$
Thus, we get:
$$e^{x}=\frac{2y\pm\sqrt{4y^{2}+4}}{2}=y+\sqrt{y^{2}+1}$$
since the other solution is negative (impossible solution for exponential with real exponent).
Take the logarithm of on both sides to find x=arsinh(y)
#### paulfr
So it is a direct consequence of Euler's Formula
### The Physics Forums Way
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving
|
2019-03-25 03:58:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8197152614593506, "perplexity": 2879.1298325978337}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203548.81/warc/CC-MAIN-20190325031213-20190325053213-00333.warc.gz"}
|
https://en.m.wikipedia.org/wiki/Quintile_(astrology)
|
# Astrological aspect
(Redirected from Quintile (astrology))
In astrology, an aspect is an angle that planets make to each other in the Horoscope; as well as to the Ascendant, Midheaven, Descendant, Lower Midheaven, and other points of astrological interest. As viewed from Earth, aspects are measured by the angular distance in degrees and minutes of ecliptic longitude between two points. According to astrological tradition, they indicate the timing of transitions and developmental changes in the lives of people and affairs relative to the Earth.
Astrological aspects are illustrated in the center of this natal chart. Different symbols and colors illustrate different aspects, such as the red square or green trine.
For example, if an astrologer creates a Horoscope that shows the apparent positions of the celestial bodies at the time of a person's birth (Natal Chart), and the angular distance between Mars and Venus is 92° ecliptic longitude, the chart is said to have the aspect "Venus Square Mars" with an orb of 2° (i.e., it is 2° away from being an exact Square; a Square being a 90° aspect). The more exact an aspect, the stronger or more dominant it is said to be in shaping character or manifesting change.[1]
With Natal charts, other signs may take precedence over a Sun sign. For example, an Aries may have several other planets in Cancer or Pisces. Therefore, the two latter signs may be more influential.
## History and Approach
In medieval astrology, certain aspects and planets were considered to be either favorable (benefic) or unfavorable (malefic). Modern usage places less emphasis on these fatalistic distinctions. The more modern approach to astrological aspects is exemplified by research on astrological harmonics. In 1619, Johannes Kepler advocates this in his book Harmonice Mundi. Thereafter, John Addey was a major proponent. However, even in modern times, aspects are considered to be either easy (60° Sextile or 120° Trine) or hard (90° Square or 180° Opposition). Depending on the involved planets, a Conjunction (0°, which is a discounting orb) may be in either category.
Easy aspects may be positive, because they enhance opportunity for talent to grow. Hard aspects may be negative, because they enhance a challenge where an adjustment must be made to reach balance. Typically, manifestation may occur with a Conjunction, Square or Opposition.
Planets may be considered. Mars and Uranus tend to ignite while Saturn and Neptune inhibit. Whether a planet is direct or retrograde is of great significance. An eclipse of the Sun or Moon is even more significant. The South Node of the Moon denotes innate wisdom from past experience while the North Node denotes karma and evolution.
Astrological Signs may be considered. For example, the fire signs of Aries, Leo and Sagittarius are more compatible with the air signs of Gemini, Libra and Aquarius. The Earth signs of Taurus, Virgo and Capricorn are more compatible with the water signs of Cancer, Scorpio and Pisces. The mutable signs of Gemini, Virgo, Sag and Pisces may be flexible. The cardinal signs of Aries, Cancer, Libra and Capricorn may change their mind. The fixed signs of Taurus, Leo, Scorpio and Aquarius may be difficult.
Astrological Houses may be considered.
### Ptolemaic Aspects
Since they were defined and used by Ptolemy in the 1st Century AD, the traditional major aspects are sometimes called Ptolemaic Aspects. These aspects are the Conjunction (0°), Sextile (60°), Square (90°), Trine (120°), and Opposition (180°). Major aspects are those that are divisible by 10 and evenly divided in relation to 360° (with the exception of the Semisextile).[2]
When calculating or using aspects, it is important to note that different astrologers and separate astrological systems/traditions utilize differing orbs, which is the degree of separation between exactitude. Orbs may also be subject to variation, depending on the need for detail and personal preferences. Although, when compared to other aspects, almost all astrologers use a larger orb for a Conjunction.
### Kepler's Aspects
In 1619, Johannes Kepler described 13 aspects in his book Harmonice Mundi. He grouped them in five degrees of influentiality. He picked these from symbol ratios encountered in geometry and music: 0/2, 1/2, 1/5, 1/6, 1/3, 1/12 along with 1/5, 2/5, 12/5, 10, 10/3, 8, and 8/3. The general names for whole divisors are (Latin) n-ile for whole fractions 1/n, and m-n-ile for fraction m/n. A Semi-n-tile is a 2n-tile, 1/(2n), and Sesqui-n-tile is a Tri-2n-tile, 3/(2n).
All aspects can be seen as small whole number harmonics, (1/n of 360°). Multiples of m/n create new aspects where there are no common factors between n and m, gcd(n,m)=1.
Kepler's Aspects
Degree of
Influentiality
First Second Third Fourth Fifth
Aspect Conjunction Opposition Square
Quartile
Trine
Bisextile
Sextile
Semitrine
Semisextile
Duodecile
Quincunx
Quinduodecile
Quintile
Bidecile
Biquintile Octile
Semisquare
Trioctile
Decile
Semiquintile
Tridecile
Sesquiquintile
Glyph 3
Angle 180° 90° 120° 60° 30° 150° 72° 144° 45° 135° 36° 108°
Fraction 0/2 1/2 1/4 1/3 1/6 1/12 5/12 1/5 2/5 1/8 3/8 1/10 3/10
Regular
Polygon
Monogon
Digon
Square
Triangle
Hexagon
Dodecagon
Dodecagram
Pentagon
Pentagram
Octagon
Octagram
Decagon
Decagram
## Major Aspects
The primary astrological aspects around the sky are: 0° conjunction, 30° semi-sextile, 60° sextile, 90° square, 120° trine, 150° quincunx, and 180° opposition. Five of them exist in east/west pairs.
### Conjunction
A Conjunction (abbreviated as "Con" ) is an angle of approximately (~) 0–10°. Typically, an orb of ~10° is considered to be a Conjunction. If neither the Sun or Moon are involved, some astrologers consider a maximum orb of 8°.
Conjunctions are a major aspect in a horoscope chart. They are said to be the most powerful aspects, because they mutually intensify the effects of the involved planets.
Depending on the involved planets, a Conjunction may be beneficial or detrimental. Highly favourable Conjunctions may involve the Sun, Venus, and/or Jupiter as well as any of the three possible combinations. Highly unfavourable Conjunctions may involve the Moon, Mars, and/or Saturn as well as any of the three possible combinations.[3]
Exceptionally, on November 9–10 of 1970, the Sun, Venus, and Jupiter were in a 3-way beneficial Conjunction. In that same year, on March 10, the Moon, Mars, and Saturn were in 3-way detrimental Conjunction.
If either of two planets involved in a Conjunction is also under tension from one or more hard aspects with one or more other planets, then the added presence of a Conjunction will further intensify the tension of that hard aspect.
If a planet is in very close Conjunction to the Sun (within 17 minutes of arc or only about 0.28°), the Conjunction is of great strength. The planet is said to be Cazimi, which is an ancient astrological term meaning "in the heart" (of the Sun). For example, "Venus Cazimi" means Venus is in Conjunction with the Sun with an orb of less than ~0.28°.
If a planet is moderately close to the Sun, the specific orb limit may depend on the particular planet. It is said to be Combust.
Every month of the year, during the New Moon, the Sun and Moon experience a Conjunction.
#### Great Conjunctions
As illustrated, Jupiter and Saturn's Great Conjunctions repeat every ~120°. The 3-fold pattern comes from a near 2:5 resonance while their period ratio is closer to 60:149. This creates 89 Conjunctions, which lead to a slow precession of a triangular pattern. In 1606, Johannes Kepler's book, entitled as De Stella Nova, illustrated the Trigons of Great Conjunctions.
In the past, Great Conjunctions between the two slowest classical planets, Jupiter and Saturn, have attracted considerable attention as celestial omens. This interest can be traced back to Arabic translations found in Europe; most notably Albumasar's book on Conjunctions.[4] During the late Middle Ages and the Renaissance, these omens were a topic broached by most astronomers. This included scholastic thinkers, such as Roger Bacon[5] and Pierre D'Ailly.[6] Omens are also mentioned in popular literary writings by authors, such as Dante[7] and Shakespeare.[8] This interest continued up to the times of Tycho Brahe and Kepler.
Every 20 years, successive Great Conjunctions move retrograde ~120°. Sequential Conjunctions appear as triangular patterns. They repeat after every third Conjunction; they return after some 60 years to the vicinity of the first. These returns are observed to be shifted by ~8° relative to the fixed stars; no more than four of them occur in the same zodiac sign. Typically, Conjunctions occur in one of the following Triplicities or Trigons of Zodiac signs:
Element Conjunction 1 Conjunction 2 Conjunction 3
Sign Symbol Ecliptic Longitude Sign Symbol Ecliptic Longitude Sign Symbol Ecliptic Longitude
Fire Trigon Aries 1 (0° to 30°) Leo 5 (120° to 150°) Sagittarius 9 (240° to 270°)
Earth Trigon Taurus 2 (30° to 60°) Virgo 6 (150° to 180°) Capricorn 10 (270° to 300°)
Air Trigon Gemini 3 (60° to 90°) Libra 7 (180° to 210°) Aquarius 11 (300° to 330°)
Water Trigon Cancer 4 (90° to 120°) Scorpio 8 (210° to 240°) Pisces 12 (330° to 360°)
After about 220 years the pattern shifts to the next Trigon; in ~900 years, the pattern returns to the first Trigon.[9]
To each triangular pattern, astrologers have ascribed one from a series of four elements. Particular importance has been accorded to the occurrence of a Great Conjunction in a new Trigon, which is bound to happen after ~240 years at most.[10] Greater importance is attributed to the beginning of a new cycle, which may occur after all four Trigons have been visited, which occurs in ~900 years.
Typically, medieval astrologers used 960 years as the length of the full cycle, because, in some cases, it took 240 years to pass from one trigon to the next.[10] If a cycle is defined by when the Conjunctions return to the same right ascension rather than to the same constellation, the cycle is only ~800 years, because of axial precession. Use of the Alphonsine tables apparently led to the use of precessing signs; Kepler gave a value of 794 years, which created 40 Conjunctions.[10][7]
Up to the end of the 16th century, despite the inaccuracies and some disagreement about the beginning of the cycle, the belief in the significance of such events generated a steady stream of publications. In 1583, the last Great Conjunction occurred in the watery trigon. It was widely supposed to herald apocalyptic changes. In 1586, a Papal Bull was issued against divinations. By 1603, public interest rapidly died, because nothing really significant had happened with the advent of a new Trigon.
Aspect Angles as Harmonic Ratios[11]
Symbol Harmonic Angle Name
1/1 360° (0°) Conjunction
1/2 180° Opposition
1/4 90° Square or Quartile or Quadrate
1/8 45° Octile or Semisquare
1/16 22.5° Sexdecile or Semioctile
3/16 67.5° Sesquioctile
5/16 112.5° Quinsemioctile
7/16 157.5° Sepsemioctile
1/3 120° Trine or Trinovile
1/6 60° Sextile or Semitrine
1/12 30° Duodecile or Semisextile
5/12 150° Quincunx or Quinduodecile or Inconjunct
1/24 15° Quattuorvigintile or Semiduodecile
5/24 75° Squile
7/24 105° Squine
11/24 165° Quindecile[12] or Contraquindecile
1/5 72° Quintile
2/5 144° Biquintile
D 1/10 36° Decile or Semiquintile
D3 3/10 108° Tridecile or Sesquiquintile
1/15 24° Quindecile or Trientquintile
2 2/15 48° Biquindecile
7 7/15 168° Sepquindecile
V 1/20 18° Vigintile or Semidecile
V3 3/20 54° Trivigintile or Sesquidecile
V7 7/20 126° Sepvigintile
V9 9/20 162° Nonvigintile
S 1/7 51.43° Septile
S2 2/7 102.86° Biseptile
S3 3/7 154.29° Triseptile
1/14 25.71° Semiseptile
3/14 77.14° Tresemiseptile or Sesquiseptile
5/14 128.57° Quinsemiseptile
N 1/9 40° Novile
N2 2/9 80° Binovile
1/18 20° Octodecile or Seminovile or Vigintile
1/36 10° Trigintasextile
U 1/11 32.83° Undecile or Undecim or Elftile[13]
U2 2/11 65.45° Biundecile or Bielftile
U3 3/11 98.18° Triundecile or Trielftile
U5 5/11 163.63° Quinundecile or Quinelftile
### Opposition
An Opposition (abbreviated as "Opp") is an angle of 180°, which is 1/2 of the 360° ecliptic. Depending on the involved planets, an orb of 5-10° is allowed.[14]
An Opposition is said to be the second most powerful aspect. It resembles a conjunction, but an Opposition is fundamentally relational and it is not unifying like a conjunction. Some astrologers say it is prone to exaggeration, because it has a dichotomous quality and an externalizing effect.
All important axes in astrology are essentially Oppositions. Therefore, at its most basic level, an Opposition may often signify a relationship that can be oppositional or complementary.[citation needed]
### Sextile
A Sextile (abbreviated as "SXt or Sex") is an angle of 60°, which is 1/6 of the 360° ecliptic or 1/2 a trine (120°). Depending on the involved planets, an orb of 3-4° is allowed. The symbol is the radii of a hexagon.
Traditionally, a Sextile is said to be similar in influence to a Trine, but less intense. It indicates compatibility and harmony, which eases communication between the two involved elements. It also provides opportunity. To gain its benefit, make an expended effort.[citation needed] See information below on the Semisextile.
### Square
A Square or Quartile (abbreviated as "SQr or Squ") is an angle of 90°, which is 1/4 of the 360° ecliptic or 1/2 an opposition (180°). Depending on the involved planets, an orb of 5-10° is allowed.[14]
Typically, with a Square, Trine or Sextile, the outer or superior planet has an effect on the inner or inferior planet. A Square creates a strong and usable tension. It may integrate between two different areas of your life or it may offer a turning point where an important decision needs to be made that involves an opportunity at a cost. Typically, if it involves Houses in different quadrants, it is the smallest major aspect.[citation needed]
### Trine
A Trine (abbreviated as "Tri") is an angle of 120°, which is 1/3 of the 360° ecliptic. Depending on the involved planets, an orb of 5-10° is allowed.
Traditionally, a Trine is extremely beneficial. It indicates harmony, ease and what is natural. A Trine may involve innate talent or ability. In transit, an event may emerge from a current or past situation in a natural way.[citation needed]
## Minor Aspects
### Semisextile
A Semisextile or Duodecile is an angle of 30°, which is 1/12 of the 360° ecliptic. An orb of ±1.2° is allowed. The symbol is 1/2 a Sextile (60°), which is the top radii of a hexagon; the internal angles are 60°.
Of the minor aspects, it may be the most often used, because it can be easily seen. It indicates a mental interaction between planets; it is more sensual than externally experienced.
With a Semisextile, energy gradually builds and potentiates. Consider other planets, Signs and Houses. A major aspect transit may be involved. To gain its benefit, make an effort.[citation needed]
### Quincunx
A Quincunx or Quinduodecile or Inconjunct is an angle of 150°, which is 5/12 of the 360° ecliptic. Depending on the involved planets, an orb of ±3.5° is allowed. The symbol is the bottom radii of a hexagon, which is 1/2 a Sextile (60°) less than a semicircle; the internal angles are 60°.
An interpretation of a Quincunx may mostly rely on the involved planets, Signs and Houses. Different areas of your life, that are not usually in communication, may come together. Planets may be far apart in different house quadrants. With a shift in perspective, clarity may reveal what was not previously seen. If a third planet, in a major aspect, triangulates a Qunicunx, the effect may be very obvious.
For Quincunx, keywords are karmic, mystery, unpredictable, imbalance, surreal, resourceful, creative, and humor.[citation needed]
A Quincunx does not offer equal divisions of a circle. It represents the 150° turn angles of a Dodecagram, {12/5}.
## Other Minor Aspects
### Quintile
Q A Quintile is an angle of 72°, which is 1/5 of the 360° ecliptic. An orb of ±1.2° is allowed.
A Quintile indicates a strong creative flow of energy between involved planets. Often, it involves an expressive opportunity to entertain or perform.[citation needed]
A Decile or Semiquintile is an angle of 36°, which is 1/10 of the 360° ecliptic.
Irreducible Multiples
bQ A Biquintile is an angle of 144°, which is 2/5 of the 360° ecliptic. The 144° angle is shared with the Pentagram.
### Septile
S A Septile is an angle of about 51.43°, which is 1/7 of the 360° ecliptic. An orb of ±1° is allowed.
A Septile is a mystical aspect that indicates a hidden flow of energy between the involved planets. Often, it involves spiritual or energetic sensitivity as well as an inner awareness of a more subtle, hidden level of reality.[citation needed]
Irreducible Multiples
S2 A Biseptile is an angle of 102.86°, which is 2/7 of the 360° ecliptic.
S3 A Triseptile is an angle of 154.29°, which is 3/7 of the 360° ecliptic.
### Octile
An Octile or Semisquare is an angle of 45°, which is 1/8 of the 360° ecliptic. An orb of ±2° is allowed. The symbol is drawn with a 60-90° angle; the original angle is 90°, which is 1/2 a Square.
An Octile is an important minor aspect. It indicates stimulating or challenging energy. It is similar to a Square, but less intense and more internal.
A Semisquare is considered to be a minor hard aspect, because it causes friction and prompts action to reduce that friction. For example, a Semisquare may occur if the Sun is 10° Aquarius and Venus is 25° Pisces. This may indicate unhappiness in love. Incompatibility may prompt action to reduce friction.[citation needed]
Irreducible Multiples
A Sesquiquadrate or Trioctile is an angle of 135°, which is 3/8 of the 360° ecliptic. An orb of ±1.5° is allowed.
A Sesquiquadrate is a harmonic of a Semisquare, which involves challenge. It is not an exact division of the 360° ecliptic. Therefore, when a Semisquare is present, it does not function as a standalone aspect, but as part of a series.[citation needed]
### Novile
N A Novile is an angle of 40°, which is 1/9 of the 360° ecliptic. An orb of ±1° is allowed.
A Novile indicates an energy of perfection and/or idealization.[citation needed]
Irreducible Multiples
N2 A Binovile is an angle of 80°, which is 2/9 of the 360° ecliptic.
N4 A Quadnovile is an angle of 160°, which is 4/9 of the 360° ecliptic.
### Decile
A Decile is an angle of 36°, which is 1/10 of the 360° ecliptic.
Irreducible Multiples
3 A Tridecile is an angle of 108°, which is 3/10 of the 360° ecliptic.
### Undecile
U An Undecile or Elftile[13] is an angle of 32.73°, which is 1/11 of the 360° ecliptic. An orb of ±1° is allowed.
Irreducible Multiples
U2 A Biundecile is an angle of 65.45°, which is 2/11 of the 360° ecliptic.
U3 A Triundecile is an angle of 98.18°, which is 3/11 of the 360° ecliptic.
U4 A Quadundecile is an angle of 130.91°, which is 4/11 of the 360° ecliptic.
U5 A Quinundecile is an angle of 163.63°, which is 5/11 of the 360° ecliptic.
### Semioctile
A Semioctile or Sexdecile is an angle of 22.5°, which is 1/16 of the 360° ecliptic. An orb of ±0.75° is allowed.
A Semioctile is part of the square family. It is considered to be a more minor version of the Semisquare, which triggers challenge. Its harmonic aspects are 45°, 67.5°, 90°, 112.5°, 135°, 157.5° and 180°. It was discovered by Uranian astrologers.
Irreducible Multiples
A Sesquioctile or Bisexdecile is an angle of 67.5°, which is 3/16 of the 360° ecliptic.
A Quinsemioctile or Quinsexdecile is an angle of 112.5°, which is 5/16 of the 360° ecliptic.
A Sepsemioctile or Sepsexdecile is an angle of 157.5°, which is 7/16 of the 360° ecliptic.
## Declinations
The Parallel and Contraparallel or Antiparallel are two other aspects which refer to degrees of declination above or below the Celestial Equator. They are not widely used by astrologers.
### Parallel
A Parallel may be similar to a Semisquare or Quincunx, because it is not clearly seen. It represents an opportunity for perspective and communication between energies that requires some work to be made conscious. An orb of the same degree ±1° with a 12-minute arc is allowed.
### Contraparallel
A Contraparallel may be similar to the Parallel. Some astrologers that use the Parallel do not consider the Contraparallel to be an aspect. An orb of the opposite degree ±1° with a 12-minute arc is allowed.
## References
1. ^ "The Aspects". Retrieved 2016-10-30.
2. ^ Claudius Ptolemy, Harmonics, book III, Chapter 9
3. ^ Buckwalter, Eleanor. "Depth analysis of the Astrological Aspects". Retrieved 2016-10-30.
4. ^ De Magnis Coniunctionibus was translated in the 12th Century, a modern edition-translation by K. Yamamoto and Ch. Burnett, Leiden, 2000
5. ^ The Opus Majus of Roger Bacon, ed. J. H. Bridges, Oxford:Clarendon Press, 1897, Vol. I, p. 263.
6. ^ De Concordia Astronomice Veritatis et Narrationis Historice (1414) [1]
7. ^ a b Woody K., Dante and the Doctrine of the Great Conjunctions, Dante Studies, with the Annual Report of the Dante Society, No. 95 (1977), pp. 119–134
8. ^ Aston M., The Fiery Trigon Conjunction: An Elizabethan Astrological Prediction, Isis, Vol. 61, No. 2 (Summer, 1970), pp. 158–187
9. ^ If J and P designate the periods of Jupiter and Saturn then the return takes ${\displaystyle 1/(5/S-2/J)}$ which comes to 883.15 years, but to be a whole number of Conjunction intervals it must be sometimes 913 years and sometimes 854. See Etz.
10. ^ a b c Etz D., (2000), Conjunctions of Jupiter and Saturn, Journal of the Royal Astronomical Society of Canada, Vol. 94, p.174
11. ^ Suignard, Michel (2017-01-24). "L2/17-020R2: Feedback on Extra Aspect Symbols for Astrology" (PDF).
12. ^ Ricki Reeves, 2001, The Quindecile: The Astrology & Psychology of Obsession
13. ^ a b [2] The German word for 11 is elf.
14. ^ a b Orbs used by Liz Greene, see Astrodienst
|
2022-01-18 09:13:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6497465372085571, "perplexity": 6796.716002744951}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300805.79/warc/CC-MAIN-20220118062411-20220118092411-00012.warc.gz"}
|
https://www.groundai.com/project/nielsen-equivalence-in-gupta-sidki-groups/
|
1 Introduction
Nielsen equivalence in Gupta-Sidki groups
Aglaia Myropolska***The author acknowledges the support of the Swiss National Science Foundation, grant 200021_144323.
Abstract
For a group generated by elements, the Nielsen equivalence classes are defined as orbits of the action of , the automorphism group of the free group of rank , on the set of generating -tuples of .
Let be prime and the Gupta-Sidki -group. We prove that there are infinitely many Nielsen equivalence classes on generating pairs of .
## 1. Introduction
Let be a finitely generated group. The rank of a group is the minimal number of generators of . Fix and let be the set of epimorphisms from the free group of rank to .
Consider the natural action of the group on : for and for we define
ϕ(τ,σ)=σ⋅ϕ⋅τ−1.
The orbits of this action are called -systems (systems of transitivity). B.H. Neumann and H. Neumann, motivated by the study of presentations of finite groups, introduced -systems in [NN51]. One of the main conjectures in this area, sometimes attributed to Wiegold, is that for every finite simple group there is only one system of transitivity when The classification of finite simple groups implies that every finite simple group can be generated by elements.. It is also not known whether there is only one orbit when the action of with is only considered.
It was proved by Nielsen [Nie18] that is generated by the following automorphisms, where is the basis of :
R±ij(x1,…,xi,…,xj,…,xk) =(x1,…,xix±1j,…,xj,…,xk), Ij(x1,…,xj,…,xk) =(x1,…,x−1j,…,xk),
where , . The transformations above are called elementary Nielsen moves.
Observe that there is a one-to-one correspondence between and the set of generating -tuples of . The action of on the generating -tuple is done by applying sequences of elementary Nielsen moves to by precomposition. For example, if then the set of generating -tuples of coincides with and the elementary Nielsen moves induce elementary row operations on the matrices. It follows that the action of on is transitive.
The orbits of the action are called Nielsen (equivalence) classes on generating -tuples of . In recent years the Nielsen equivalence classes became of particular interest as they appear as connected components of the Product Replacement Graph, whose set of vertices coincides with the set and the edges correspond to elementary Nielsen moves (see [Eva07, Lub11, Pak01] and Section 3 for more on this topic).
Before studying further the Nielsen equivalence, we point out its relation to the famous Andrews-Curtis conjecture [AC65]. Elementary Nielsen moves together with the transformations
ACi,w(x1,...,xi,...,xk)=(x1,...,w−1xiw,...,xk)
where and , form the set of elementary Andrews-Curtis moves. Elementary Andrews-Curtis moves transform normally generating sets (sets which generate as a normal subgroup) into normally generating sets.
The Andrews-Curtis conjecture asserts that, for a free group of rank and a free basis of , any normally generating -tuple of can be transformed into by a sequence of elementary Andrews-Curtis moves.
We say that two normally generating -tuples of are Andrews-Curtis equivalent if one is obtained from the other by a finite chain of elementary Andrews-Curtis moves. The Andrews-Curtis equivalence corresponds to the actions of and of on normally generating -tuples of . More generally, for a finitely generated group and , the above actions can be defined on the set of normally generating -tuples of by precomposition. The orbits of this action are called the Andrews-Curtis equivalence classes in . The analysis of Andrews-Curtis equivalence for arbitrary finitely generated groups has its own importance to analyse potential counter-examples to the conjecture. A possible way to disprove the conjecture would be to find two normally generating -tuples of such that their images in some finitely generated group are not Andrews-Curtis equivalent. The Andrews-Curtis equivalence was studied for various classes of groups, for instance, for finite groups in [BKM03, BLM05], for free solvable and free nilpotent groups in [Mya84], for the class of finitely generated groups of which every maximal subgroup is normal in [Myr13]. The class includes finitely generated nilpotent groups; moreover all Grigorchuk groups and GGS groups, e.g. Gupta-Sidki -groups, belong to by [Per00, Per05]. In [AKT13] the result for GGS groups was generalized: the authors proved that all multi-edge spinal torsion groups acting on the regular -ary rooted tree, with odd prime, belong to .
Observe that, for a group in , a normally generating set of is, in fact, a generating set. Therefore, for groups in the partition (of the set of generating -tuples) into Nielsen equivalence classes is a refinement of the partition into Andrews-Curtis classes. We further describe what is known about Nielsen equivalence for some groups in the class .
The most well-understood classification of Nielsen equivalence classes is known for finitely generated abelian groups (see [NN51, DG99, Oan11]). Namely, if is a finitely generated abelian group then the action of on is transitive when . Moreover, if then the number of Nielsen equivalence classes is finite and depends on the primary decomposition of (see Theorem 3.2 for details). It also follows from the latter papers that for any finitely generated abelian group there is only one -system for any .
For a finitely generated nilpotent group the action of is transitive on when [Eva93]. However when the unicity of Nielsen equivalence class generally breaks down. For instance, Dunwoody [Dun63] showed that to every pair of integers and there exists a finite nilpotent group of rank and nilpotency class for which there are at least -systems.
As a generalization of finite nilpotent groups, we consider the family of Gupta-Sidki -groups where is odd prime. The group is a -group of rank acting on the rooted -ary tree, every quotient of which is finite and, therefore, nilpotent (Gupta-Sidki -groups were defined in [GS83b]; the reader can find the definition in Section ). It was shown by Pervova [Per05] that the groups belong to the class . This property was the main ingredient in [Myr13] for proving that there is only one Nielsen equivalence class for for . Moreover, for a group belonging to the class , it is relevant to analyse Nielsen equivalence classes in the quotient , where is the Frattini subgroup of . Namely, if there are two generating -tuples of which are not Nielsen equivalent, then their preimages in are generating -tuples of which also are not Nielsen equivalent (see the section on the class in [Myr13] for details). Using this argument and also the fact that for there are, by Theorem 3.2, Nielsen classes on generating pairs of the quotient , we conclude that there are at least Nielsen classes in for .
For the question on the transitivity of the action of on is more subtle. In this paper we show in particular that, although there is only one Nielsen class on generating pairs of , the action of is not transitive on . A natural question then is how many orbits this action has.
There are numerous examples of groups with infinitely many Nielsen classes when . These groups can be found among fundamental groups of certain knots ([Zie77, HW11]), one-relator groups ([Bru76]), relatively free polynilpotent groups (see [MN13] and references therein) and many others. We show that for the Gupta-Sidki -group, with prime, there are infinitely many Nielsen classes when . To the author’s knowledge this is the first known examples of torsion groups that have this property.
###### Theorem 1.1.
Let be prime and the Gupta-Sidki -group. Then there are infinitely many Nielsen equivalence classes on generating pairs of .
The Gupta-Sidki -group being a subgroup of , the group of automorphisms of the regular -ary rooted tree , has natural quotients by , the level stabilizer subgroups. These quotients are finite nilpotent -generated groups with growing nilpotency class. The latter is true since the limit of these quotients in the space of marked -generated groups is the Gupta-Sidki -group itself, which is not finitely presentable [Sid87]. In the last part of the paper we show that for each the quotient group of the Gupta-Sidki -group has the property that the action is not transitive. Note that there is only one Nielsen equivalence class in , the abelianization of each . It would be interesting to realize whether the number of Nielsen classes grows with but this for the moment remains an open question. An affirmative answer on this question, in particular, would imply that there were infinitely many Nielsen equivalence classes in . Notice, however, that the proof of Theorem 1.1 does not rely on Proposition 1.2.
###### Proposition 1.2.
Let be the Gupta-Sidki -group and the level stabilizer subgroups of . Set . Then the action is not transitive for any .
Acknowledgement. The author would like to thank Pierre de la Harpe, Tatiana Nagnibeda and Said Sidki for stimulating discussions on this work, and Laurent Bartholdi for valuable suggestions during the conference “Growth in Groups” in Le Louverain.
## 2. Preliminaries on groups acting on rooted trees
Let with be a finite set. The vertex set of the rooted tree is the set of finite sequences over ; two sequences are connected by an edge when one can be obtained from the other by right-adjunction of a letter in . The top node (the root) is the empty sequence , and the children of are all the for . A map is an automorphism of the tree if it is bijective and it preserves the root and adjacency of the vertices. An example of an automorphism of is the rooted automorphism , defined as follows: for the permutation , set . Geometrically it can be viewed as the permutation of subtrees just below the root . Denote by the group of automorphisms of the tree .
Let . Denote by the subgroup of consisting of the automorphisms that fix the sequence , i.e.
StG(σ)={g∈G∣g(σ)=σ}.
And denote by the subgroup of consisting of the automorphisms that fix all sequences of length , i.e.
StG(n)=∩σ∈XnStG(σ).
Notice an obvious inclusion . Moreover, observe that for any the subgroups are normal and of finite index in . We therefore have a natural epimorphism between finite groups
(1) G/StG(n+1)→G/StG(n),
for any .
The examples of groups acting on rooted trees include groups of intermediate growth, such as the Grigorchuk group [Gri80] and the Gupta-Sidki -groups [GS83a]. We define the latter family of groups below.
Fix prime and . Let be the cyclic permutation on . Let belong to and belong to . Denote by the rooted automorphism of defined by
x(sσ)=π(s)σ.
Denote by the automorphism of defined by
y(sσ)=⎧⎪ ⎪ ⎪⎨⎪ ⎪ ⎪⎩sx(σ)if s=1sx−1(σ)if s=2sy(σ)if s=psσotherwise.
The Gupta-Sidki -group is the group of automorphisms of the tree generated by and and we will write
Gp=⟨x,y⟩.
To shorten the notation for the element we will simply write
y=(x,x−1,1,…,1,y).
More generally, for any element we can write for some and .
We summarize here some facts on the Gupta-Sidki -group which will be used in the following section.
1. [GS83b] is just-infinite, i.e. every proper quotient of is finite.
2. [Per05] All maximal subgroups of are normal.
3. [Per02] The abelianization is isomorphic to .
## 3. Nielsen equivalence in Gupta-Sidki p-groups
For a finitely generated group and , we define the Nielsen graph333Also called the Extended Product Replacement Graph. as follows:
• the set of vertices consists of generating -tuples, i.e.
VNk(G)={(g1,...,gk)∈Gk∣⟨g1,...,gk⟩=G};
• two vertices are connected by an edge if one of them is obtained from the other by an elementary Nielsen move.
Observe that the graph is connected if and only if the action of on is transitive.
Recall, that for a finitely generated group the Frattini subgroup is defined as the intersection of all maximal subgroups of . Equivalently, the Frattini subgroup of contains all the non-generators, i.e. the elements which can be removed from any generating set. The latter implies the following lemma.
###### Lemma 3.1 ([Eva93]).
Let be a group generated by and let . Then .
As it was explained in the Introduction, for groups in class (the class of finitely generated groups all maximal subgroups of which are normal), the number of connected components of is bounded below by the number of connected components of . Since all maximal subgroups of , the Gupta-Sidki -group, are normal, it follows that the quotient is abelian. Moreover, any generating set of the quotient can be lifted up to the generating set of [Myr13]. Therefore is a quotient of of rank ; we deduce that . Using the following theorem, we find the number of connected components of the Nielsen graph .
###### Theorem 3.2 ([Nn51, Dg99, Oan11]).
Let be a finitely generated abelian group with the primary decomposition with and . Then and
1. is connected if .
2. if , i.e. , then is connected;
3. otherwise if or then is connected and if then has connected components,
where is the Euler function (the number of positive integers less than which are coprime with ).
It follows from Theorem 3.2 and the arguments before that for the Nielsen graph has at least connected components.
To prove Theorem 1.1 we use an observation by Nielsen (sometimes also attributed to Higman, see lemma 3.3) as well as an analysis on conjugacy classes in the Gupta-Sidki -group.
###### Lemma 3.3 (Nielsen).
Let and be two Nielsen equivalent generating pairs of a group . Then the commutator is conjugate either to or to .
The proof of this lemma is a straightforward calculation of commutators of the pairs obtained from by the elementary Nielsen moves.
In order to show that two elements are not conjugate in , the Gupta-Sidki -group, sometimes we use the finite quotients by the -th level stabilizers. Consider a natural epimorphism
π:G3→G3/StG3(4).
The finite quotient can be seen as a subgroup of with
π(x)=(1,28,55)(2,29,56)…(27,54,81)
and
π(y)=(1,10,19)…(9,18,27)(28,46,37)…(36,54,45)(55,58,61)⋅
(56,59,62)(57,60,63)(64,70,67)(65,71,68)(66,72,69)(73,74,75)(76,78,77).
Recall that two elements are conjugate in the symmetric group if and only if their cycle types are the same. Therefore if for two elements their images and have different cycle types in then, in particular, they are not conjugate in . Below all computations in were done using GAP.
###### Example 3.4.
The elements and are not conjugate in . Indeed,
and its cycle type differs from the one of .
Let be the Gupta-Sidki -group. Set and for all set . The fact that follows from [GS84].
###### Proposition 3.5.
The elements , and are not pairwise conjugate in for any such that .
###### Proof.
We prove the following two claims in order to conclude the proposition:
###### Claim 1.
is not conjugate to for any .
###### Claim 2.
and are not conjugate for and .
The claims will be proved by contradiction. We compute that and .
Proof of Claim . Assume that and are conjugate, then there exists for some integer , such that . Observe that because is not conjugate neither to nor to . Moreover is not conjugate to in therefore can be conjugate only to . We will prove that it is not the case. For this it is enough to show that and are not conjugate in . We will show it by induction assuming that
(∗) yzn and y are not conjugate in G3 for any% n≥1
and then will show that (*) is indeed the case.
Suppose that and are conjugate in then there exists for some integer such that .
• If then and it follows that .
• If then and it follows that .
• If then and it follows that .
By assumption (*) elements and are not conjugate in and we deduce that is not conjugate to in modulo assumption (*).
Proof of the assumption (*): and are not conjugate in for any .
1. The assumption holds for . To see this, look at the action of and on the 4th level of the tree, see Example 3.4.
2. Suppose (*) is true for .
3. Consider and suppose it is conjugate to . Then there exists with such that . Since is not conjugate neither to nor to in then . Therefore . We obtain the contradiction with the step of induction.
Proof of Claim . We will prove Claim modulo Assumption (*) and (**) below and then in the end prove that both assumptions indeed hold.
Assumption (*): for any such that the elements and are not conjugate in .
Assumption (**): for any the element is not conjugate to or in .
We prove Claim by contradiction. Suppose that there exists such that
[x,yzk]=g−1[x,yzj]±1g
or equivalently
(2) (z−1k−1y−1x,x,xyzk−1)=(g−11,g−12,g−13)x−i(z−1j−1y−1x,x,xyzj−1)±1xi(g1,g2,g3).
Observe that is not conjugate to , and . To see this, look at the quotient and notice that the images of and are not conjugate in . Therefore can not be conjugate to . Moreover, it follows from Assumption (**) that in equation (2).
To obtain the contradiction it is sufficient to show that is not conjugate to . Suppose they are conjugate, then there exists with such that
x(x,x−1yzk−2)=(g−11,g−12,g−13)x−ix(x,x−1,yzj−2)xi(g1,g2,g3).
• If then and it follows that .
• If then and it follows that .
• If then and it follows that .
By Assumption (*), elements and are not conjugate in and we deduce that and are not conjugate in modulo assumptions (*) and (**).
Proof of the assumption (*) Without loss of generality suppose that . Suppose and are conjugate. Then there exists with such that
(x,x−1,yzk−1)=(g1,g2,g3)−1x−i(x,x−1,yzj−1)xi(g1,g2,g3).
Since is not conjugate to or to we conclude that and hence and are conjugate. Continuing in the same way, we deduce that the elements and are conjugate. We obtain a contradiction since is not conjugate to or to (to see this it is enough to look at the action of these elements on the 4th level of the tree) or to .
Proof of the assumption (**) To see that is not conjugate to or , it is enough to look at the action of these elements on the third level of the tree and to see that they have different cycle types, hence they are not conjugate in the quotient . And for , the action of on the third level is trivial therefore it is enough to look at the action of , and on the third level to see that they have different cycle types and therefore not conjugate in . ∎
Let be the Gupta-Sidki -group for prime. Set and for set . The fact that follows from [GS84].
###### Proposition 3.6.
For any and the elements and are not conjugate in .
###### Proof.
By contradiction, suppose that there exists an element
g=xi(g1,…,gp)\leavevmode\nobreak ∈\leavevmode\nobreak Gp
with such that
[x,yzk]=g−1[x,yzj]±1g
or, in other words,
(3) (z−1k−1y−1x,xp−2,x,1,…,1,yzk−1)=(g−11,…,g−1p)x−i(z−1j−1y−1x,xp−2,x,1,…,1,yzj−1)±1xi(g1,…,gp)
Suppose . Observe that is not conjugate to , , , , , and to . To see this, look at the quotient , and notice that the image of is not conjugate to the images of the elements above. Therefore must be conjugate to , in other words there exists with such that
x=(h1,…,hp)−1x−m⋅(a1,…,ap)x⋅xm(h1,…,hp),
where , , and otherwise.
It follows that the following system of equations holds:
where is the th power of the permutation and, for each , denotes the image of under .
After solving the system one obtains that
h−1paπm+1(1)aπm+1(2)…aπm+1(p)hp=1,
which gives us a contradiction to .
In view of equation and that , in order to obtain a contradiction to the initial assumption that is conjugate to , it is enough to show that is not conjugate to . Without loss of generality suppose that .
Suppose by contradiction that is conjugate to , i.e. there exists with such that
(x,x−1,1,…,1,yzk−2)=(h1,…,hp)−1x−l(x,x−1,1,…,1,yzj−2)xl(h1,…,hp).
Observe that is not conjugate to , and . Hence and therefore is conjugate to . We repeat the same arguments times to conclude that and are conjugate. Observe that is not conjugate to , , , and . The contradiction then follows and we deduce that is not conjugate to which concludes the proof.
We are now able to deduce that there are infinitely many Nielsen equivalence classes on generating pairs of the Gupta-Sidki -group for any prime.
###### Proof of Theorem 1.1.
Fix prime. Let and for all let . It follows from Theorem [GS84] that . Since and then by Lemma 3.1 we deduce that . We conclude by Lemma 3.3, Proposition 3.5 and Proposition 3.6 that there are infinitely many orbits of the action . ∎
###### Proof of Proposition 1.2.
First, we show that the graph is not connected. Consider two pairs and in . Since and , it follows that is also a generating pair of by Lemma 3.1.
Denote the images of and in the finite quotient by and . Clearly the pairs and are generating. If they are Nielsen equivalent then by Nielsen criterion (Lemma 3.3) their commutators and must be conjugate in and, in particular, their cycle types must be the same. We will obtain the contradiction with the latter.
We calculate the commutators respectively :
|
2021-02-25 07:04:45
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9644736051559448, "perplexity": 403.5372759893599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350846.9/warc/CC-MAIN-20210225065836-20210225095836-00164.warc.gz"}
|
https://www.tutorialspoint.com/performing-an-opening-operation-on-an-image-using-opencv
|
# Performing an opening operation on an image using OpenCV
OpenCVPythonServer Side ProgrammingProgramming
In this program, we will perform the opening operation on image. Opening removes small objects from the foreground of an image, placing them in the background. This technique can also be used to find specific shapes in an image. Opening can be called erosion followed by dilation. The function we will use for this task is cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel).
## Algorithm
Step 1: Import cv2 and numpy.
Step 3: Define the kernel.
Step 4: Pass the image and kernel to the cv2.morphologyex() function.
Step 4: Display the output.
## Example Code
import cv2
import numpy as np
cv2.imshow('Opening', image)
|
2021-06-20 16:03:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3585206866264343, "perplexity": 3543.5246239504527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488249738.50/warc/CC-MAIN-20210620144819-20210620174819-00467.warc.gz"}
|
https://codegolf.stackexchange.com/questions/89094/gauss-to-eisenstein?noredirect=1
|
# Gauss to Eisenstein
Given a Gaussian integer $$\a+bi\$$ where $$\a\$$,$$\b\$$ are integers and $$\i = \exp\left(\pi i/2\right)\$$ is the imaginary unit, return the closest (w.r.t to the Euclidean distance) Eisenstein integer $$\k+l\omega\$$ where $$\k\$$,$$\l\$$ are integers and $$\\omega = \exp(2\pi i/3) = (-1+i\sqrt{3})/2\$$.
### Background
It is probably quite obvious that every Gaussian integer can uniquely be written as $$\a+bi\$$ with $$\a\$$,$$\b\$$ integers. It is not so obvious but nonetheless true: Any Eisenstein integer can uniquely be written as $$\k+l\omega\$$ with $$\k\$$,$$\l\$$ integers. They both form a $$\\mathbb{Z}\$$-module within the complex numbers, and are both p-th cyclotomic integers for $$\p=2\$$ or $$\3\$$ respectively. Note that $$\3+2i \neq 3+2\omega\$$
Source: commons.wikimedia.org
### Details
• In case the given complex number has two or three closest points, any of those can be returned.
• The complex number is given in rectangular coordinates (basis $$\(1,i)\$$), but other than that in any convenient format like (A,B) or A+Bi or A+B*1j etc.
• The Eisenstein integer has to be returned as coordinates of the basis $$\(1,\omega)\$$ but other than that in any convenient format like (K,L) or K+Lω or K+L*1ω etc.
### Examples
All real integers should obviously be mapped to the real integers again.
6,14 -> 14,16
7,16 -> 16,18
-18,-2 ->-19,-2
-2, 2 -> -1, 2
-1, 3 -> 1, 4
• Nice, I don't remember seeing a hexagonal grid since codegolf.stackexchange.com/q/70017/17602
– Neil
Aug 7 '16 at 20:22
• Related
– Lynn
Aug 7 '16 at 20:37
• Aug 7 '16 at 20:48
• You should also include test cases when a and b have opposite signs. Aug 5 '19 at 10:10
• @SmileAndNod Added one. But one could also just use the symmetry with respect to the real axis and just replace (1,w) with (-1,1+w). And I also renamed this section to Examples to make it clear that it is not sufficient to just provide the right results for these cases. Aug 5 '19 at 12:13
# APL (Dyalog Extended), 16 bytesSBCS
⎕0+⌈3÷⍨1 2×⌊⎕×√3
Try it online!
A full program that takes y then x from standard input and prints a 2-element vector of integers.
### How it works: the math
First of all, note that any Gaussian integer will be placed on the vertical diagonal of a diamond, with the point $$\Z\$$ placed at $$\(x,\sqrt3y)\$$ for some integer $$\x,y\$$.
+ W
/|\
/ | \
/ | \
/ + X \
/ | \
+-----|-----+V
\ | /
\ + Y /
\ | /
\ | /
\|/
+ Z
In the figure, $$\\overline{WZ}=\sqrt3\$$ and $$\\overline{WX}=\overline{XY}=\overline{YZ}=\overline{XV}=\overline{YV}=\frac{1}{\sqrt3}\$$. So, given the vertical position of a point, we can identify the nearest Eisenstein point as follows:
$$\text{Given a point }P \in \overline{WZ}, \\ \left\{ \begin{array}{c} P \in \overline{WX} \implies \text{the nearest point is } W \\ P \in \overline{XY} \implies \text{the nearest point is } V \\ P \in \overline{YZ} \implies \text{the nearest point is } Z \end{array} \right.$$
Given a Gaussian point $$\P\$$, we first determine which diamond $$\P\$$ belongs to, measured by how many diamonds (denoted $$\h\$$) $$\Z\$$ is away from the $$\x\$$-axis.
$$h = \lfloor P.y \div \sqrt3 \rfloor$$
Then the Eisenstein coordinates of $$\Z\$$ are
$$Z.x_E = P.x+h, \quad Z.y_E = 2h$$
Now, we determine which of the segments $$\\overline{WX},\overline{XY},\overline{YZ}\$$ $$\P\$$ belongs to. For this, we can calculate the indicator $$\w\$$ as follows:
$$w = \lfloor P.y \times \sqrt3 \rfloor \% 3$$
Then the cases $$\ w = 0, 1, 2 \$$ correspond to $$\\overline{YZ},\overline{XY},\overline{WX}\$$ respectively. Finally, the nearest Eisenstein point of $$\P\$$ (which is one of $$\Z\$$, $$\V\$$, or $$\X\$$) can be calculated as:
$$P_E.x_E = P.x+h+\lceil \frac{w}2 \rceil, \quad P_E.y_E = 2h+w$$
Using the identities for $$\h\$$ and $$\w\$$, we can further simplify to:
$$y' = \lfloor P.y \times \sqrt3 \rfloor, \quad P_E.x_E = P.x+\lceil y' \div 3 \rceil, \quad P_E.y_E = \lceil 2y' \div 3 \rceil$$
### How it works: the code
⎕0+⌈3÷⍨1 2×⌊⎕×√3
⌊⎕×√3 ⍝ Take the first input (P.y) and calculate y'
⌈3÷⍨1 2× ⍝ Calculate [ceil(y'/3), ceil(2y'/3)]
⎕0+ ⍝ Take the second input(P.x) and calculate [P.x+ceil(y'/3), ceil(2y'/3)]
## JavaScript (ES6), 112 bytes
(a,b,l=b/Math.pow(.75,.5),k=a+l/2,f=Math.floor,x=k-(k=f(k)),y=l-(l=f(l)),z=x+y>1)=>[k+(y+y+z>x+1),l+(x+x+z>y+1)]
ES7 can obviously trim 9 bytes. Explanation: k and l initially represent the floating-point solution to k+ωl=a+ib. However, the coordinates needed to be rounded to the nearest integer by Euclidean distance. I therefore take the floor of k and l, then perform some tests on the fractional parts to determine whether incrementing them would result in a nearer point to a+ib.
• I guess your tests on the fractional parts are taking advantage of the facts that x is always .2887 or 0.577and y is always either .1547 or .577 Aug 9 '19 at 0:08
• @SmileAndNod 3 years ago? I really can't remember, but I don't think it's that complicated, I'm just working out which is the nearest corner of the diamond.
– Neil
Aug 9 '19 at 9:05
# MATL, 3938 35 bytes
t|Ekt_w&:2Z^tl2jYP3/*Zeh*!sbw6#YkY)
Input format is 6 + 14*1j (space is optional). Output format is 14 16.
Try it online!
### Explanation
The code first takes the input as a complex number. It then generates a big enough hexagonal grid in the complex plane, finds the point that is closest to the input, and returns its Eisenstein "coordinates".
t % Take input implicitly. This is the Gauss number, say A. Duplicate
|Ek % Absolute value times two, rounded down
t_ % Duplicate and negate
w&: % Range. This is one axis of Eisenstein coordinates. This will generate
% the hexagonal grid big enough
2Z^ % Cartesian power with exponent 2. This gives 2-col 2D array, say B
t % Duplicate
l % Push 1
2jYP3/* % Push 2*j*pi/3
Ze % Exponential
h % Concatenate. Gives [1, exp(2*j*pi/3)]
* % Multiply by B, with broadcast.
!s % Sum of each row. This is the hexagonal grid as a flattened array, say C
bw % Bubble up, swap. Stack contains now, bottom to top: B, A, C
6#Yk % Index of number in C that is closest to A
Y) % Use as row index into B. Implicitly display
i=fromIntegral;r=[floor,ceiling];a!k=(i a-k)**2;c(a,b)|l<-2*i b/sqrt 3,k<-i a+l/2=snd$minimum[(x k!k+y l!l,(x k,y l))|x<-r,y<-r] Try it online! For input Gaussian integer (a,b), convert it into Eisenstein coordinates, floor and ceil both components to get four candidates for closest Eisenstein integer, find the one with minimal distance and return it. # Tcl, 124116 106 bytes {{a b f\ int(floor(2*$b/3**.5)) {l "[expr $f+(1-$f%2<($b-$f)*3**.5)]"}} {subst [expr $l+$a-($f+1)/2]\$l}}
Try it online!
This is somewhat inspired by the three-year old post from @Neil
The floor function returns the corner of the rhombus whose edges are the vectors 1 and $$\\omega\$$. With respect to this rhombus, the Gaussian integer lies on the perpendicular bi-sector of either the top (if l is even) or bottom (if l is odd). This is important because it means that either the lower left corner or the upper right corner will be an acceptable solution. I compute k for the lower left corner, and do one test to see if the Gaussian integer lies above or below the diagonal separating the two corners; I add 1 to k when above the diagonal, and I do likewise for l.
Saved 10 bytes by using the "sign of the cross-product v x d of the diagonal d with the vector v joining the lower right corner and (a,b)" as the test for which side of the diagonal the point lies.
# Burlesque, 24 bytes
pe@3r@2././J2./x/.+CL)R_
Try it online!
Pretty sure this can be shorter. Input read as a b
pe # Parse input to two ints
@3r@2./ # sqrt(3)/2
./ # Divide b by sqrt(3)/2
J2./ # Duplicate and divide by 2
x/.+ # swap stack around and add to a
CL # Collect the stack to a list
)R_ # Round to ints
# 05AB1E, 13 bytes
Port of Bubbler's APL answer
3t*ïx‚3/îR΂+
Try it online!
Input and output are both y first, x second.
|
2021-09-26 09:37:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 49, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7172953486442566, "perplexity": 1465.8266313839965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057857.27/warc/CC-MAIN-20210926083818-20210926113818-00022.warc.gz"}
|
https://socratic.org/questions/57f3f834b72cff334c32c204
|
# What factors influence the solubility of a salt with respect to the given diagram?
Oct 6, 2016
$\text{Option 4}$ is the best candidate; i.e. (b) is the strongest electrolyte.
#### Explanation:
$\text{Option 1}$ is contraindicated. No data are conveyed with respect to solubility.
$\text{Option 2}$ is contraindicated, by the very terms of the question. Why?
$\text{Option 3}$ is again contraindicated, by the very terms of the question. Why?
And this leaves $\text{Option 4}$ as the only horse running. We started with $A {Y}_{2}$. In solution this speciated to give discrete particles of (arguably) ${A}^{2 +}$ and $2 \times {Y}^{-}$; of course the charge could have been the other way round.
In solution $\text{b}$ there was no association between cation and anion. When a strong electrolyte dissolves in water it speciates to give its constituent ions. Solutions $\text{a}$ and $\text{c}$ still showed associated ions. Why?
Real examples of $\text{Option 4}$ could include $C a C {l}_{2}$, and $N {a}_{2} S {O}_{4}$, i.e. consider the following equation representing their dissolution, are as follows:
$C a C {l}_{2} \left(s\right) \rightarrow C {a}^{2 +} + 2 C {l}^{-}$
$N {a}_{2} S {O}_{4} \left(s\right) \rightarrow 2 N {a}^{+} + S {O}_{4}^{2 -}$
This is a good and hard question designed to make you really think. I certainly had to do so!
|
2021-06-24 02:47:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 16, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6468799710273743, "perplexity": 1372.472650984509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488550571.96/warc/CC-MAIN-20210624015641-20210624045641-00348.warc.gz"}
|
http://recurrence-plot.tk/invariants.php
|
# Recurrence Plots and Cross Recurrence Plots
## Dynamical Invariants Derived from Recurrence Plots
### Correlation entropy and correlation dimension
The lengths of diagonal lines in an RP are directly related to the ratio of determinism or predictability inherent to the system. Suppose that the states at times $$i$$ and $$j$$ are neighbouring, i.e. $$R_{i,j}=1$$. If the system behaves predictably, similar situations will lead to a similar future, i.e. the probability for $$R_{i+1,j+1}=1$$. is high. For perfectly predictable systems, this leads to infinitely long diagonal lines (like in the RP of the sine function). In contrast, if the system is stochastic, the probability for $$R_{i+1,j+1}=1$$. will be small and we only find single points or short lines. If the system is chaotic, initially neighbouring states will diverge exponentially. The faster the divergence, i.e. the higher the Lyapunov exponent, the shorter the diagonals.
At first, we recall the definition of the second order Rényi entropy (correlation entropy). Let us consider a trajectory $$\vec{x}(t)$$ in a bounded $$d$$-dimensional phase space; the state of the system is measured at time intervals $$\tau$$. Let $${1,2,\ldots,M(\varepsilon)}$$ be a partition of the attractor in boxes of size $$\varepsilon$$. Then $$p(i_1,\ldots,i_l)$$ denotes the joint probability that $$\vec{x}(t =1\tau)$$ is in the box $$i_1$$, $$\vec{x}(t =2\tau)$$ is in the box $$i_2$$, …, and $$\vec{x}(t =l\tau)$$ is in the box $$i_l$$. The 2nd order Rényi entropy is then defined as (Renyi, 1970; Grassberger & Procaccia, 1983) $$K_2 = - \lim_{\tau \rightarrow 0} \ \lim_{\varepsilon \rightarrow 0} \ \lim_{l \rightarrow \infty} \frac{1}{l \tau} \ln \sum_{i_1,\ldots,i_l} p^2(i_1,\ldots,i_l).$$ Roughly speaking, this measure is directly related to the number of possible trajectories that the system can take for $$l$$ time steps in the future. If the system is perfectly deterministic in the classical sense, there will be only one possibility for the trajectory to evolve and hence $$K_2=0$$. In contrast, for purely stochastic systems the number of possible future trajectories increases to infinity so fast that $$K_2 \rightarrow \infty$$. Chaotic systems are characterised by a finite value of $$K_2$$, as they belong to a category between pure deterministic and pure stochastic systems. Also in this case the number of possible trajectories diverges but not as fast as in the stochastic case. The inverse of $$K_2=0$$ has units of time and can be interpreted as the mean prediction time of the system.
The sum of the probabilities $$p(i_1,\ldots,i_l)$$ can be approximated by the probability $$p_t(l)$$ of finding a sequence of $$l$$ points in boxes of size $$\varepsilon$$ centred at the points $$\vec{x}(t), \ldots, \vec{x}(t+(l-1))$$: $$\frac{1}{N}\sum_{t=1}^N p_{i_1(t),\ldots,i_l(t+ (l-1) \Delta t)} \approx \frac{1}{N}\sum_{t=1}^N p_t(l).$$ Moreover, $$p_t(l)$$ can be expressed by means of the recurrence matrix $$p_t(l) = \lim_{N \to \infty}\frac{1}{N}\sum_{s=1}^N \prod_{k=0}^{l-1}R_{t+k,s+k}.$$ Using these relation, we find an estimator for the second order Rényi entropy by means of the RP (Thiel et al., 2003) $$K_2(l)= -\frac{1}{l\,\Delta t} \ln \left( p^c(l) \right)=-\frac{1}{l\,\Delta t} \ln \left(\frac{1}{N^2}\sum_{t,s=1}^N \prod_{k=0}^{l-1} R_{t+k,s+k}\right),$$ where $$p^c(l)$$ is the probability to find a diagonal of at least length $$l$$ in the RP.
On the other hand, the $$l$$-dimensional correlation sum can be used to define $$K_2$$ (Grassberger & Procaccia, 1983a). This definition of $$K_2$$ can also be expressed by means of RPs and yields the following fundamental relationship (Thiel et al., 2003): $$\ln p^c(l) \sim \varepsilon^{D_2} e^{-K_2(\varepsilon)\tau}.$$ $$D_2$$ is the correlation dimension of the system under consideration (Grassberger & Procaccia, 1983). Therefore, in a logarithmic presentation of $$p^c(l)$$ over $$l$$ the slope of the lines corresponds to $$-K_2\tau$$ for large $$l$$, which is independent of $$\varepsilon$$ for a rather large range in $$\varepsilon$$.
If we represent the slope of the curves for large $$l$$ in dependence on $$\varepsilon$$ a plateau can be found for chaotic systems. The value of this plateau determines $$K_2$$. If the system is not chaotic, we have to consider the value of the slope for a sufficiently small value of $$\varepsilon$$.
The relationship between $$K_2$$ and RPs also allows to estimate $$D_2$$ from $$p^c(l)$$. Considering the relationship between $$K_2$$ and RPs for two different thresholds $$\varepsilon$$ and $$\varepsilon + \Delta \varepsilon$$ and dividing both of them, we get $$D_2(\varepsilon) = \frac{\ln\left(\frac{p^c(\varepsilon,l)}{p^c(\varepsilon+\Delta\varepsilon,l)}\right)}{ \ln\left(\frac{\varepsilon}{\varepsilon+\Delta\varepsilon}\right)},$$ which is an estimator of the correlation dimension $$D_2$$ (Grassberger, 1983).
### Generalised mutual information (generalised redundancies)
The mutual information quantifies the amount of information that we obtain from the measurement of one variable on another. It has become a widely applied measure to quantify dependencies within or between time series (auto and cross mutual information). The time delayed generalised mutual information (redundancy) $$I_q(\tau)$$ of a system $$\vec{x}_i$$ is defined by (Rényi, 1970) $$I_q^{\vec x}(\tau) = 2 H_q - H_q(\tau).$$ $$H_q$$ is the $$q$$th-order Rényi entropy of $$\vec{x}_i$$ and $$H_q(\tau)$$ is the $$q$$th-order joint Rényi entropy of $$\vec{x}_i$$ and $$\vec{x}_{i+\tau}$$ $$H_q=-\ln\sum\limits_{k}p_k^q,\qquad H_q(\tau)=-\ln\sum\limits_{k,l}p_{k,l}^q(\tau),$$ where $$p_k$$ is the probability that the trajectory visits the $$k$$th box and $$p_{k,l}(\tau)$$ is the joint probability that $$\vec{x}_i$$ is in box $$k$$ and $$\vec{x}_{i+\tau}$$ is in box $$l$$. Hence, for the case $$q=2$$ we can use the recurrence matrix to estimate $$H_2$$ $$H_2=-\ln \left(\frac{1}{N^2}\sum_{i,j=1}^N R_{i,j}\right)$$ and $$H_q(\tau)$$ $$H_2(\tau) = -\ln \left(\frac{1}{N^2}\sum_{i,j=1}^N R_{i,j} R_{i+\tau,j+\tau}\right) = -\ln \left(\frac{1}{N^2}\sum_{i,j=1}^N JR^{\vec x, \vec x}_{i,j}(\tau)\right),$$
where $$JR_{i,j}(\tau)$$ denotes the delayed joint recurrence matrix. Then, the second order generalised mutual information can be estimated by means of RPs (Thiel et al., 2003) $$I_2^{\vec x}(\tau)= \ln \left(\frac{1}{N^2}\sum\limits_{i,j=1}^N JR_{i,j}^{\vec x, \vec x}(\tau)\right) - 2 \ln \left(\frac{1}{N^2} \sum\limits_{i,j=1}^N R_{i,j}\right).$$
### References
• P. Grassberger & I. Procaccia, Measuring the strangeness of strange attractors, Physica D, 9(1-2), 189-208, 1983.
• P. Grassberger & I. Procaccia, Estimation of the Kolmogorov entropy from a chaotic signal, Physical Review A, 9(1-2), 2591-2593, 1983a.
• Grassberger, P., Generalized Dimensions of Strange Attractors, Physics Letters A, 97(6), 227-230, 1983.
• E. Ott, E. 1993, Chaos in Dynamical Systems, Cambridge University Press
• A. Rényi, Probability theory, North-Holland (appendix), 1970
• M. Thiel, M. C. Romano & J. Kurths, Analytical Description of Recurrence Plots of white noise and chaotic processes, Izvestija vyssich ucebnych zavedenij/ Prikladnaja nelinejnaja dinamika - Applied Nonlinear Dynamics, 11(3), 20-30, 2003.
• N. Marwan, M. C. Romano, M. Thiel & J. Kurths, Recurrence Plots for the Analysis of Complex Systems, Physics Reports, 438(5-6), 237-329, 2007.
|
2017-02-23 00:09:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9212183356285095, "perplexity": 368.29660783149995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171066.47/warc/CC-MAIN-20170219104611-00420-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://www.gamedev.net/forums/topic/78899-c-question/
|
Archived
This topic is now archived and is closed to further replies.
c++ question
Recommended Posts
blooper 122
I''m relatively new to C++ and I had a question. In the command prompt is there a way to make it so everything remains static/in place instead of scrolling? Also I was wondering about coloring letters (I don''t know if it makes a difference but I have XP''s command prompt not reg DOS). I''m really more concerned with the scrolling issue though, thanks for your time.
CONIO.h
Share on other sites
blooper 122
Thanks! Anywhere I can find a good tut on it...? (Sorry about the newbie sounding garbabge but I''m unfortuneately in a bit of a ruch). Many thanks ahead of time.
Share on other sites
Jeff D 122
Well I can help with coloring the text, but the scrolling is an issue I would have to do myself. Her goes with the coloring.
You need this line
#include < windows.h >
in that header file there is 2 things you need to use this:
HANDLE H_OUTPUT = GetStdHandle(STD_OUTPUT_HANDLE);
You need that to do colors. Then you make need to do this:
SetConsoleTextAttribute(H_OUTPUT, FOREGROUND_RED);
cout << "Hello World";
Now there is six color attributes they are:
FOREGROUND_RED
FOREGROUND_BLUE
FOREGROUND_GREEN
BACKGROUND_RED
BACKGROUND_BLUE
BACKGROUND_GREEN
The FOREGROUND colors affect the symbol, while the BACKGROUND colors affect behind the symbol.
Now you can do this:
SetConsoleTextAttribute(H_OUTPUT, FOREGROUND_RED | FOREGROUND_GREEN | BACKGROUND BLUE);
That will make orange lettering on blue.
The '|'s, I wont go into details, but if you want orange you have to mix up FOREGROUND_RED and FOREGROUND_GREEN. The '|' tells it to use that.
Try this:
#include < windows.h >#include < iostream >using namespace std;int main(){ HANDLE H_OUTPUT = GetStdHandle(STD_OUTPUT_HANDLE); SetConsoleTextAttribute(H_OUTPUT, FOREGROUND_RED | FOREGROUND_GREEN | BACKGROUND_BLUE); cout << "Hello!";}
I hope I helped
Jeff D
Suffered seven plagues, but refused to let the slaves go free. ~ Ross Atherton
Edited by - Jeff D on February 6, 2002 9:46:56 PM
blooper 122
bump
|
2017-09-19 15:13:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32892799377441406, "perplexity": 11256.303181957388}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685850.32/warc/CC-MAIN-20170919145852-20170919165852-00272.warc.gz"}
|
http://forums.ross-tech.com/showthread.php?5918-Testbench-setup&s=fceaba08a724129453b155024834de7e&p=111901&mode=threaded
|
Hello guys,
I have a neu projekt now and maybe you can help me, it has something to do with ECU #16 and #44,
normaly an EPS#44 need some external agle signal, it is an normal CAN message from #16,
I connected EPS#44 to my CanRestBusSimmulation from Audi(it is Audi EPS), but I can not simulate a CanRBS from #16(to complex counter), thats why I always have a "External Agle Fail". then I connect #16(agle sensor) and #44 physically with cable, it was a solution from this problem,BUT now I need one more ECU... Can I hack the ECU#44(EPS) and say :"you do note need external agle anymore, work without it"? Maybe EEPROM programming? Thanks for reading,sorry I saw you write about#16 and maybe you have solution for it.
|
2018-09-26 03:14:18
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8145029544830322, "perplexity": 7863.284101939849}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267163146.90/warc/CC-MAIN-20180926022052-20180926042452-00491.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-5-polynomials-and-polynomial-functions-5-4-factor-and-solve-polynomial-equations-5-4-exercises-skill-practice-page-357/30
|
## Algebra 2 (1st Edition)
Solving the problem correctly, we find: $$8x^3=27 \\ x^3=\frac{27}{8} \\ x=\frac{3}{2}$$
|
2022-12-09 10:15:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7381287217140198, "perplexity": 4501.7605189417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711394.73/warc/CC-MAIN-20221209080025-20221209110025-00558.warc.gz"}
|
https://study.com/academy/answer/equilibrium-in-a-simple-keynesian-model-numerical-example-suppose-that-the-following-parameters-apply-to-an-open-economy-with-a-government-that-is-running-a-balanced-budget-autonomous-consumption.html
|
# Equilibrium in a Simple Keynesian Model: Numerical Example Suppose that the following parameters...
## Question:
Equilibrium in a Simple Keynesian Model: Numerical Example
Suppose that the following parameters apply to an open economy with a government that is running a balanced budget.
Autonomous consumption =$200 billion Marginal propensity to consume = 0.8 Investment =$50 billion
Taxes (lump sum) =40 billion
Government spending (G) = 40 billion
Exports (X) = 80 billion
Import Function = M = 0.1 Yd
(Recall: Yd = disposable income = Y - T
Answer the following questions:
I. Write an expression for the consumption function (don't forget the taxes!)
2. Write an expression for the aggregate expenditure function.
3. Find equilibrium income.
4. What is the marginal propensity to import?
5. What is the size of the trade deficit/surplus?
## Keynesian Economics:
Keynesian economics is all about the aggregate expenditure in the economy and its aftereffects on the economy?s output and the level of inflation. It was developed by the economist John Maynard Keynes and was popularly followed by various economists to understand the great depression and recessions.
## Answer and Explanation:
Become a Study.com member to unlock this answer!
1.
The equation for consumption function looks like-
{eq}C = autonomous\;consumption + mpc(Income - Tax) {/eq}
Therefore, {eq}C = 200 + 0.8(Y -...
See full answer below.
#### Learn more about this topic:
What is Macroeconomics? - Definition & Principles
from
Chapter 4 / Lesson 9
33K
Understand the definition of macroeconomics and what macroeconomics focuses on. Learn the principles of macroeconomics, including economic output, economic growth, unemployment, inflation and deflation, and investment.
|
2022-05-20 00:10:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6349043250083923, "perplexity": 7214.992014724795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662530553.34/warc/CC-MAIN-20220519235259-20220520025259-00737.warc.gz"}
|
http://mathhelpforum.com/statistics/99121-mean-absolute-variation.html
|
1. ## mean absolute variation
Hi everyone,
Sorry if this question is too random or general, but it´s bugging me.
Why do we write the mean absolute variation like this:
$
\psi=\frac {1}{N} \sum_{i=1}^N|x_i-\mu|
$
and not like this?:
$
\psi=\frac {\sum_{i=1}^N|x_i-\mu|}{N}
$
i.e. why do we multiply the sum of deviations by 1/N, instead of just dividing the sum of deviations by N?
Thanks
2. Originally Posted by Occurin
...
why do we multiply the sum of deviations by 1/N,
instead of just dividing the sum of deviations by N?
...
It looks to me as if it's the same.
Could you explain the difference, please?
3. Aidan,
Sorry, I wasn´t clear... they are the same, which is my point. The first one seems to me to add another layer of non-intuitive complexity that bugs beginners like me... I wondered if there was a purpose to it that I haven´t caught.
4. Originally Posted by Occurin
Hi everyone,
Sorry if this question is too random or general, but it´s bugging me.
Why do we write the mean absolute variation like this:
$
\psi=\frac {1}{N} \sum_{i=1}^N|x_i-\mu|
$
and not like this?:
$
\psi=\frac {\sum_{i=1}^N|x_i-\mu|}{N}
$
i.e. why do we multiply the sum of deviations by 1/N, instead of just dividing the sum of deviations by N?
The difference is not mathematical but typographical: the version with 1/N looks more elegant and takes up less space. Back in the days of manual typesetting, printers hated large build-up fractions, and encouraged copy-editors and authors to avoid them where possible. Now that everyone uses TeX, that's not so much of an issue, but old preferences linger on.
5. Thanks a lot, Opalg, that´s exactly the kind of explanation I was looking for.
|
2017-04-30 23:33:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9115396738052368, "perplexity": 974.5518966108593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125881.93/warc/CC-MAIN-20170423031205-00638-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/fundamental-ideas.222281/
|
# Fundamental Ideas
## Main Question or Discussion Point
Here is an interesting question that my urge for simplification and basic/fundamental understanding has lead me to ask:
--------------------------------------------------------------
What would you say are the most fundamental ideas of Physics, the most fundamental science? By fundamental what I mean is, what laws/ideas do you think are the minimal theoretical basis from which all our current knowledge (and possibly more) can be derived?
Or more succinctly:
What are the ideas that comprise the foundation of Physics as we know it today?
--------------------------------------------------------------
On a similar subject, Richard Feynman states (from The Feynman Lectures on Physics):
"If, in some cataclysm, all scientific knowledge were to be destroyed, and only one sentence passed on to the next generation of creatures, what statement would contain the most information in the fewest words? I believe it is the atomic hypothesis (or atomic fact, or whatever you wish to call it) that all things are made of atoms — little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another. In that one sentence you will see an enormous amount of information about the world, if just a little imagination and thinking are applied."
I hesitate to give my opinion because I am only an second year Physics student and I still have an immense amount to learn but I will try just to start off:
1. The discreteness of matter at the smallest scales
-Feynman's "Atomic Fact"
2. Conservations of Energy and Momentum
-The most basic conservation laws
3. The Idea of a phase space cell
-This is my attempt to condense the Exclusion and Uncertainty and other key Quantum Mechanics principles into one idea
4. The constancy of the speed of light
-Special Relativity can mostly be derived from this idea
5. All fundamental forces are manifestations of four "forces" and are described by certain properties (charge, etc.) and respective conservation laws
-I'm sure this can be put more succinctly
6. The Idea that we live in the middle of a spectrum of sizes in our universe that ranges from incomprehensibly large to incomprehensible small and that both of those extreme domains cannot be understood using our "middle of the road" intuition.
Last edited:
Related Other Physics Topics News on Phys.org
Q_Goest
Homework Helper
Gold Member
That's a good question and I think you have a good start on the answer.
On number two, add conservation of mass to momentum and energy.
The other one that always comes to mind is locality. No information can be transmitted faster than light.
Conservation laws and locality lead to local, causal interactions.
In physics, the principle of locality is that distant objects cannot have direct influence on one another: an object is influenced directly only by its immediate surroundings. This was stated as follows by Albert Einstein in his article "Quantum Mechanics and Reality" ("Quanten-Mechanik und Wirklichkeit", Dialectica 2:320-324, 1948):
The following idea characterises the relative independence of objects far apart in space (A and B): external influence on A has no direct influence on B; this is known as the Principle of Local Action, which is used consistently only in field theory. If this axiom were to be completely abolished, the idea of the existence of quasienclosed systems, and thereby the postulation of laws which can be checked empirically in the accepted sense, would become impossible.
Ref: http://en.wikipedia.org/wiki/Principle_of_locality" [Broken]
Last edited by a moderator:
Thats all good, but i also think that the main principle that it has been used more frequently in modern knwoledge is OCKHAM RAZOR!!!!
i, sometimes, ask my self if that is a "good" way to make science!!!
regards
marco
Q_Goest said:
On number two, add conservation of mass to momentum and energy.
There are a couple of reasons I didn't include conservation of mass:
1. Unlike the two other conservation laws, Conservation of mass is only a non-relativistic approximation to Physical reality. Thus, although certainly useful in Chemistry and some domains of non-relativistic Physics, it is not completely true.
-As an example, consider an atomic nucleus. The total mass of the full system (the entire nucleus) is smaller than the mass of its constituent particles.
2. Conservation of mass is simply a manifestation of conservation of energy and thus, cannot be considered truly "Fundamental"
To quote Einstein on the subject (from What is the Theory of Relativity?, 1919):
"The most important upshot of the Special Theory of Relativity concerned the inert mass of corporeal systems. It turned out that the inertia of a system necessarily depends on its energy-content, and this led straight to the notion that inert mass is simply latent energy. The principle of the conservation of mass lost its independence and became fused with that of the conservation of energy: "
$$E_{0}=m_{0}c^{2}$$
Q_Goest said:
The other one that always comes to mind is locality. No information can be transmitted faster than light.
The fact that no information can be transmitted faster than light can be derived from the Special Theory of Relativity. I included in my original list (at number 4) the constancy of the speed of light with the idea that this theory could be derived from this one fundamental axiom. Thus, the impossibility of speeds above c no longer seems fundamental, being nothing more than a indirect result of the constancy of the speed of light.
Locality does seem to be independent of any of the axioms on my original list so you're right, it should be included as it is certainly fundamental.
Q_Goest said:
Conservation laws and locality lead to local, causal interactions.
It is interesting to consider the four fundamental forces as being simply "enforcers" of these conservation laws and of the locality axiom instead of being something independent. In this case, number 5 of my original list appears clumsy and non-fundamental and can be restated as:
5. There exist four fundamental forces, which act to maintain all of the above axioms.
Marco_84 said:
Thats all good, but i also think that the main principle that it has been used more frequently in modern knwoledge is OCKHAM RAZOR!!!!
Ockham's razor is certainly an important principle which states, in the words of William of Ockham:
"Entia non sunt multiplicanda praeter necessitatem" or "Entities should not be multiplied unnecessarily."
Although a good theoretical guideline for the development of all science, this isn't a fundamental Physical axiom, which is what we're after.
Last edited:
Q_Goest
Homework Helper
Gold Member
The conservation of energy plus mass is more fundamental than the conservation of energy or conservation of mass alone of course.
There have to be many papers on this topic in philosophy of science, though I don't know what the important ones may be. All this seems to be fairly basic stuff, so I wonder what a good reference for this would be. Maybe someone can suggest one (or many).
Ockham's razor is certainly an important principle which states, in the words of William of Ockham:
"Entia non sunt multiplicanda praeter necessitatem" or "Entities should not be multiplied unnecessarily."
Although a good theoretical guideline for the development of all science, this isn't a fundamental Physical axiom, which is what we're after.
so tell me what is a PHYSICAL reasoning, postulating isotropy of space you get conservations... and so on... but to follow theese insights you need a guide principle that tells you wich is the best frame to work in!
am i wrong?
regards
marco
Q_Goest
Homework Helper
Gold Member
so tell me what is a PHYSICAL reasoning, postulating isotropy of space you get conservations... and so on... but to follow theese insights you need a guide principle that tells you wich is the best frame to work in!
am i wrong?
regards
marco
Hi marco,
I tend to agree with americanforest on this one. Ockham's razor isn't a physical principal (axiom) on par with conservation principals. It's more of a guide.
This question is one that always intrigues me, so I'm interested in hearing what others may have to say also.
rbj
4. The constancy of the speed of light
-Special Relativity can mostly be derived from this idea
i would say it as
4. The constancy of the laws of physics, independent of who the inertial observer is.
-Special Relativity can mostly be derived from this idea
and leave the constancy of the speed of propagation of E&M as well as the other fundamental forces as a consequence of the constancy of the laws of physics for every inertial observer.
LURCH
I think the most fundemantal principle in the physical sciences is causality. For any other scientific idea to make sense, we must start with the understanding that every "thing" (event, physical reality, whatever we wish to call it), is caused by some other thing. This can be broken down into two principles; that no physical event happens "for no reason" (or without being caused), and that nothing is its own cause. All other scientific principles derive from this one. Without it, one can claim with validity that things are as they are, "just cause they are," and no science can take place.
Marco_84 said:
so tell me what is a PHYSICAL reasoning, postulating isotropy of space you get conservations... and so on... but to follow theese insights you need a guide principle that tells you wich is the best frame to work in!
am i wrong?
No, you're not wrong; We do need ideas like Ockham's Razor to "follow these insights" but my intent in this discussion wasn't to "follow" them (i.e. use them to derive more specific, non fundamental, laws). We just want to find out what these original Physical "insights", or laws, are.
Now that I think about it, I think that for the same reason we can cut my original statement of #6 off of our list
americanforest said:
6. The Idea that we live in the middle of a spectrum of sizes in our universe that ranges from incomprehensibly large to incomprehensible small and that both of those extreme domains cannot be understood using our "middle of the road" intuition.
for the same reason that we reject Ockham's Razor.
I've never heard your point about the isotropy of space providing conservation principles before but I don't want to divert this thread so I won't ask you to explain here. However, I'd really appreciate it if you could send me a private message with an explanation or some reference.
rbj said:
i would say it as
4. The constancy of the laws of physics, independent of who the inertial observer is.
-Special Relativity can mostly be derived from this idea
and leave the constancy of the speed of propagation of E&M as well as the other fundamental forces as a consequence of the constancy of the laws of physics for every inertial observer.
Yes, this is much better than my original version. Let me be a little bit nit picky and state it as:
4. The constancy of the laws of physics, independent of any physical characteristics of the inertial observer.
LURCH said:
I think the most fundemantal principle in the physical sciences is causality. For any other scientific idea to make sense, we must start with the understanding that every "thing" (event, physical reality, whatever we wish to call it), is caused by some other thing. This can be broken down into two principles; that no physical event happens "for no reason" (or without being caused), and that nothing is its own cause. All other scientific principles derive from this one. Without it, one can claim with validity that things are as they are, "just cause they are," and no science can take place.
I know that causality is obviously fundamental in macroscopic Physics but, as I have little experience with Quantum Mechanics, I can't really speak for causality in that context. Causality there certainly seems much less obvious to me.
----------------------------------------------------------------------
So, the current list is:
1. No Physical event happens without a cause, and no event is its own cause.
2. Conservations of Energy and Momentum
-perhaps should be replaced with a statement concerning the "isotropy of space" according to Marco
3. The Idea of a phase space cell
-This certainly presents the fact that particles are localized in a certain spatial volume which is why I felt justified in getting rid of the original statement of number 1, which was Feynman's "Atomic Fact"
-I am no expert on the subject but, as I understand it, for a event to take place the phase space cells of the interacting particles and of any intermediate vector bosons must all overlap, thus implying locality and making this statement more fundamental than one of locality. If I misunderstand the concept please correct me.
4. The constancy of the laws of physics, independent of any physical characteristics of the inertial observer.
5. There exist four fundamental forces, which act to maintain all of the above axioms.
Last edited:
rbj
No, you're not wrong; We do need ideas like Ockham's Razor to "follow these insights" but my intent in this discussion wasn't to "follow" them (i.e. use them to derive more specific, non fundamental, laws).
... Let me be a little bit nit picky and state it as:
4. The constancy of the laws of physics, independent of any physical characteristics of the inertial observer.
besides Occam's Razor, there is a Einstein's related warning:
Things should be described as simple as possible, but no simpler.
rbj said:
Well then let me try to get those nits back out with more picking.
I don't see how I made your original version worse. I only tried to make it sound more physical by replacing the statement involving a specific "who" with a statement concerning a set of "physical characteristics".
I propose we compromise on Wikipedia's statement of this:
Wikipedia said:
The laws of physics are invariant for the transition from one inertial system to any other arbitrarily chosen inertial system
By invoking Einstein's quote, were you implying that my statement was not "as simple as possible" or that it was "simpler"?
I don't want this discussion to degenerate into a exercise in semantics but after all, with a subject as important as the fundamental axioms of Physics, it is important to be careful with wording and meaning.
Last edited:
Dr. Courtney
Gold Member
Thats all good, but i also think that the main principle that it has been used more frequently in modern knwoledge is OCKHAM RAZOR!!!!
Occam's Razor is a philosophical preference, not an epistemological principle, and certainly not a scientific result.
Strictly followed, Occam's Razor demands that we accept the simplest possible theoretical explanation for existing data. However, science has shown repeatedly that future data often supports more complex theories than existing data. So why should we be so sure that the simplest theories are the right ones?
We shouldn't. Science has a preference for the simplest explanations that are consistent with the data, but history shows that these simplest explanations often yield to complexities as new data becomes available.
Occam's Razor rejected DNA as the carrier of genetic information in favor of proteins, since proteins provided the simpler explanation. Occam's Razor rejected the sun-centered model in favor of the geocentric model, and it would have certainly viewed Kepler's or Newton's laws as unreasonably complicated had they been offered in Galileo's time. Theories that reach far beyond the available data are rare, but General Relativity provides one example.
Michael Courtney
Occam's Razor is a philosophical preference, not an epistemological principle, and certainly not a scientific result.
Strictly followed, Occam's Razor demands that we accept the simplest possible theoretical explanation for existing data. However, science has shown repeatedly that future data often supports more complex theories than existing data. So why should we be so sure that the simplest theories are the right ones?
We shouldn't. Science has a preference for the simplest explanations that are consistent with the data, but history shows that these simplest explanations often yield to complexities as new data becomes available.
Occam's Razor rejected DNA as the carrier of genetic information in favor of proteins, since proteins provided the simpler explanation. Occam's Razor rejected the sun-centered model in favor of the geocentric model, and it would have certainly viewed Kepler's or Newton's laws as unreasonably complicated had they been offered in Galileo's time. Theories that reach far beyond the available data are rare, but General Relativity provides one example.
Michael Courtney
I agree wth your main line reasoning, but not all of them!!!
I posted that because, especially in physics, we use paradigms, principles etc. that just fit our current data.... citing your're words...
marco
Dr. Courtney
Gold Member
I agree wth your main line reasoning, but not all of them!!!
I posted that because, especially in physics, we use paradigms, principles etc. that just fit our current data.... citing your're words...
marco
Should not all sciences play by the same set of epistemological rules?
Is not the foundation of all that repeatable experiment is the ultimate arbiter of theoretical validity?
If Occam's Razor was one of the epistemological rules, then we should demand that it is completely general and applicable to all of the sciences. But it is not completely general in any single discipline, much less all of them.
Consider the theory that the earth is round that was espoused by the greeks and supported with measurements that provided a reasonable estimate of the earth's diameter. This theory was largely rejected by the europeans for many centuries since it seemed simpler to them that the earth is flat.
The atomic theory was also originally espoused by the greeks, but reasoning akin to Occam's Razor delayed its acceptance until Einstein's description of Brownian motion. Even though chemistry had provided considerable support for the atomic theory, the notion that matter is continuous seemed simpler.
Likewise, Newton's idea of light particles seemed simpler than Young's idea of waves, so many clinged to it. And once the wave idea was accepted, the idea of aether as a transmission medium seemed simpler than transmission through a vacuum.
The point is that it's not possible for Occam's Razor to be a scientific principle rather than an aesthetic preference because a suitable definition for "simpler" is absent. Which is simpler, particle or wave, aether or vacuum, flat earth or round?
Occam's Razor is also much less than general in biological matters. For example, DNA testing would conclude with a very high level of certainty that I am the natural father of seven children. My birth certificate says that I was the result of a single birth. All of the children were born after I was married, although only four were born to my wife. It would seem that my wife has a solid case for divorce, and the mothers of the other three children have a solid case for paternity suits. Occam's Razor would have me in dire straights indeed!
But the fact is that I only fathered the four children that were born to my wife. The other three children were fathered by my identical twin brother. But Occam's Razor would conclude that I must be the father of all seven until paternity analysis progresses to the point of distinguishing between children of identical twin fathers. This might not ever happen. My birth certificate that says "single birth" is in error because it is a re-issued certificate that is simply wrong. We suspect that since we were born in New Orleans, hurricane Katrina destroyed our original birth records.
Which is simpler, to believe that a man whose DNA matches paternity for seven children really fathered all seven or to believe that three of those children were fathered by an identical twin brother whose existence cannot be proven because the birth records were destroyed in hurricane Katrina?
Michael Courtney
Last edited:
Should not all sciences play by the same set of epistemological rules?
Is not the foundation of all that repeatable experiment is the ultimate arbiter of theoretical validity?
If Occam's Razor was one of the epistemological rules, then we should demand that it is completely general and applicable to all of the sciences. But it is not completely general in any single discipline, much less all of them.
Consider the theory that the earth is round that was espoused by the greeks and supported with measurements that provided a reasonable estimate of the earth's diameter. This theory was largely rejected by the europeans for many centuries since it seemed simpler to them that the earth is flat.
The atomic theory was also originally espoused by the greeks, but reasoning akin to Occam's Razor delayed its acceptance until Einstein's description of Brownian motion. Even though chemistry had provided considerable support for the atomic theory, the notion that matter is continuous seemed simpler.
Likewise, Newton's idea of light particles seemed simpler than Young's idea of waves, so many clinged to it. And once the wave idea was accepted, the idea of aether as a transmission medium seemed simpler than transmission through a vacuum.
The point is that it's not possible for Occam's Razor to be a scientific principle rather than an aesthetic preference because a suitable definition for "simpler" is absent. Which is simpler, particle or wave, aether or vacuum, flat earth or round?
Occam's Razor is also much less than general in biological matters. For example, DNA testing would conclude with a very high level of certainty that I am the natural father of seven children. My birth certificate says that I was the result of a single birth. All of the children were born after I was married, although only four were born to my wife. It would seem that my wife has a solid case for divorce, and the mothers of the other three children have a solid case for paternity suits. Occam's Razor would have me in dire straights indeed!
But the fact is that I only fathered the four children that were born to my wife. The other three children were fathered by my identical twin brother. But Occam's Razor would conclude that I must be the father of all seven until paternity analysis progresses to the point of distinguishing between children of identical twin fathers. This might not ever happen. My birth certificate that says "single birth" is in error because it is a re-issued certificate that is simply wrong. We suspect that since we were born in New Orleans, hurricane Katrina destroyed our original birth records.
Which is simpler, to believe that a man whose DNA matches paternity for seven children really fathered all seven or to believe that three of those children were fathered by an identical twin brother whose existence cannot be proven because the birth records were destroyed in hurricane Katrina?
Michael Courtney
OK,
I think i'll have to make some examples to let me understan, because i cannot explain my self as good as u can, obvi. this is not my mothertongue.
What im trying to say is that, theories like Spec.Rel. had been chosen because they introduced less postulates wrt others.
QM has less postulates therefore is easier to explain chemistry reactions, bindings etc..
against the complicated explanations of the XVIII century chemistry, obvious they put the first stones..
The geocent system is easier then tolomeo's with cycles and epicycles...
About u're DNA argument i think it is true if we use okkam razor we have to choice for the 7 children option; but in this moment i think we are using a knife not a razor :)
I mean that if the twins are present and we can recognize that they are twins for real we could admit the first possibility ; this because now we have more datas (we can prove the existence of the twins) and now we can move to another theory right?
What im trying to say is: occam razor is not a science principle (you were right) but as a science tool that is/had been used many times.
I personally think that we have to use it with caution, in fact science roughly is: BUILDING MODELS; and untill Brahe, Copernico,Galielo.... The Tolome's model was good enough to predicts many things. hope you understan what i mean.
So at the end what say: It's not always good to make postulates (fundamental also) because growning technologies and knowledges it is possible to disprove some of them, but they are necessary to the development of conossaince. In all our progress i think that the razor has been used many times and thats why i said it was a very fund postulate. maybe yes it is more an instrument to make knowledge instead of a postulate; But if we make the discipline of knowledge i believe that we agree that occam razor could be a good principle.
regards
marco
Dr. Courtney
Gold Member
What im trying to say is that, theories like Spec.Rel. had been chosen because they introduced less postulates wrt others.
The postulate thing works pretty well in physics, but not so well in biology.
In other words, physicists are adept at separating the postulates from the consequences, so it's not to hard for us to eventually understand that the one postulate of special relativity is simpler than competing models even though it introduces a multiplicity of consequences that on the surface appear complex and hard to swallow:
1) No more aether.
2) Space and time not absolute.
3) No absolute reference frame.
4) Galilean relativity is wrong.
Biolgists, on the other hand play by a different set of rules. The cell theory originally said that "all living things are made of cells." Instead of making an exception for viruses, they re-definied living things to only include stuff made of cells. Thus the "cell theory" is no longer a simple principle testable by experiment, it is a tautology: true by definition.
Biologists have similar confounding difficulties with definitions of species.
But is this really so different than re-defining planethood to exclude Pluto?
How would we apply Occam's razor to classification of objects in the system?
How would we apply Occam's razor to defining and classifying living things?
What would Occam do with stuff like prions and viruses that seem more than mere molecules but that currently are not classified as living things?
Michael Courtney
u're right, in biology we amke data base full of data and try to classify, the time for modeling is less wrt othere sciences.... and in those times it is not clear the distinction beetwen one theory and another, at least not as much as inphysics or math.
I think that basically thats the distinction between exact sciences and human ones...
regards
marco
|
2020-01-18 00:38:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5765668749809265, "perplexity": 1001.1845744755095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250591431.4/warc/CC-MAIN-20200117234621-20200118022621-00268.warc.gz"}
|
http://www.contrib.andrew.cmu.edu/~enmccorm/blog/?p=83
|
# R, RStudio, knitr, apa6, citations and LaTeX minimal working example
Nota bene: Original post thrown up quickly to promote usage. Some helpful edits have been made, but as usual, continue at your own risk.
I detest copy-pasting analysis results (laziness is a virtue), so my current statistical analysis workflow uses the knitr package to convert results from R into a LaTeX file and then into a PDF report (by pressing one button). After months of using this power only for Beamer presentations and statistics homework assignments, I’ve figured out how to add the apa6 package and automatic bibliography generation to the process for building a reproducible manuscript. This post shares the framework, in hopes that free some readers of the torture that is copy-pasting from SPSS to Excel to Word–and then needing to update the analysis.
## How does this work?
In RStudio, I have set my Sweave options (in Tools > Global Options) to “Weave Rnw files using: knitr.” With this setting, I simply open up the RNW file of interest, and work as desired.
RNW files have both TeX and R commands in them–but the R commands are encased in “chunks” and one must run one an R package (either knitr, or its predecessor, Sweave) on the RNW file to create a TEX file that can then be compiled as normal. Luckily, RStudio has a “compile PDF” button when you are editing RNW files, make that RNW->TEX->PDF conversion a one-click process (assuming you have no debugging to do). What increases the one-click fun potential is that in the RNW file I can include all of my normal LaTeX bells and whistles, including citations and automatic bibliographies.
(If you are unfamiliar with anything I’ve mentioned, there are many existing resources for R, knitr, and LaTeX, I would recommend becoming comfortable with R and LaTeX separately, then taking on their combination through knitr.)
## What is the example?
I have created a short example RNW file (code shown below, and also available as a file version here). The code uses a “references.bib” file. I’ve also uploaded a .zip archive with the .rnw, .bib, and three .pdf examples. These PDF examples are the document compiled in the three different modes available from the apa6 package: manuscript mode, document mode, and (a semi-broken/buggy) journal mode.
\documentclass[man]{apa6} % man for manuscript format, jou for journal format, doc for standard LaTeX document format
\usepackage[natbibapa]{apacite} % Divine intervention help you if you need to use a different citation package.
\usepackage[american]{babel}
\usepackage[utf8]{inputenc}
\usepackage{csquotes}
\usepackage{url} % this allows us to cite URLs in the text
\usepackage{graphicx} % allows for graphic to float when doing jou or doc style
\usepackage{verbatim} % allows us to use \begin{comment} environment
\usepackage{caption}
%\usepackage{lscape}
\usepackage{pdflscape}
\title{Minimal Working Example of RStudio-Friendly Knitr and apa6 Files}
\shorttitle{Knitr and apa6}
% Based off of code provided in a question by Melvin Roest on tex.stackexchange.com
% (http://tex.stackexchange.com/questions/176745/strange-knitr-behavior-with-apa6-class-manuscript)
\author{Author Name}
\affiliation{Department of Intellectual Inquiry \\ Carnegie Mellon University}
% apa6 uses different calls for more than one author, example
%\twoauthors{Student T. Stat}{Committee:}
%\twoaffiliations{Department of Wheat-Based Yeast Products \\ Carnegie Mellon University}{Advisor Awesome (chair), Committe Member 2, Committee Member 3}
%\rightheader{knitr and apa6} % for jou format
\authornote{Acknowledgements here. Because you know you didn't do this alone.}
\note{\today}
\journal{} %\journal{Fictional Journal of Awesome} % for jou format
\volume{} %\volume{5,1,5-10}% Volume, number, pages; typeset in the top left header in jou and doc modes, underneath the content of %\journal
\keywords{minimal working example, manuscript, apa6}
\abstract{Abstract here. 95 words left.}
\ccoppy{} %\ccoppy{Copyright notice, etc.; typeset in the top right header of page 1 (jou and doc modes only)}
\begin{document}
\maketitle
<<setup, include=FALSE, cache=FALSE>>=
#library(knitr)
# set working directories
analysis.directory <- "C:/.../data_analysis/"
jul2014.data.directory <- "C:/.../data/data_JUL/"
output.directory <- "C:/.../manuscript/"
setwd(output.directory)
# set global chunk options, put figures into folder
opts_chunk\$set(fig.path='figures/figure-', fig.align='center', fig.show='hold')
options(replace.assign=TRUE,width=75)
# save workspace image, if you want
#the.date >=
require(ggplot2)
#require(plyr)
#setwd(jul2014.data.directory)
#my.df >=
# note: I use fig.width, fig.height, and out.width to resize figures the way I want them. Your mileage may vary.
sample.n <- 100; x <- c(1:sample.n)/2; y <- x + rnorm(sample.n,0,2)
df.a <- data.frame(y=y,x=x,treat=factor(rep(c(0,1),each=(sample.n/2)),
labels=c("control","treatment")))
a.lmfit.t0 <- lm(y ~ x, data=subset(df.a,treat=="control"))
a.lmfit.t1 <- lm(y ~ x, data=subset(df.a,treat=="treatment"))
ggplot(df.a,aes(x=x,y=y,group=treat)) + geom_point(aes(shape=treat),
size = 4) + scale_shape_manual(values=c(19,1)) + ggtitle(
"No T effect") + geom_abline(intercept=coef(a.lmfit.t0)[1],
slope=coef(a.lmfit.t0)[2],col="red",linetype="dashed",lwd=1) + geom_abline(
intercept=coef(a.lmfit.t1)[1],slope=coef(a.lmfit.t1)[2],
col="blue",linetype="solid",lwd=0.5)
@
\end{center}
\end{figure}
Also, you can still print code chunks:
<<codechunk, include=FALSE,echo=TRUE,cache=TRUE>>=
x <- 3
y <- 4
print(x + y)
@
\section{General Discussion}
\bibliographystyle{apacite}
\bibliography{references}
\end{document}
Happy dynamic report creation.
## 2 thoughts on “R, RStudio, knitr, apa6, citations and LaTeX minimal working example”
1. Juan says:
Hi there!
I am trying to write a paper in my R-studio session with Sweave. I followed your example and I got the same result:
|
2018-06-19 14:04:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7890018224716187, "perplexity": 11524.931908630273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863043.35/warc/CC-MAIN-20180619134548-20180619154548-00133.warc.gz"}
|
https://math.stackexchange.com/questions/2387130/evaluate-int-0-pi-frac-ln-left1-cos-theta-right-cos-theta-d-theta
|
# Evaluate $\int_0^\pi\frac{\ln\left(1+\cos\theta\right)}{\cos\theta}\,d\theta$
The problem is to evaluate:
$$\int_{0}^{\pi}{\left(\frac{\ln{\left(1+\cos{\theta}\right)}}{\cos{\theta}}\,d\theta\right)}$$
An estimate for the integral is $4.9348022$.
There is a similarity between this integral and the dilogarithm function, which is defined by:
$$\operatorname{Li}_2(z):=-\int_{0}^{z}{\left(\frac{\ln{\left(1-t\right)}}{t}\,dt\right)}$$
but I am not sure how to use this effectively.
In addition, there are two singularities in the interval of integration: one singularity when $\theta\to\pi$ and the integrand increases without bound, and one removable 'hole' at $\theta=\pi/2$, where the limit of the value of the integrand is $1$.
Integration by parts does not seem to simplify the integral. I also tried some substitutions, such as $x=\cos{\theta}$:
$$\int_{-1}^{1}{\left(\frac{\ln{\left(1+x\right)}}{x\sqrt{1-x^2}}\,dx\right)}$$
which brings it closer to the dilogarithm form. The Weierstrass substitution $x=\tan{\left(\theta/2\right)}$ gives:
$$\int_{0}^{\infty}{\left(\frac{2}{1-x^2}\cdot\ln{\left(\frac{2}{1+x^2}\right)}\,dx\right)}$$
Any ideas? Thanks!
• Since the integral is symmetric, you can integrate from $-\pi/2$ to $\pi/2$ by multiplying the integral with $1/2$. Then you can substitute $e^{i\theta}$ and use the residue theorem. – user424862 Aug 8 '17 at 20:18
• Numerically, $4.9348022 \approx \pi^2 / 2$, which should give some hints. – nbubis Aug 8 '17 at 20:45
• – Nosrati Aug 8 '17 at 21:03
Hint: Making use of symmetry and the tangent half-angle substitution, we find
\begin{align} \mathcal{I} &=\int_{0}^{\pi}\mathrm{d}\theta\,\frac{\ln{\left(1+\cos{\left(\theta\right)}\right)}}{\cos{\left(\theta\right)}}\\ &=\int_{0}^{\frac{\pi}{2}}\mathrm{d}\theta\,\frac{\ln{\left(1+\cos{\left(\theta\right)}\right)}}{\cos{\left(\theta\right)}}+\int_{\frac{\pi}{2}}^{\pi}\mathrm{d}\theta\,\frac{\ln{\left(1+\cos{\left(\theta\right)}\right)}}{\cos{\left(\theta\right)}}\\ &=\int_{0}^{\frac{\pi}{2}}\mathrm{d}\theta\,\frac{\ln{\left(1+\cos{\left(\theta\right)}\right)}}{\cos{\left(\theta\right)}}-\int_{0}^{\frac{\pi}{2}}\mathrm{d}\theta\,\frac{\ln{\left(1-\cos{\left(\theta\right)}\right)}}{\cos{\left(\theta\right)}};~~~\small{\left[\theta\mapsto\pi-\theta\right]}\\ &=\int_{0}^{\frac{\pi}{2}}\mathrm{d}\theta\,\frac{\ln{\left(\frac{1+\cos{\left(\theta\right)}}{1-\cos{\left(\theta\right)}}\right)}}{\cos{\left(\theta\right)}}\\ &=\int_{0}^{1}\mathrm{d}t\,\frac{2}{1+t^{2}}\cdot\frac{1+t^{2}}{1-t^{2}}\ln{\left(\frac{1}{t^{2}}\right)};~~~\small{\left[\tan{\left(\frac{\theta}{2}\right)}=t\right]}\\ &=-4\int_{0}^{1}\mathrm{d}t\,\frac{\ln{\left(t\right)}}{1-t^{2}}.\\ \end{align}
Just to finish David H's answer, since $\int_{0}^{1}t^{m}(-\log t)\,dt = \frac{1}{(m+1)^2}$ for any $m\in\mathbb{N}$, by expanding $\frac{1}{1-t^2}$ as $1+t^2+t^4+t^6+\ldots$ we get:
$$\int_{0}^{\pi}\log(1+\cos\theta)\frac{d\theta}{\cos\theta}=4\int_{0}^{1}\frac{-\log(t)}{1+t^2}=4\sum_{m=0}^{+\infty}\frac{1}{(2m+1)^2} = \color{blue}{\frac{\pi^2}{2}} \approx 4.9348022.$$
Addendum: it might be interesting to point out that the identity $$\sum_{n\geq 1}\frac{1}{n^2}=\sum_{n\geq 1}\frac{3}{n^2\binom{2n}{n}}$$ can be proved by applying the tangent half-angle substitution to a similar integral.
As a reference, please have a look at page 27 here.
• What is that document ? – Zaid Alyafeai Aug 9 '17 at 0:49
• @ZaidAlyafeai: they are my course notes, Zaid. The 2017 edition will start at October, I hope to bring some good and a fair amount of sharp tools to young students. – Jack D'Aurizio Aug 9 '17 at 1:20
• Nice work, wished there was a table of content. – Zaid Alyafeai Aug 9 '17 at 1:41
• @ZaidAlyafeai: I definitely have to add it, thanks for the suggestion ;) – Jack D'Aurizio Aug 9 '17 at 1:42
• Jack D'Aurizio: Very nice work ! – FDP Aug 9 '17 at 7:28
I know this is an old question with an accepted answer, but I was surprised to see no one coming up with the following solution by differentiation under the integral sign, (which is the most straightforward method for this integral IMHO) and so I thought it would be helpful to add this here:
$$I(a):=\int_0^\pi\frac{\ln(1+a\cos{\theta})}{\cos{\theta}}\,d\theta, a\in[0,1]$$
$$\frac{dI}{da}=\frac d{da}\int_0^\pi\frac{\ln(1+a\cos{\theta})}{\cos{\theta}}\,d\theta$$
$$=\int_0^\pi\frac1{\cos\theta}\frac\partial{\partial a}{\ln(1+a\cos{\theta})}d\theta$$
$$=\int_0^\pi\frac1{\cos\theta}{\frac{\cos\theta}{1+a\cos{\theta}}}d\theta$$
$$=\int_0^\pi\frac1{1+a\cos{\theta}}d\theta$$
$$=\frac\pi{\sqrt{1-a^2}}$$
$$\because I(0)=\int_0^\pi\frac{\ln(1)}{\cos\theta}\,d\theta=0,$$
$$\therefore I(a)=I(0)+\int_0^a\frac{dI}{da}da=\int_0^a\frac\pi{\sqrt{1-a^2}}dx=\pi\arcsin a$$
$$\boxed{\int_0^\pi\frac{\ln(1+\cos\theta)}{\cos\theta}\,d\theta=I(1)=\pi\arcsin 1=\frac{\pi^2}2}$$
So by differentiating under the integral sign, one is left only with the integral $$\int_0^\pi\frac{d\theta}{1+a\cos{\theta}}$$, which is much simpler to evaluate. Below is one solution by complex analysis. For anyone uncomfortable with complex analysis, the substitution $$t=\tan\frac\theta2$$ (and other symmetry/trigonometry tricks) will work as well.
$$J=\int_0^\pi\frac{d\theta}{1+a\cos\theta}$$
Substituting $$\theta\rightarrow2\pi-\theta, d\theta\rightarrow-d\theta$$,
$$J=-\int_{2\pi}^\pi\frac1{1+a\cos(2\pi-\theta)}d\theta=\int_\pi^{2\pi}\frac1{1+a\cos\theta}d\theta$$
$$\therefore 2J=\int_0^{2\pi}\frac1{1+a\cos\theta}d\theta\implies J=\frac1 2\int_0^{2\pi}\frac1{1+a\cos\theta}d\theta$$
$$J=\frac1 2\int_0^{2\pi}\frac1{1+a(\frac{e^{i\theta}+e^{-i\theta}}2)}d\theta=\int_0^{2\pi}\frac1{2+ae^{i\theta}+ae^{-i\theta}}d\theta$$
Substitute $$z=e^{i\theta},dz=ie^{i\theta}d\theta\implies d\theta=\frac{dz}{iz}$$
$$J=\oint_C \frac1{2+az+a/z}\frac{dz}{iz}=\frac1{ia}\oint_C\frac{dz}{z^2+(2/a)z+1}$$
where $$C$$ is the counterclockwise contour over the unit circle. By the residue theorem,
$$J=\frac1{ia}2\pi i\sum Res\frac1{z^2+(2/a)z+1}$$
$$\frac{-1+\sqrt{1-a^2}}a$$ is the only root of $$z^2+(2/a)z+1$$ within the unit circle, and its residue is $$\frac a{2\sqrt{1-a^2}}$$
$$\boxed{\therefore J=\int_0^\pi\frac{d\theta}{1+a\cos\theta}=\frac1{ia}\cdot2\pi i\cdot \frac a{2\sqrt{1-a^2}}=\frac\pi{\sqrt{1-a^2}}}$$
|
2019-08-26 09:10:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 24, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9959444999694824, "perplexity": 729.9213102987878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027331485.43/warc/CC-MAIN-20190826085356-20190826111356-00236.warc.gz"}
|
https://sajay.online/posts/fun_with_jax/
|
# Fun with Jax
Jax is a research project from Google aimed at high-performance machine learning research. It's been around for more than a year and is getting increasingly popular for research and prototyping. Quoting the developers, “Jax is a language for expressing and composing transformations of numerical programs.” Jax comes with transformations like jit, grad and vmap.
Let's explore ways to use these transformations in the context of optimization. Though grad was the first to catch my eyes, I have come to appreciate many other features of Jax apart from its automatic differentiation routines. In fact, we wouldn't use grad explicitly in this post and instead use its more flexible siblings like jacfwd, jacrev, jvp and vjp. Before getting to optimization, let's start small with iterations.
# using the following imports for the rest of the post
import numpy as onp # original numpy
import jax.numpy as np # a (mostly) drop-in replacement
from jax import jit, vmap, jacfwd, jacrev, jvp, vjp, put_device, random
## Iterations
$$10 x_{n+1} = x^2_{n}+21$$
Iterations offer a simple numerical method to converge at a solution of interest. The shown iteration converges either at 3 or 7, the roots of $10 x_{c} = x^2_{c}+21$. The final value is sensitive to the starting point of the iteration. The fixed points can be classified as attracting or repelling and the basin of attraction can be found using calculus. But, let's find that using an experiment and see how the function transformation jit could help. Iterating using vanilla numpy,
def iterfun(start):
for _ in range(500):
start = (start**2+21)/10
return start
start = onp.arange(-9, 10, 0.5)
%timeit -n 1000 iterfun(start)
1.74 ms per loop
Though we use numpy, the actual iterations are carried out using a for loop in python, which is a known performance bottleneck. Jax offers just-in-time compilation using the function transformation jit, which (in this case) compiles the for loop by statically unrolling it and making it faster. Further, the function is compiled to use an accelerator (GPU/TPU) if available. So, in some ways, Jax is numpy on GPU!
fast_iterfun = jit(iterfun)
# compilation happens when fast_iterfun is called for the first time
start = np.arange(-9, 10, 0.5)
# ensure that variable is in accelerator (GPU/TPU) using device_put()
start = device_put(start) # normally not needed
# block until ready gives the real time taken (waits despite asynchronous dispatch)
201 µs per loop
As expected Jax is much faster. However, note that compile time for jit is not included in the execution time, and it's prudent to be mindful of the trade-off between time spent compiling a function and time saved by repeatedly calling it! Plotting the values after $500$ iterations against the starting values,
It can be seen that $3$ is attracting with the basin of attraction $(-7, 7)$ and $7$ is repelling. Every other starting point diverges. However, this is not the only possible behaviour. There could also be iterations with no convergence and divergence. Instead, they have cycles, cantor sets and chaos like in the following family of iterations, which Gilbert Strang claims to be the most famous quadratic iteration in the world (when $a=4$).
$$x_{n+1} = ax_{n} - ax^2_{n}$$
To analyze the effects of $a$, let's plot all values from $x_{1001}$ to $x_{2000}$ for different values of $a$ starting with $x_0=0.5$. It's easy to build the required output for a given value of $a$ using python lists and loops as follows
def iterfun(start, a):
result = []
for _ in range(1000):
start = a*start - a*start**2
for i in range(1000):
start = a*start - a*start**2
result.append(start)
return np.array(result)
%timeit iterfun(0.5, 3.4)
354 ms per loop
This is slow. Once again jit to the rescue!
fast_iterfun = jit(iterfun)
681 µs per loop
This is fast, but our compiled function can only handle one value of $a$. Fortunately, Jax provides a way to vectorize the function without rewriting any of the original function manually. The answer is vmap. Instead of using a naive outer loop over the existing function, vmap pushes the loop down to the primitive operations even after transforming the function with jit. In fact, Jax supports arbitrary compositions of its transformations.
vec_iterfun = vmap(lambda a : fast_iterfun(0.5 ,a))
a_vector = np.arange(3.4, 4.0, 0.001)
a_vector = device_put(a_vector)
%timeit vec_iterfun(a_vector)
# reshaping for plotting
plot_y = vec_iterfun(a_vector).reshape((-1))
plot_x = a_vector.repeat(1000)
626 µs per loop
The increase in time doesn't even seem to be statistically significant! The vectorized version gives all the data we need to plot (except reshaping) in a single call, with no appreciable increase in waiting time.
This plot shows the effects of $a$ on convergence (or the absence of it). It looks cool! But, let's stop here and pivot to the applications of iteration in optimization to see how Jax can further help us.
## Optimization
Calculus helps us express optimization as a solution to the equation $f^{\prime}(x)=0$ and iterations like the following gives a numerical method to find the solution.
\begin{align} s(x_{n+1}-x_{n}) &= f^{\prime}(x_{n}) \\
\text{at convergence}\qquad s(x_{c}-x_{c}) &= f^{\prime}(x_{c}) = 0 \end{align}
Earlier iterations didn't have $s$, which can be thought as a step size for the iteration. The sign of $s$ helps us steer the iteration towards maximum (or minimum) and this iteration is just good old gradient ascent (or descent). The goal here is to find the best $s$. Let's start by subtracting the two equations.
\begin{align} s(x_{n+1}-x_{c}) &= s(x_{n}-x_{c}) + f^{\prime}(x_{n}) - f^{\prime}(x_{c}) \\
\text{using linear approximation at }x_{c} \qquad s(x_{n+1}-x_{c}) &= (s + f^{\prime\prime}(x_{c})) (x_{n}-x_{c}) \end{align}
Thus, when $s=-f^{\prime\prime}(x_{c})$, then $x_{n+1}=x_{c}$ i.e, the iteration converges in a single step. But, we don't know $x_{c}$ and the whole exercise is to find it. Instead of $f^{\prime\prime}(x_{c})$, Newton approximated it with $f^{\prime\prime}(x_{n})$ and ran the iterations as follows and this became Newton's method.
$$-f^{\prime\prime}(x_{n})(x_{n+1}-x_{n}) = f^{\prime}(x_{n})$$
When $f$ is a scalar valued function that takes a vector $\mathbf{x}$, we have the following, where $J$ is the Jacobian vector and $H$ is the Hessian matrix
$$H\cdot\Delta\mathbf{x} = J$$
The intuition is simple, $\Delta\mathbf{x}$ is selected such that the directional derivative is equal to the value of the Jacobian so that every dimension of the Jacobian can be made 0 in just one step! Please be mindful of the earlier linear approximation - this one step convergence happens only when the underlying function is quadratic. Let's check this update with the following quadratic function with the maximum at $(3, 7)$.
def quad_fun(x):
x = x - np.array([3, 7])
return -1 * (x.T @ np.array([[3, 2], [2, 7]]) @ x - 2)
Let's compute the single converging update from $(0, 0)$. As $f : \mathbb{R}^n \rightarrow \mathbb{R}$ is a scalar valued function, its Jacobian can be efficiently found using reverse-mode automatic differentiation. Now, the Jacobian $J : \mathbb{R}^n \rightarrow \mathbb{R}^n$ is a vector valued function and its derivative can be (slightly more efficiently) found using forward mode automatic differentiation. Just like the other transformation in Jax, these two modes can be arbitrarily composed with one another to find the Hessian matrix.
# naive function to solve linear equation Hx = J
def naive_solve(start):
H = jacfwd(J)
return np.linalg.inv(H(start)) @ J(start)
# compile to make the function fast
naive_solve = jit(naive_solve)
start = np.array([0, 0], dtype='float32')
# negative update as we maximize
update = -1 * naive_solve(start)
print(update)
% timeit naive_solve(start)
[3. 7.0000005]
123 µs per loop
This worked as expected. But, we explicitly calculated the Hessian matrix and inverted it. Hessian of the parameters in bigger problems (like neural networks) would be huge. Explicitly calculating it is memory intensive and inverting it would be way too costly. Instead, let's solve the system of equations $H\cdot\Delta\mathbf{x} = J$ using conjugate gradients, which avoid explicitly calculating the Hessian and use low level Jax functions like jvp to find the directional derivative. jvp is a highly optimized function to calculate the product of a Jacobian and a tangents vector, evaluated at a point without explicitly finding the Jacobian.
# better function to solve linear equation Hx = J
def cg_solve(point, ndim):
# point = point at which to evaluate J and H
H = lambda x: jvp(J, (point,), (x,))[1]
x = np.zeros(ndim)
b = J(point)
r = b - H(x)
p = r
# search in ndim conjugate directions
for _ in range(ndim):
Ap = H(p)
rr = r.T@r
alpha = rr/(p.T@Ap)
x = x + alpha*p
r = r - alpha*Ap
beta = r.T@r/(rr)
p = r + beta*p
return x
# compile with a fixed dimension size
fast_solve = jit(lambda x: cg_solve(x, 2))
start = np.array([0, 0], dtype='float32')
# negative update as we maximize
update = -1 * fast_solve(start)
print(update)
% timeit fast_solve(start)
[3. 7.0000005]
146 µs per loop
This gives the same result without explicitly calculating the hessian and inverting it. It appears to be slightly slower, but the advantages would become apparent in bigger problems.
### To ensure reproducibility
The code was run in colab using a GPU runtime. The notebook can be found here. The package versions used are listed below
• jax 0.1.52
• jaxlib 0.1.36
• numpy 1.17.5
|
2021-06-19 20:35:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8028263449668884, "perplexity": 1685.6623415438028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487649731.59/warc/CC-MAIN-20210619203250-20210619233250-00569.warc.gz"}
|
http://mathoverflow.net/revisions/3455/list
|
4 More clarification
For (suitable) real- or complex-valued functions f and g on a (suitable) abelian group G, we have two bilinear operations: multiplication -
(f.g)(x) = f(x)g(x),
and convolution -
(f*g)(x) = ∫y+z=xf(y)g(z)
Both operations define commutative ring structures (possibly without identity) with the usual addition. (For that to make sense, we have to find a subset of functions that is closed under addition, multiplication, and convolution. If G is finite, this is not an issue, and if G is compact, we can consider infinitely differentiable functions, and if G is Rd, we can consider the Schwarz class of infinitely differentiable functions that decay at infinity faster than all polynomials, etc. As long as our class of functions doesn't satisfy any additional nontrivial algebraic identities, it doesn't matter what it is precisely.)
My question is simply: do these two commutative ring structures satisfy any additional nontrivial identities?
A "trivial" identity is just one that's a consequence of properties mentioned above: e. g., we have the identity
f*(g.h) = (h.g)*f,
but that follows from the fact that multiplication and convolution are separately commutative semigroup operations.
Edit: to clarify, an "algebraic identity" here must be of the form "A(f1, ... fn) = B(f1, ..., fn)," where A and B are composed of the following operations:
• negation
• multiplication
• convolution
(Technically, a more correct phrasing would be "for all f1, ..., fn: A(f1, ... fn) = B(f1, ..., fn)," but the universal quantifier is always implied.) While it's true that the Fourier transform exchanges convolution and multiplication, that doesn't give valid identities unless you could somehow write the Fourier transform as a composition of the above operations, since I'm not giving you the Fourier transform as a primitive operation.
Edit 2: Apparently the above is still pretty confusing. This question is about identities in the sense of universal algebra. I think what I'm really asking for is the variety generated by the set of abelian groups endowed with the above five operations. Is it different from the variety of algebras with 5 operations (binary operations +, *, .; unary operation -; nullary operation 0) determined by identities saying that (+, -, 0, *) and (+, -, 0, .) are commutative ring structures?
3 Clarification
For (suitable) real- or complex-valued functions f and g on a (suitable) abelian group G, we have two bilinear operations: multiplication -
(f.g)(x) = f(x)g(x),
and convolution -
(f*g)(x) = ∫y+z=xf(y)g(z)
Both operations define commutative ring structures (possibly without identity) with the usual addition. (For that to make sense, we have to find a subset of functions that is closed under addition, multiplication, and convolution. If G is finite, this is not an issue, and if G is compact, we can consider infinitely differentiable functions, and if G is Rd, we can consider the Schwarz class of infinitely differentiable functions that decay at infinity faster than all polynomials, etc. As long as our class of functions doesn't satisfy any additional nontrivial algebraic identities, it doesn't matter what it is precisely.)
My question is simply: do these two commutative ring structures satisfy any additional nontrivial identities?
A "trivial" identity is just one that's a consequence of properties mentioned above: e. g., we have the identity
f*(g.h) = (h.g)*f,
but that follows from the fact that multiplication and convolution are separately commutative semigroup operations.
Edit: to clarify, an "algebraic identity" here must be of the form "A(f1, ... fn) = B(f1, ..., fn)," where A and B are composed of the following operations:
• negation
• multiplication
• convolution
(Technically, a more correct phrasing would be "for all f1, ..., fn: A(f1, ... fn) = B(f1, ..., fn)," but the universal quantifier is always implied.) While it's true that the Fourier transform exchanges convolution and multiplication, that doesn't give valid identities unless you could somehow write the Fourier transform as a composition of the above operations, since I'm not giving you the Fourier transform as a primitive operation.
2 corrected the typo
For (suitable) real- or complex-valued functions f and g on a (suitable) abelian group G, we have two bilinear operations: multiplication -
(f.g)(x) = f(x)g(x),
and convolution -
(f*g)(x) = ∫y+z=xf(y)g(z)
Both operations define commutative ring structures (possibly without identity) with the usual addition. (For that to make sense, we have to find a subset of functions that is closed under addition, multiplication, and convolution. If G is finite, this is not an issue, and if G is compact, we can consider infinitely differentiable functions, and if G is Rd, we can consider the Schwarz class of infinitely differentiable functions that decay at infinity faster than all polynomials, etc. As long as our class of functions doesn't satisfy any additional nontrivial algebraic identities, it doesn't matter what it is precisely.)
My question is simply: do these two commutative ring structures satisfy any additional nontrivial identities?
A "trivial" identity is just one that's a consequence of properties mentioned above: e. g., we have the identity
f*(g.h) = (h.g)*h,h.g)*f,
but that follows from the fact that multiplication and convolution are separately commutative semigroup operations.
1
|
2013-05-20 22:15:01
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.924150288105011, "perplexity": 308.34667330359787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00092-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://meta.stackoverflow.com/questions/tagged?tagnames=bug%2B-status-completed%2B-status-declined%2B-status-deferred%2B-status-bydesign&sort=votes
|
# Tagged Questions
2k views
### I am not a robot!
Some days ago, the spam protection was turned up a notch. Unfortunately, it overshot the target by far. Now, as soon as I make two edits in close succession (which happens often – I’m sloppy), I am ...
828 views
### It should be possible to submit improvements to a suggested edit even when the suggestion is approved during editing
I was reviewing a suggested edit and decided that while the edit was incorrect, there were things to improve about the post. So I clicked “Improve” and set about my business. When I tried to submit ...
738 views
### Make Stack Exchange sites compatible to OS X Lion full screen mode
Introduction Mac OS X Lion has a full screen feature that allows use of a single application window in full screen, i.e. without window title bar and constantly visible menu bar, on its own virtual ...
617 views
### Moderator diamond is too large relative to other text on Android
This really is atrocious to look at. We understand appearances are everything, but there's no need to magnify them twice. We understand you need to show these off in public — after all, that's ...
196 views
### If users with <20 rep can't chat, why does SO nag me to move comment threads to chat? [duplicate]
Possible Duplicate: Disable chat migration notification if one of the users has insufficient rep? I frequently find myself helping new users with <20 rep with questions. These users need ...
698 views
### Profile picture cannot have transparent background
I never liked that SO/SE relied on the external gravatar service, so I was glad to read that it doesn't any more. However, it turns out that one cannot use as profile picture a PNG file with ...
497 views
### The cardiologist can't tell if my heart is beating!
I have found a... strange, to say the least, error on Stack Overflow. I was just browsing normally and I decided to open the error console, and I saw this: Uncaught TypeError: Cannot call method ...
211 views
### Can't tell who I voted for with the new button styling
When we had the old and boring button styles, voting for someone in an election would put a border around the vote used. That doesn't happen anymore with the hawt new button style. I know the ...
358 views
### The “Hot” tab algorithm changed recently, does not appear to be working as intended
Recently I have seen a lot of older high view count questions on the Hot tab. Was the algorithm recently changed? Before I used to see questions that had a lot of activity (many updates to Q's or ...
371 views
### How did this question bypass the dupe-title filter and the question rate-limit?
Here are the two questions: (10k only - since both are now deleted) DVD Sorting Program for C++ Beginners (Help Needed) DVD Sorting Program for C++ Beginners (Help Needed) The titles are exactly ...
239 views
### My edits got rejected because I edited at the same time as another user, now I can't make edits [duplicate]
When I edit a post, same time the other user also change the post, his post is accepted and my edit is rejected. But both of the edit is same in some times. I lost the edit privileged for an week? ...
320 views
### Would you like to automatically move this discussion to chat? Yes Please! Well you can't, so there? [duplicate]
Possible Duplicate: Disable chat migration notification if one of the users has insufficient rep? This has happened to me twice this week and is annoying... Please avoid extended ...
5k views
### Clarify “no longer accepting questions from this account” error [duplicate]
Possible Duplicate: Better explanation when account is blocked A few months ago SO started blocking questions from users with a history of poor questions; users get the error message: ...
109 views
### Badge activity roll-up on profile page isn't consistent
If a user is awarded the same badge more than once in quick succession, the profile's activity tab will show each badge if the "badges" filter is selected... ...but the "all" filter will roll them ...
280 views
### Sock-puppet attack on GameDev Stack Exchange - Exploits!
We've been having a sock-puppet attack on the GameDev Stack Exchange site over the last few days. I am not fully aware of the details but there seem to be two exploits being used: First - the ...
223 views
### The legend on user's tag badges page doesn't match the descriptions
The descriptions state that you need 100/400/1000 upvotes in a tag, while the legend says you need 100/400/1000 total score. The tooltip on the tag badge itself needs fixing as well.
277 views
### where's the LaTeX love gone? [closed]
A few sites across the network where we used to have LaTeX enabled, aren't displaying LaTeX currently: economics: Marginal cost and benefit maths: Absolute maximum value of $\sin^2(x)-\sin(x)$ in ...
205 views
### Duplicate question close reason says “already has an answer,” which is often false
I asked a question here on MSO today, but it was found to have been asked around a month ago. I'm not concerned by this (better to be found by more than one user) but, I noticed that the wording in ...
214 views
### My chat tags are borderless
When I post a tag into chat, borders on my tags are missing, but other's tags have them: also: inspecting the page source, the style attribute for my tags are missing: normal tag <span ...
144 views
### Unable to retroactively add edit summary during edit window
It often occurs to me to add an edit summary to a question-edit immediately after submitting the edit, especially if the changes aren't obvious (formatting etc.). So I: Click "edit" again. Fill in ...
274 views
### How do I delete or flag comments using the mobile website?
While browsing the site on my Android mobile, I found that there is no option to delete comments through the mobile website. In order to take screenshots, I recreated the issue using the mobile link ...
146 views
### Images can be pushed outside the boundaries of a post by using nested lists
A B C D E F I don't even know. It just seemed weird. Yes, I understand I could just avoid doing something like that. But the problem already exists with a single level. Consider ...
243 views
### Don't exclude results because of quoted punctuation
The new search allows quoted queries that include punctuation, and this is fantastic, most of the time. However, the search considers the punctuation as part of the word itself, so searches for "a ...
498 views
### Tabs Flicker when Mouse Over Top Edge
If I go to the Questions page and hold the mouse over the top edge of one of the tabs (other than the one currently displayed) the tab will flicker wildly. This is Firefox 3 in Ubuntu.
252 views
144 views
### User views on Data Explorer is wrong or extremely out-of-date
I noticed that the Users.Views field for every single active user is extremely out of date. It is even zero/missing for some (not too new) users. Example: ...
161 views
### Discrepancy in flag number in chat
Well, which is it? One flag or no flags? I do have the all tab selected, so that's not the problem. The flag circle in chat itself is unaffected, only the flags tab and the top bar of chat.SE/MSO/SO ...
259 views
### Background in OP's user name can obscure text in multiline comments
That code actually says has_identity, but the underscore is obscured by the background of the OP's username in Chrome. Image is from this answer, if anyone wants to see it in action.
189 views
### Expanding an event group shows more events than labeled when there are new events
I have six upvotes for the one question with no intervening history. They have been summarised into two groups - one with 4 events and one with 2 events. However, when you expand either, both show all ...
348 views
### How come it is showing Diamond to my account in moderator private message?
I just got a Moderator Private Message from Super User's Moderator Team. http://superuser.com/users/message/629?noredirect=1#629 (For Super User moderators only) Because I failed last few reviews I ...
234 views
### 10k Deleted List not showing full range
The Recent Occurrences:Recently Deleted list in the 10k tools won't show all the posts in the chosen date range. It appears to limit out at 45 posts (with no pagination).
239 views
### Retagging without the edit privilege now requires an edit summary
The retag button is gone and only edit remains. This has made a few negative changes for users with the retag privilege but not the edit privilege. It is now necessary to enter an edit summary even ...
279 views
### Editing a question title with the term “problem” [duplicate]
Possible Duplicate: Using the word “problem” in titles Let users with sufficient reputation use “problem” in titles I found this question: How do I resolve the ...
208 views
### The “edit tags” link does not appear after editing a question inline
After editing a question inline, the "edit tags" link does not appear when hovering over the spot it normally appears: Which is confirmed by looking at the DOM: (Also, the "title" attribute of ...
832 views
### Cannot view spoilers on iPad when logged in
I am viewing this question on my iPad: Is there an explanation for Amy Pond's selective remembering of people? How do I view the spoilers on the question? There is no way for me to "hover over" ...
146 views
### How did this circular duplicate come about?
Usually we're presented with an error if a closure would result in the creation of a loop: But somehow this question has been closed as a duplicate of this question which is a duplicate of this ...
192 views
188 views
### Suggested edit gained 4 reputation points
I recently suggested an edit to a question on StackOverflow. This edit got approved, but I received 4 reputation points for that suggestion. Shouldn't that be just two points - or am I missing ...
115 views
### Vandalism flags should only be manually dismissed
We have this weird situation where a user is deleting their posts for the past couple of days. That's fine. Whenever a user goes on self-delete mode, the system flags one of the other posts with links ...
224 views
### I think I may have broken the rep cap
I just noticed a drop in my reputation. It turns out that a rather unconstructive post was deleted and took a fair amount of reputation I earned with it. Given the amount of reputation I thought to ...
202 views
### Question garnered six offensive flags but was not deleted?
I came across this post*, which had managed to gather six offensive flags... Normally, this would result in the question being locked and deleted by the Community user. However, while it was locked, ...
479 views
### Markdown handles inline bold text (within a word) incorrectly
When I bold text inline like *strong*text it gets parsed as italics instead of either bold or nothing. I created the above with the following line: **strong**text As I understand it, inline ...
197 views
### New tags page mangles tag wiki excerpts
With the tag page changed, there are some.. let's call them "controversial" opinions whether it's an improvement or not; I'd rather not comment on that, but point out a certain "feature" in the tag ...
200 views
183 views
### Whoa! The search results just gave me a book!
So, the latest question here on Meta prompted me to do a quick search for "code review". Everything looked all fine and dandy until I switched to the votes tab, where I was prompted with this nice ...
133 views
### Close as a duplicate on mobile: Lots of Vertical Scrolling
Attempting to use the site with a Nexus 4. I came across a question that was a duplicate, so I found the dupe and voted to close. The only problem: the button is at the bottom of the modal. While ...
177 views
### Question 72394 has two accepted answers [closed]
The following question has two accepted answers. It appears that the one with fewer votes somehow became undeleted. What should a developer know before building a public web site? Related SU ...
235 views
### The Way of the Editless Edit: why could this user do a blank edit
There's two suggested edits here.. and the difference is obvious - there is no edit. While it was rejected, i see a comment but no actual edit. Is this an odd glitch, or something else? EDIT: ...
|
2013-06-19 22:57:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5624065399169922, "perplexity": 3552.3273697660147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709379061/warc/CC-MAIN-20130516130259-00069-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://brilliant.org/problems/three-squares/
|
# Three squares
Let $$S = \{n = a^2+b^2+c^2 \ \vert \ (a,b,c) \in \mathbb{Z}^{3} \}$$. Is $$S$$ closed under multiplication?
×
|
2018-01-17 01:01:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.853473424911499, "perplexity": 13527.512893448855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886792.7/warc/CC-MAIN-20180117003801-20180117023801-00145.warc.gz"}
|
https://www.slideserve.com/tad-butler/relations
|
Relations
1 / 60
# Relations - PowerPoint PPT Presentation
Relations. Section 9.1, 9.3—9.5 of Rosen Spring 2012 CSCE 235 Introduction to Discrete Structures Course web-page: cse.unl.edu/~cse235 Questions: Piazza. Outline. Relation: Definition, representation, relation on a set Properties
I am the owner, or an agent authorized to act on behalf of the owner, of the copyrighted work described.
Download Policy: Content on the Website is provided to you AS IS for your information and personal use and may not be sold / licensed / shared on other websites without getting consent from its author.While downloading, if for some reason you are not able to download a presentation, the publisher may have deleted the file from their server.
- - - - - - - - - - - - - - - - - - - - - - - - - - E N D - - - - - - - - - - - - - - - - - - - - - - - - - -
Presentation Transcript
### Relations
Section 9.1, 9.3—9.5 of Rosen
Spring 2012
CSCE 235 Introduction to Discrete Structures
Course web-page: cse.unl.edu/~cse235
Questions: Piazza
Outline
• Relation:
• Definition, representation, relation on a set
• Properties
• Reflexivity, symmetry, antisymmetric, irreflexive, asymmetric
• Combining relations
• , , \, composite of relations
• Representing relations
• 0-1 matrices, directed graphs
• Closure of relations
• Reflexive closure, diagonal relation, Warshall’s Algorithm,
• Equivalence relations:
• Equivalence class, partitions,
Introduction
• A relation between elements of two sets is a subset of their Cartesian products (set of all ordered pairs
• Definition: A binary relation from a set A to a set B is a subset R AB ={ (a,b) | a A, b B }
• Relation versus function
• In a relation, each aA can map to multiple elements in B
• Relations are more general than functions
• When (a,b)R, we say that a is related to b.
• Notation: aRb, aRb \$aRb\$, \$a\notR b\$
Relations: Representation
• To represent a relation, we can enumerate every element of R
• Example
• Let A={a1,a2,a3,a4,a5} and B={b1,b2,b3}
• Let R be a relation from A to B defined as follows
R={(a1,b1),(a1,b2),(a1,b3), (a2,b1),(a3,b1),(a3,b2),(a3,b3),(a5,b1)}
• We can represent this relation graphically
A
B
a1
b1
a2
b2
• Graphical representation
• Bipartite
• Directed
• Graph
a3
b3
a4
a5
Relations on a Set
• Definition: A relation on the set A is a relation from A to A and is a subset of AA
• Example
The following are binary relations on N
R1={ (a,b) | a b }
R2={ (a,b) | a,b N, a/b Z }
R3={ (a,b) | a,b N, a-b=2 }
• Question
For each of the above relations, give some examples of ordered pairs (a,b) N2 that are not in the relation
Properties
• We will study several properties of relations
• Reflexive
• Symmetric
• Transitive
• Antisymmetric
• Asymmetric
• Alert: Those properties are defined for only relations on a set
Properties: Reflexivity
• In a relation on a set, if all ordered pairs (a,a) for every aA appears in the relation, R is called reflexive
• Definition: A relation R on a set A is called reflexive iff
aA (a,a)R
Reflexivity: Examples
• Recall the relations below, which is reflexive?
R1={ (a,b) | a b }
R2={ (a,b) | a,bN, a/bZ }
R3={ (a,b) | a,bN, a-b=2 }
• R1 is reflexive since for every aN, a a
• R2 is reflexive since a/a=1 is an integer
• R3 is not reflexive since a-a=0 for every aN
Properties: Symmetry
• Definitions
• A relation R on a set A is called symmetric if
a,b A ( (b,a)R (a,b)R )
• A relation R on a set A is called antisymmetric if
a,b A [ (a,b)R (b,a)R a=b]
Symmetry versus Antisymmetry
• In a symmetric relation aRb bRa
• In an antisymmetric relation, if we have aRb and bRa hold only when a=b
• An antisymmetric relation is not necessarily a reflexive relation: it may be reflexive or not
• A relation can be
• both symmetric and antisymmetric
• or neither
• or have one property but not the other
Symmetric Relations: Example
• Consider R={(x,y)R2|x2+y2=1}, is R
• Reflexive?
• Symmetric?
• Antisymmetric?
• R is not reflexive since for example (2,2)R2
• R is symmetric because
• x,yR, xRyx2+y2=1 y2+x2=1 yRx
• R is not antisymmetric because (1/3,8/3)R and (8/3,1/3)R but 1/38/3
Properties: Transitivity
• Definition: A relation R on a set A is called transitive
• if whenever (a,b)R and (b,c)R
• then (a,c)R for all a,b,c A
a,b,c A ((aRb)(bRc)) aRc
Transitivity: Examples (1)
• Is the relation R={(x,y)R2| xy} transitive?
• Is the relation R={(a,b),(b,a),(a,a)} transitive?
Yes, it is transitive because xRy and yRz xy and yz xz xRz
No, it is not transitive because bRa and aRb but bRb
Transitivity: Examples (2)
• Is the relation {(a,b) | a is an ancestor of b} transitive?
• Is the relation {(x,y)R2| x2y} transitive?
Yes, it is transitive because aRb and bRc a is an ancestor of b and b is an ancestor of c a is an ancestor of c aRc
No, it is not transitive because 2R4 and 4R10 but 2R10
More Properties
• Definitions
• A relation on a set A is irreflexive iff aA (a,a)R
• A relation on a set A is asymmetric iff
a,bA ( (a,b)R (b,a) R )
• Lemma: A relation R on a set A is asymmetric iff
• R is irreflexive and
• R is antisymmetric
A relation that is not symmetric is not necessarily asymmetric
Outline
• Relation:
• Definition, representation, relation on a set
• Properties
• Reflexivity, symmetry, antisymmetric, irreflexive, asymmetric
• Combining relations
• , , \, composite of relations
• Representing relations
• 0-1 matrices, directed graphs
• Closure of relations
• Reflexive closure, diagonal relation, Warshall’s Algorithm,
• Equivalence relations:
• Equivalence class, partitions,
Combining Relations
• Relations are simply… sets (of ordered pairs); subsets of the Cartesian product of two sets
• Therefore, in order to combine relations to create new relations, it makes sense to use the usual set operations
• Intersection (R1R2)
• Union (R1R2)
• Set difference (R1\R2)
• Sometimes, combining relations endows them with the properties previously discussed. For example, two relations may be not transitive, but their union may be
Combining Relations: Example
• Let
• A={1,2,3,4}
• B={1,2,3,4}
• R1={(1,2),(1,3),(1,4),(2,2),(3,4),(4,1),(4,2)}
• R2={(1,1),(1,2),(1,3),(2,3)}
• Let
• R1 R2=
• R1 R2 =
• R1 \ R2 =
• R2 \ R1 =
Composite of Relations
• Definition: Let R1 be a relation from the set A to B and R2 be a relation from B to C, i.e.
R1 AB and R2BC
the composite ofR1 and R2 is the relation consisting of ordered pairs (a,c) where aA, cC and for which there exists an element bB such that (a,b)R1 and (b,c)R2. We denote the composite of R1 and R2 by
R2R1
Powers of Relations
• Using the composite way of combining relations (similar to function composition) allows us to recursively define power of a relation R on a set A
• Definition: Let R be a relation on A. The powersRn, n=1,2,3,…, are defined recursively by
R1 = R
Rn+1 = Rn R
Powers of Relations: Example
• Consider R={(1,1),(2,1),(3,2),(4,3)}
• R2=
• R3=
• R4=
• Note that Rn=R3 for n=4,5,6,…
Powers of Relations & Transitivity
• The powers of relations give us a nice characterization of transitivity
• Theorem: A relation R is transitive if and only if Rn R for n=1,2,3,…
Outline
• Relation:
• Definition, representation, relation on a set
• Properties
• Reflexivity, symmetry, antisymmetric, irreflexive, asymmetric
• Combining relations
• , , \, composite of relations
• Representing relations
• 0-1 matrices, directed graphs
• Closure of relations
• Reflexive closure, diagonal relation, Warshall’s Algorithm,
• Equivalence relations:
• Equivalence class, partitions,
Representing Relations
• We have seen one way to graphically represent a function/relation between two (different) sets: Specifically as a directed graph with arrows between nodes that are related
• We will look at two alternative ways to represent relations
• 0-1 matrices (bit matrices)
• Directed graphs
0-1 Matrices (1)
• A 0-1 matrix is a matrix whose entries are 0 or 1
• Let R be a relation from A={a1,a2,…,an} and B={b1,b2,…,bn}
• Let’s impose an ordering on the elements in each set. Although this ordering is arbitrary, it is important that it remain consistent. That is, once we fix an ordering, we have to stick to it.
• When A=B, R is a relation on A and we choose the same ordering in the two dimensions of the matrix
0-1 Matrix (2)
• The relation R can be represented by a (nm) sized 0-1 matrix MR=[mi,j] as follows
• Intuitively, the (i,j)-th entry if 1 if and only if aiA is related to biB
0-1 Matrix (3)
• An important note: the choice of row-major or column-major form is important.
• The (i,j)th entry refers to the i-th row &the j-th column.
• The size, (nm), refers to the fact that MR has n rows and m columns
• Though the choice is arbitrary, switching between row-major and column-major is a bad idea, because when AB, the Cartesian Product AB BA
• In matrix terms, the transpose, (MR)T does not give the same relation. This point is moot for A=B.
Matrix Representation: Example
• Consider again the example
• A={a1,a2,a3,a4,a5} and B={b1,b2,b3}
• Let R be a relation from A to B as follows:
R={(a1,b1),(a1,b2),(a1,b3),(a3,b1),(a3,b2),(a3,b3),(a5,b1)}
• Give MR
• What is the size of the matrix?
Using the Matrix Representation (1)
• A 0-1 matrix representation makes it very easy to check whether or not a relation is
• Reflexive
• Symmetric
• Antisymmetric
• Reflexivity
• For R to be reflexive, a (a,a)R
• In MR, R is reflexive iff mi,i=1 for i=1,2,…,n
• We check only the diagonal
Using the Matrix Representation (2)
• Symmetry
• R is symmetric iff for all pairs (a,b) aRbbRa
• In MR, this is equivalent to mi,j=mj,i for every pair i,j=1,2,…,n
• We check that MR=(MR)T
• Antisymmetry
• R is antisymmetric if mi,j=1 with ij, then mj,i=0
• Thus, i,j=1,2,…, n, ij (mi,j=0) (mj,i=0)
• A simpler logical equivalence is
i,j=1,2,…, n, ij ((mi,j=1) (mj,i=1))
Matrix Representation: Example
• Is R reflexive? Symmetric? Antisymmetric?
• Clearly R is not reflexive: m2,2=0
• It is not symmetric because m2,1=1, m1,2=0
• It is however antisymmetric
Matrix Representation: Combining Relations
• Combining relations is also simple: union and intersection of relations are nothing more than entry-wise Boolean opertions
• Union: An entry in the matrix of the union of two relations R1R2 is 1 iff at least one of the corresponding entries in R1 or R2 is 1. Thus
MR1R2 = MR1 MR2
• Intersection: An entry in the matrix of the intersection of two relations R1R2 is 1 iffboth of the corresponding entries in R1 and R2 are 1. Thus
MR1R2 = MR1 MR2
• Count the number of operations
Combining Relations: Example
• What is MR1R2 and MR1R2?
• How does combining the relations change their properties?
Composing Relations: Example
• 0-1 matrices are also useful for composing matrices. If you have not seen matrix product before, read Section 3.8
Composite Relations: Rn
• Remember that recursively composing a relation Rn R for n=1,2,3,… gives a nice characterization of transitivity
• Theorem: A relation R is transitive if and only if Rn R for n=1,2,3,…
• We will use
• this idea and
• the composition by matrix multiplication
to build the Warshall (a.k.a. Roy-Warshall) algorithm, which computed the transitive closure (discussed in the next section)
Directed Graphs Representation (1)
• We will study graphs in details towards the end of the semester
• We briefly introduce them here to use them to represent relations
• We have already seen directed graphs to represent functions and relations (between two sets). Those are special graphs, called bipartite directed graphs
• For a relation on a set A, it makes more sense to use a general directed graph rather than having two copies of the same set A
Definition: Directed Graphs (2)
• Definition: A Ggraph consists of
• A set V of vertices (or nodes), and
• A set E of edges (or arcs)
• We note: G=(V,E)
• Definition: A directedG graph (digraph) consists of
• A set V of vertices (or nodes), and
• A set E of edges of ordered pairs of elements of V (of vertices)
Directed Graphs Representation (2)
• Example:
• Let A={a1,a2,a3,a4}
• Let R be a relation on A defined as follows
R={(a1,a2),(a1,a3),(a1,a4),(a2,a3),(a2,a4),(a3,a1),(a3,a4), (a4,a3),(a4,a4)}
• Draw the digraph representing this relation (see white board)
Using the Digraphs Representation (1)
• A directed graph offers some insight into the properties of a relation
• Reflexivity: In a digraph, the represented relation is reflexive iff every vertex has a self loop
• Symmetry: In a digraph, the represented relation is symmetric iff for every directed edge from a vertex x to a vertex y there is also an edge from y to x
Using the Digraphs Representation (2)
• Antisymmetry: A represented relation is antisymmetric iff there is never a back edge for any directed edges between two distinct vertices
• Transitivity: A digraph is transitive if for every pair of directed edges (x,y) and (y,z) there is also a directed edge (x,z)
This may be harder to visually verify in more complex graphs
Outline
• Relation:
• Definition, representation, relation on a set
• Properties
• Reflexivity, symmetry, antisymmetric, irreflexive, asymmetric
• Combining relations
• , , \, composite of relations
• Representing relations
• 0-1 matrices, directed graphs
• Closure of relations
• Reflexive closure, diagonal relation, Warshall’s Algorithm,
• Equivalence relations:
• Equivalence class, partitions,
Closures: Definitions
• If a given relation R
• is not reflexive (or symmetric, antisymmetric, transitive)
• How can we transform it into a relation R’ that is?
• Example: Let R={(1,2),(2,1),(2,2),(3,1),(3,3)}
• How can we make it reflexive?
• In general we would like to change the relation as little as possible
• To make R reflexive, we simply add (1,1) to the set
• Inducing a property on a relation is called its closure.
• Above, R’=R{(1,1)} is called the reflexive closure
Reflexive Closure
• In general, the reflexive closure of a relation R on A is R where ={ (a,a) | aA}
• is the diagonal relation on A
• Question: How can we compute the diagonal relation using
• 0-1 matrix representation?
• Digraph representation?
Symmetric Closure
• Similarly, we can create the symmetric closure using the inverse of the relation R.
• The symmetric closer is, RR’ where
R’={ (b,a) | (a,b)R }
• Question: How can we compute the symmetric closure using
• 0-1 matrix representation?
• Digraph representation?
Transitive Closure
• To compute the transitive closure we use the theorem
• Theorem: A relation R is transitive if and only if Rn R for n=1,2,3,…
• Thus, if we compute Rk such that Rk Rnfor all nk, then Rk is the transitive closure
• The Warshall’s Algorithm allows us to do this efficiently
• Note: Your textbook gives much greater details in terms of graphs and connectivity relations. It is good to read this material, but it is based on material that we have not yet seen.
Warshall’s Algorithm: Key Ideas
• In any set A with |A|=n, any transitive relation will be built from a sequence of relations that has a length of at most n. Why?
• Consider the case where the relation R on A has the ordered pairs (a1,a2),(a2,a3),…,(an-1,an). Then, (a1,an) must be in R for R to be transitive
• Thus, by the previous theorem, it suffices to compute (at most) Rn
• Recall that Rk=RRk-1is computed using a bit-matrix product
• The above gives us a natural algorithm for computing the transitive closure: the Warshall’s Algorithm
Warshall’s Algorithm
Input: An (nn) 0-1 matrix MR representing a relation R on A, |A|=n
Output: An (nn) 0-1 matrix W representing the transitive closure of R on A
1. W MR
2. FOR k=1,…, n DO
• FOR i=1,…,n DO
• FOR j=1,…,n DO
• wi,j wi.j (wi,k wk,j)
• END
• END
• END
• RETURN W
Warshall’s Algorithm: Example
• Compute the transitive closure of
• The relation R={(1,1),(1,2),(1,4),(2,2),(2,3),(3,1),
(3,4),(4,1),(4,4)}
• On the set A={1,2,3,4}
Outline
• Relation:
• Definition, representation, relation on a set
• Properties
• Reflexivity, symmetry, antisymmetric, irreflexive, asymmetric
• Combining relations
• , , \, composite of relations
• Representing relations
• 0-1 matrices, directed graphs
• Closure of relations
• Reflexive closure, diagonal relation, Warshall’s Algorithm
• Equivalence relations:
• Equivalence class, partitions,
Equivalence Relation
• Consider the set of every person in the world
• Now consider a R relation such that (a,b)R if a and b are siblings.
• Clearly this relation is
• Reflexive
• Symmetric, and
• Transitive
• Such as relation is called an equivalence relation
• Definition: A relation on a set A is an equivalence relation if it is reflexive, symmetric, and transitive
Equivalence Class (1)
• Although a relation R on a set A may not be an equivalence relation, we can define a subset of A such that R does become an equivalence relation (on the subset)
• Definition: Let R be an equivalence relation on a set A and let a A. The set of all elements in A that are related to a is called the equivalence class of a. We denote this set [a]R. We omit R when there is not ambiguity as to the relation.
[a]R = { s | (a,s)R, sA}
Equivalence Class (2)
• The elements in [a]R are called representatives of the equivalence class
• Theorem: Let R be an equivalence class on a set A. The following statements are equivalent
• aRb
• [a]=[b]
• [a] [b]
• The proof in the book is a circular proof
PartitionsPartitions (1)
• Equivalence classes partition the set A into disjoint, non-empty subsets A1, A2, …, Ak
• A partition of a set A satisfies the properties
• ki=1Ai=A
• Ai Aj = for ij
• Ai for all i
Partitions (2)
• Example: Let R be a relation such that (a,b)R if a and b live in the same state, then R is an equivalence relation that partitions the set of people who live in the US into 50 equivalence classes
• Theorem:
• Let R be an equivalence relation on a set S. Then the equivalence classes of R form a partition of S.
• Conversely, given a partition Ai of the set S, there is a equivalence relation R that has the set Ai as its equivalence classes
Partitions: Visual Interpretation
• In a 0-1 matrix, if the elements are ordered into their equivalence classes, equivalence classes/partitions form perfect squares of 1s (with 0s everywhere else)
• In a diargh, equivalence classes form a collections of disjoint complete graphs
• Example: Let A={1,2,3,4,5,6,7} and R be an equivalence relation that partitions A into A1={1,2}, A2={3,4,5,6} and A3={7}
• Draw the 0-1 matrix
• Draw the digraph
Equivalence Relations: Example 1
• Example: Let R={ (a,b) | a,bR and ab}
• Is R reflexive?
• Is it transitive?
• Is it symmetric?
No, it is not. 4 is related to 5 (4 5) but 5 is not related to 4
Thus R is not an equivalence relation
Equivalence Relations: Example 2
• Example: Let R={ (a,b) | a,bZ and a=b}
• Is R reflexive?
• Is it transitive?
• Is it symmetric?
• What are the equivalence classes that partition Z?
Equivalence Relations: Example 3
• Example: For (x,y),(u,v) R2, we define
R={ ((x,y),(u,v)) | x2+y2=u2+v2}
• Show that R is an equivalence relation.
• What are the equivalence classes that R defines (i.e., what are the partitions of R2)?
Equivalence Relations: Example 4
• Example: Given n,rN, define the set
nZ + r = { na + r | a Z }
• For n=2, r=0, 2Z represents the equivalence class of all even integers
• What n, r give the class of all odd integers?
• For n=3, r=0, 3Z represents the equivalence class of all integers divisible by 3
• For n=3, r=1, 3Z represents the equivalence class of all integers divisible by 3 with a remainder of 1
• In general, this relation defines equivalence classes that are, in fact, congruence classes (See Section 3.4)
|
2017-11-24 19:46:11
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8115161061286926, "perplexity": 3368.0715592112974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808742.58/warc/CC-MAIN-20171124180349-20171124200349-00429.warc.gz"}
|
https://www.physicsforums.com/threads/number-of-quarks-inside-a-nucleon.276180/
|
# Number of quarks inside a nucleon
1. Dec 1, 2008
### TheMan112
I've been banging my head against the wall for days now trying to figure out how one determines the number of quarks inside the nucleon.
I understand it comes from the fact that the Gross-Llewellyn-Smith sum rule is equal to three:
$$\int ^1 _0 F_3 ^N (x) = \int ^1 _0 (u_V (x) + d_V (x)) = 3$$
This is based on neutrino-nucleon scattering data, a professor at my uni told me that it is derivable from these two equations:
$$\frac{d \sigma^{\nu N}}{dx} = \frac{G^2 M E}{\pi} x (q(x) + \bar q (x) /3)$$
$$\frac{d \sigma^{\bar \nu N}}{dx} = \frac{G^2 M E}{\pi} x (\bar q(x) + q (x) /3)$$
Where N is your average nucleon.
I don't know if this something that should follow naturally but I feel rather lost. :tongue:
2. Dec 1, 2008
Staff Emeritus
Think about as two equations in two unknowns: get qV(x) on one side and integrate it.
3. Dec 2, 2008
Staff Emeritus
I was unclear - the two unknowns I was speaking of are q_V(x) and qbar(x). (i.e. the valence and sea distributions). Pull q_V to one side, integrate it, and G&L-S tell you it's 3.
4. Dec 2, 2008
### TheMan112
Am I supposed to get $$q_V (x)$$ from $$q(x)$$ but keep $$\bar q(x)$$? I tried to extract $$q(x)$$ from the system of equations, but I got a rather messy expression:
$$q(x) = \frac{9 \pi}{8 G^2 M E} \frac{1}{x} \left(\frac{1}{3} \frac{d \sigma^{\bar \nu N}}{dx} - \frac{d \sigma^{\nu N}}{dx}\right)$$
Not least when one would be integrating this, by parts and all. ;)
Edit: I suppose a better formulation might be; how do I get $$F_3 ^N (x)$$ from the last two equations (in the first post).
Last edited: Dec 2, 2008
5. Dec 2, 2008
Staff Emeritus
qV = q - qbar.
6. Dec 2, 2008
Staff Emeritus
It's probably also worth pointing out that there are many oversimplifications that go into the G&L-S sum rule. Experimentally, it's around two-and-a-half, but what was surprising wasn't that it wasn't exactly three, but that it wasn't even farther off.
7. Dec 2, 2008
### humanino
I agree, but would like to point out that the gluon radiation (or any name you give to higher orders in the evolution) come in excellent agreement with the data.
Sum_GLS(3 GeV^2) = 2.50 +/- 0.018 (stat) +/- 0.078 (syst) (CCFR collaboration Fermilab, Phys Lett B331 1994 655)
compared to 2.66 +/- 0.04 expected from CTEQ95 for instance.
8. Dec 2, 2008
### atyy
Does (syst) stand for systematic error? How do they estimate it?
9. Dec 3, 2008
### TheMan112
In analogue with my last post I come to the expression for the antiquarks:
$$\bar q (x) = \frac{9 \pi}{8 G^2 M E} \frac{1}{x} \left(\frac{1}{3} \frac{d \sigma^{\nu N}}{dx} - \frac{d \sigma^{\bar \nu N}}{dx} \right)$$
The valence quarks can be calculated by subtracting $$q(x)$$ and $$\bar q (x)$$ as Vanadium suggested:
$$q_V (x) = q (x) - \bar q (x) = \frac{\pi}{G^2 M E} \frac{1}{x} \frac{3}{2} \left( \frac{d \sigma^{\bar \nu N}}{dx} - \frac{d \sigma^{\nu N}}{dx}\right)$$
I'm not sure how one reaches an integer answer when integrating this expression using GLS.
|
2018-11-17 23:18:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6893513202667236, "perplexity": 1717.3108002918495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743913.6/warc/CC-MAIN-20181117230600-20181118012600-00318.warc.gz"}
|
https://meta.mathoverflow.net/revisions/0f5cbee2-7808-4184-aa4c-f94dc5b7dac7/view-source
|
Self-promotion, but I would like to mention my question https://mathoverflow.net/questions/148691/meager-subspaces-of-a-banach-space-and-weak-convergence. It contains two questions. I have resolved Q1 and posted it as an answer (hence the question is no longer "unanswered") but Q2 has not been resolved. I quote it here for visibility.
> **Q2.** Let $X$ be a Banach space. Let us say a linear subspace $E \subset X$ **determines weak-* convergence** (of sequences) if for every sequence $\{f_n\} \subset X^*$ such that $f_n(x) \to 0$ for every $x \in E$, we have $f_n(x) \to 0$ for every $x \in X$. Is every such $E$ nonmeager?
The converse is an easy exercise with the uniform boundedness principle.
**Update** Q2 has now been resolved. The answer is No (though I would still be interested in a separable counterexample). Should I delete this?
|
2022-05-26 05:03:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8619734048843384, "perplexity": 287.20463184360017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662601401.72/warc/CC-MAIN-20220526035036-20220526065036-00408.warc.gz"}
|
https://thermtest.com/papers_author/fumio-inagaki
|
Thermal Conductivity Paper Database
Recommended Papers for: Fumio Inagaki
Total Papers Found: 1
Thermal properties and thermal structure in the deep-water coalbed basin off the Shimokita Peninsula, Japan
In deep sedimentary basins, geothermal processes are significantly affected by heat transport properties. In this study, thermal properties of deep sediment core samples were measured using a thermal constant analyzer with transient plane source method. Results showed that thermal conductivity of the sediment near the ...
|
2022-09-28 11:50:11
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8686785697937012, "perplexity": 11022.915834010335}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00401.warc.gz"}
|
https://ec.gateoverflow.in/40/gate2017-ec-1-30
|
For the circuit shown, assume that the NMOS transistor is in saturation.Its threshold voltage $V_{tn}$=1V and its transconductance $\mu C_{ox}(\frac{W}{L})=1mA/V^{2}$.Neglect channel length modulation and body bias effects. Under these conditions, the drain current $I_D$ in $mA$ is _____________.
|
2019-10-20 16:24:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6255942583084106, "perplexity": 3639.9772207022493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986717235.56/warc/CC-MAIN-20191020160500-20191020184000-00052.warc.gz"}
|
https://www.physicsforums.com/threads/does-my-speedy-spaceship-999-c-have-a-temperature.973014/
|
# B Does my speedy spaceship (.999+C) have a temperature?
#### Suppaman
Summary
Assume we measure the temperature as the ship passes, what will it be?
I assume that as time is moving much slower on the ship and that may affect the frequency of heat radiation. I am not sure how to phrase the question because I do not want to assume any answers. I did ask Mr. Google but no results.
Related Special and General Relativity News on Phys.org
#### PeroK
Homework Helper
Gold Member
2018 Award
Summary: Assume we measure the temperature as the ship passes, what will it be?
I assume that as time is moving much slower on the ship and that may affect the frequency of heat radiation. I am not sure how to phrase the question because I do not want to assume any answers. I did ask Mr. Google but no results.
How do you define temperature?
How would you measure the temperature of a moving object?
#### Suppaman
Everything in the sky is moving and we can measure their temp so like that. And I asked if it had a temp, and you asked me two questions, please answer mine first.
#### PeroK
Homework Helper
Gold Member
2018 Award
Everything in the sky is moving ...
I'll need to think about that!
Homework Helper
Gold Member
2018 Award
Try this:
#### Ibix
I think it depends how you choose to define temperature, as PeroK's link suggests. Certainly one can define "rest temperature" which is invariant, and it makes sense to do so. Certainly you will measure the radiation from a moving object as red- or blue-shifted, which would let you define a "relativistic temperature".
I'd prefer to define temperature as invariant, and as far as I'm aware that's the modern approach to more or less everything. I don't know if there is a consensus in this case or not - my only text on relativistic thermodynamics is over eighty years old.
So ultimately, I think you need to answer PeroK's questions - how are you defining temperature and what method are you using to measure it. When you've answered both, the answer to your question will be obvious. Until you've answered both, the answer is "could be anything".
For stars in the sky, generally we're trying to compare like with like. So I expect that we measure the apparent blackbody curve, correct for the Doppler shift by looking at spectral lines, and thereby measure "rest temperature".
#### Suppaman
The ship obviously has a temp, it is moving and that will not change that temp. If I take a photo with my IR camera as it passes in front of the camera, neither coming or going, just in passing, what will my camera measure? Let us say that if the ship was stopped it would reg 1000 deg F and now it is moving and time is slower on the ship it would still be 1000 deg F based on the ship's sensors but what would I see?
#### PeroK
Homework Helper
Gold Member
2018 Award
The ship obviously has a temp, it is moving and that will not change that temp. If I take a photo with my IR camera as it passes in front of the camera, neither coming or going, just in passing, what will my camera measure? Let us say that if the ship was stopped it would reg 1000 deg F and now it is moving and time is slower on the ship it would still be 1000 deg F based on the ship's sensors but what would I see?
There's the transverse Doppler effect:
Note that IR radiation is a measure of the heat an object is radiating and not its temperature. Two objects heated to the same temperature do not necessarily radiate the same heat. Compare a well insulated house with a poorly insulated one.
Also, more important, if you have several observers, all moving with respect to each other and all measuring different temperatures of the ship, then you can see that the concept becomes physically less meaningful.
You would have to say: "the temperature of that object relative to me is ...". In the same way that you say "the velocity of that ship relative to me is ...".
As @Ibix has pointed out, temperature is something that we would like to define as invariant - i.e. an inherent quantity of the object itself. Not dependent on the relative velocity of an observer to the object.
#### Ibix
If I take a photo with my IR camera as it passes in front of the camera, neither coming or going, just in passing, what will my camera measure?
It'll show a blackbody spectrum (edit: assuming your ship is a blackbody - see next post), related to the blackbody spectrum you would measure at rest by the transverse Doppler shift. So you'd see a blackbody temperature reduced by a factor of $\gamma$ compared to the at-rest measure.
Whether you regard this measure as "the temperature of the spaceship", or whether you correct for the motion of the ship (recovering the at-rest measure) and call this the temperature of the ship, is up to you.
The measurements are a matter of fact. How you label them follows consensus opinion and, as I said, I don't know what the modern consensus is. I expect it is for using "temperature" to mean invariant (or rest) temperature. But I do not know.
Last edited:
#### Ibix
It's also worth looking at the photo in this post to see the issues with IR for temperature measurement, quite apart from any relativistic effects. It works for blackbodies - not so much for very non-black things.
Nonetheless, whatever IR spectrum you measure will be related to the at-rest IR spectrum by the transverse Doppler effect.
#### russ_watters
Mentor
Why is this even an Einsteinian relativity question? Temperature is not frame dependent in Galilean Relativity either. When you travel in a plane, you'd cook if it was.
#### russ_watters
Mentor
Note that IR radiation is a measure of the heat an object is radiating and not its temperature. Two objects heated to the same temperature do not necessarily radiate the same heat. Compare a well insulated house with a poorly insulated one.
That's oddly put. It should be obvious that IR is a surface temperature measurement and there are limitations associated with that, but that doesn't make the measurement inaccurate, it just makes it different from an internal measurement.
What can make the measurement itself inaccurate is emissivity differences between materials.
#### Suppaman
Thank you all. I think I was looking for how we would view the observable properties of the speedy ship. For example, suppose there was a place that had a very high emission of energy, gigawatt laser, some super high power emitter of some kind that if a physical was in its focus for a super short time it would totally destroy the item. Now our speedy ship passes through this area very quickly, but the time exposed is long enough to do damage. But time on our ship is slow and it is so slow there is not enough time experienced by the ship for the energy beam to do damage. In the physical universe, we can have physical objects that are experiencing slow time, I am looking for information on what the rules are for interacting with them. We know gravity and speed can slow time but we do not know what time really is.
#### Orodruin
Staff Emeritus
Homework Helper
Gold Member
2018 Award
Why is this even an Einsteinian relativity question? Temperature is not frame dependent in Galilean Relativity either. When you travel in a plane, you'd cook if it was.
Again, this depends. If you mean the rest temperature, sure. If you mean the temperature that you would associate with the spectrum radiated from the body that will be affected by the Doppler effect and correspondingly red or blue shifted.
#### Ibix
For example, suppose there was a place that had a very high emission of energy, gigawatt laser, some super high power emitter of some kind that if a physical was in its focus for a super short time it would totally destroy the item. Now our speedy ship passes through this area very quickly, but the time exposed is long enough to do damage. But time on our ship is slow and it is so slow there is not enough time experienced by the ship for the energy beam to do damage.
You contradict yourself here. Either damage was done or it was not. It may be that you don't understand one frame's explanation for why damage was done (I'd have to work through it myself, but I'd suspect Doppler and angular aberration). But if you simply assume there's a contradiction then you are choosing to be confused.
#### Suppaman
You, someone, asked why this was " an Einsteinian relativity question? " Would it be best to ask a question and enclose it in a higher level question asking what category the question should be asked in? Perhaps there is a category or topic that can be used to ask a question and the powers that be are expected to assign it to an appropriate category or topic? I do not want to waste anyone's time. I also would appreciate having my question reworded to make proper sense rather than being criticized for asking a less than best question. Some times a really good question may not yet have a proper or universally accepted answer.
#### jbriggs444
Homework Helper
We know gravity and speed can slow time
No they do not. They affect the way we associate time here measured against this reference frame with time there measured against that reference frame. But they do not "slow time". Time proceeds at one second per second regardless.
#### Suppaman
So there is no twin paradox?
#### jbriggs444
Homework Helper
So there is no twin paradox?
The twin paradox does not involve time slowing down. It involves the comparison of time over here with time over there. And it involves the fact that the elapsed time between events depends on the path taken.
#### Suppaman
One twin is obviously older than the other. How did that happen?
#### PeroK
Homework Helper
Gold Member
2018 Award
One twin is obviously older than the other. How did that happen?
In general, many people before they learn SR have a misapprehension that "speed makes time run slower". If they learn SR properly, they learn that this is a misapprehension. In fact, the first thing they learn is that all inertial motion is relative and that velocity-based time dilation is symmetric.
From your posts above and on previous threads, it's clear that you have never lost the misapprehension about motion and time. In fact, you seem quite subborn in resisting all attempts to explain this to you.
My honest advice is to take a step back, forget all you think you know about SR and to start learning it without any misapprehensions or false preconceptions about what it says. These are preventing you understanding SR and understnding the answers to your questions.
This question about motion and temperature is a case in point.
In particular: there is no physical difference between a spaceship at rest relative to the Earth and te same ship travelling at nearly the speed of light relative to the Earth. It's the same ship and there are no physical differences associated with relative motion. There cannot be as the Earth is likewise travelling at nearly the speed of light relative to the ship.
Last edited:
#### Suppaman
Bailey et al. (1977) measured the lifetime of positive and negative muons sent around a loop in the CERN Muon storage ring. This experiment confirmed both time dilation and the twin paradox, i.e. the hypothesis that clocks sent away and coming back to their initial position are slowed with respect to a resting clock.[27][28] Other measurements of the twin paradox involve gravitational time dilation as well. See for instance the Hafele–Keating experiment and repetitions.
#### Nugatory
Mentor
So there is no twin paradox?
It’s not a paradox, if that’s what you mean.
Time passes at the rate of one second per second for both the traveling twin and the stay at home twin.
Their clocks tick off different amounts of time between the separation and reunion events for the same reason that two cars driving along different routes between the same two points will generally record a different number of kilometers driven: the different routes have different lengths, not because anything funny is happening with their speedometers.
#### PeroK
Homework Helper
Gold Member
2018 Award
Bailey et al. (1977) measured the lifetime of positive and negative muons sent around a loop in the CERN Muon storage ring. This experiment confirmed both time dilation and the twin paradox, i.e. the hypothesis that clocks sent away and coming back to their initial position are slowed with respect to a resting clock.[27][28] Other measurements of the twin paradox involve gravitational time dilation as well. See for instance the Hafele–Keating experiment and repetitions.
There is no shortage of material which, through a general lack of precise language, can reinforce your misconceptions. This is not so much wrong as misleading, when taken out of context.
As a counter to this, you could try:
It's not motion that causes differential aging, but changing from one inertial frame to another and, more generally, the shape of the path through spacetime.
#### Nugatory
Mentor
Bailey et al. (1977) measured the lifetime of positive and negative muons sent around a loop in the CERN Muon storage ring. This experiment confirmed both time dilation and the twin paradox, i.e. the hypothesis that clocks sent away and coming back to their initial position are slowed with respect to a resting clock.[27][28] Other measurements of the twin paradox involve gravitational time dilation as well. See for instance the Hafele–Keating experiment and repetitions.
And careless wording like this is part of the reason that Wikipedia is not, in general, an acceptable source here. This description isn’t exactly wrong but without a great deal more context it is terribly misleading - and it’s misled you.
A decent textbook might say something similar (the English language is not a precision instrument) but will also cover the other concepts that are needed to understand what’s going on: events; the spacetime interval; the distinction between proper time and coordinate time; relativity of simultaneity; frames as conventions for assigning coordinates to events; and the distinction between invariants and coordinate/frame-dependent quantities.
Wikipedia, even when it’s right, won’t always do this. Some Wikipedia articles are good, some aren’t, and the only way of knowing which are which is to ask someone who is already familiar with the subject.
Last edited:
"Does my speedy spaceship (.999+C) have a temperature?"
### Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving
|
2019-07-21 07:17:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.499506413936615, "perplexity": 762.3769700819487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526931.25/warc/CC-MAIN-20190721061720-20190721083720-00493.warc.gz"}
|
https://physics.stackexchange.com/questions/348707/why-can-we-not-set-each-applied-force-equal-to-zero
|
# Why can we not set each applied force equal to zero?
With reference to page 17 of "Classical Mechanics" by Goldstein, Safko and Poole, the small paragraph after eq. 1.43, $$\sum_i \mathbf{F}^{(a)}_i \cdot \delta \mathbf{r}_i ~=~ 0.\tag{1.43}$$ I do not understand why we cannot set each applied force equal to zero if the $\delta \mathbf{r}_i$ are not independent but related by the constraint. Now, I figure that setting the coefficients (the forces) equal to zero is motivated by the consideration of a simpler equivalent scenario in which the work vanishes because the forces vanish one by one. However I cannot see how, in order to require the above, we need to change to generalised (and consequently independent) coordinates first.
In particular, Goldstein considers the case of system of particles indexed by $i$ such that the total force $\mathbf{F}_i$ on each particle vanishes. Then we have that the virtual work due to a change of coordinates $\delta \mathbf{r}_i$ is
$$\sum_i \mathbf{F}_i \cdot \delta \mathbf{r}_i ~=~ 0. \tag{1.40}$$
Then he separates the force on each particle into "applied" and "constraint" forces, $$\mathbf{F}_i ~=~ \mathbf{F}_i^{(a)} + \mathbf{f}_i\tag{1.41}$$ and restricts himself to systems for which the net virtual work of the forces of constraint is 0. Why can we not set each applied force equal to zero if the $\delta \mathbf{r}_i$ are not independent but related by the constraint?
• Not all of us have access to textbooks. Can you provide more context? – probably_someone Jul 26 '17 at 16:28
• The paragraph derives Langrange equation of motion from D'alambert principle. We consider a virtual displacement of coordinates (so at an instant of time t). en.wikipedia.org/wiki/D%27Alembert%27s_principle. The displacements in question are the differentials in the formula as seen in the first equation in the page linked above. – Matt306 Jul 26 '17 at 16:53
• Sorry I believe the paragraph was not very clear in conveying the message. In fact we are not setting the coefficients to be zero. It is a question of linear independence, in which case the coefficients are REQUIRED to be 0. Apologies again. – Matt306 Jul 26 '17 at 16:57
In many physics problems of interest, the applied force $\mathbf{F}_i^{(a)}$ of the $i$'th point particle is not zero.
But your focus should not be on the applied forces $\mathbf{F}_i^{(a)}$. They are there and properly accounted for. Your focus should instead be on the other forces $\mathbf{f}_i$.
The non-trivial statement in eq. (1.43) (apart from Newton's 2nd law for statics $\mathbf{F}_i=0$) is that the other forces produce no virtual work, $$\sum_i \mathbf{f}_i \cdot \delta \mathbf{r}_i ~=~ 0,$$ which e.g. is not true if the other forces include sliding friction.
I think that your doubt can be overcome by reading the equation $(1.43)$ as a linear combination of vectors which are -since they are related by constraints- linear dependent. This fact means that they're not independent, i.e. you can't conclude that all the coefficients (in that case the applied forces) are equal to zero.
|
2019-10-17 00:21:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8853920698165894, "perplexity": 297.19604700023797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672431.45/warc/CC-MAIN-20191016235542-20191017023042-00407.warc.gz"}
|
http://crypto.stackexchange.com/questions/3951/proving-correctness-of-pseudorandom-generator-construction-given-existing-pseudo/3964
|
# Proving correctness of pseudorandom generator construction given existing pseudorandom generator
Say if I have a given pseudorandom generator G which takes a k-bit input and outputs a 3k-bit number.
How should I show that a specific construction using this pseudorandom generator is valid?
For example, if I want a generator which takes a 2k-bit input and output a 3k-bit number. Is the following scheme valid?
1. split the 2k-bit input into two half
2. passes the two half through the given generator G to get two 3k-bit numbers
3. XOR the two numbers to output the final 3k-bit number.
-
Well, what properties do you want to prove? One problem I see with your proposal is that, say $A$ and $B$ are k-bit inputs. If I put $A|B$ (where $|$ is concatenation) into your proposed system, the output will be the same as if I put $B|A$ in. I.e., $f(A|B)=f(B|A)$. This may or may not be an issue though depending on the properties you want (and the properties of the original generator $G$). – mikeazo Oct 4 '12 at 13:09
What's the definition of valid? Also, how is G given? As an explicit algorithm as implementable on a Turing Machine? As an explicit combination of ideal primitives/random oracles? Or is it just named and assumed valid? – fgrieu Oct 4 '12 at 13:17
Call your original ($k$-to-$3k$ bit) PRG $G$ and your construction $G'$.
Let $\mathcal{U}_t$ denote the uniform distribution on $\{0,1\}^t$. Then $G'$ is a PRG as long as the distributions $\mathcal{U}_{3k}$ and $G'(\mathcal{U}_{2k})$ are indistinguishable with effort polynomial in $k$; see this reference.
Distribution $G'(\mathcal{U}_{2k})$ is just $G(\mathcal{U}_k) \oplus G(\mathcal{U}_k)$, where the two occurrences of $\mathcal{U}_k$ in the latter expression are independent. By the PRG property of $G$ applied to the first term, the distribution is indistinguishable from $\mathcal{U}_{3k} \oplus G(\mathcal{U}_k)$.
Since $\mathcal{U}_t \oplus \mathcal{D}$ is distributed identically to $\mathcal{U}_t$, for any independent distribution $\mathcal{D}$, we get the desired result.
I find this question a bit odd though. If all you want is a $2k$-to-$3k$ PRG constructed from a $k$-to-$3k$ PRG, then what about the following simpler construction: given $2k$ bits, throw away the first half and run the second half through the $k$-to-$3k$ PRG?
-
+1 for the last paragraph. $\:$ – Ricky Demer Oct 5 '12 at 4:37
But a distinguisher for G' is trivial when the input is malleable! If the two input halves are equal, the output is all zero. I stand by my opinion the problem is ill-defined for lack of definition of valid and given. – fgrieu Oct 5 '12 at 6:03
The definition of "pseudorandom generator" is very standard (e.g., here). There is no requirement for non-malleability of PRGs, whatever that would mean. In the PRG distinguishing game, input to the PRG is chosen randomly and the output is given to the distinguisher. The case you are worried about (two halves being equal) happens with negligible probability ($2^{-k}$) and hence does not compromise the pseudorandomness of $G'$. – Mikero Oct 7 '12 at 2:46
@Mikero: Yes, under a distinguishing game where input is random, G' is secure when G is, for some suitable asymptotic definition of secure, like "with effort polynomial in $k$ for an arbitrary small odd of success"; and your proof is sound. I was unaware this was implied by "pseudorandom generator". Change the answer (preferably clarifying that or linking to a reference, perhaps adding an explicit "with effort polynomial" before "in $k$" to prevent me reading "with effort in 2 to the $k$") and the answer's score will raise by 2. – fgrieu Oct 8 '12 at 11:19
@Mikero: I took the liberty to do what I suggested; feel free to revert. – fgrieu Oct 8 '12 at 14:18
|
2014-09-01 08:32:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8349487781524658, "perplexity": 866.2911002997926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535917463.1/warc/CC-MAIN-20140901014517-00287-ip-10-180-136-8.ec2.internal.warc.gz"}
|
http://openstudy.com/updates/55ac19eee4b071e6530c7a4d
|
## anonymous one year ago Find the indicated limit, if it exists. http://prntscr.com/7ulwx5
1. SolomonZelman
for the part of the function that is GREATER than 9, that is: $$\displaystyle \LARGE\lim_{x\rightarrow 9^\color{red}{+}}f(x)$$ --------------------------------------------------- for the part of the function that is SMALLER than 9, that is: $$\displaystyle \LARGE\lim_{x\rightarrow 9^\color{red}{-}}f(x)$$
2. SolomonZelman
If these both limits are equal to each other, then whatever they are both equivalent to, is going to be your answer. If the limit from the left side and the limit from the right side are not equivalent, then your limit DNE
3. SolomonZelman
If you don't understand what I am saying I can redo the explanation better.
4. anonymous
ok
5. anonymous
I understand what you are saying. I just need to know how to find the limit.
6. SolomonZelman
Just plug in 9 into each of the parts
7. anonymous
ok so 18 and 18
8. SolomonZelman
9. anonymous
thanks
10. SolomonZelman
because the limit from both sides is equal to 18, that means that the two-sided limit is also 18.
11. anonymous
yes
12. SolomonZelman
$$\displaystyle \LARGE\lim_{x\rightarrow 9^\color{red}{-}}f(x)=\lim_{x\rightarrow 9}(x+9)=9+9=18$$ $$\displaystyle \LARGE\lim_{x\rightarrow 9^\color{red}{+}}f(x)=\lim_{x\rightarrow 9}(27-x)=27-9=18$$ Therefore, $$\displaystyle \LARGE\lim_{x\rightarrow 9}f(x)=18$$
13. SolomonZelman
the 27-x corresponds to the limit from the right side, because it is for greater-than-9 values of x, and x+9 corresponds to the limit from the left side, because it is for smaller-than-9 values of x.
14. SolomonZelman
You are welcome...
15. anonymous
That helps a lot. Thank you
|
2017-01-21 06:57:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9034737944602966, "perplexity": 560.3560953546616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280929.91/warc/CC-MAIN-20170116095120-00222-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://hal-paris1.archives-ouvertes.fr/hal-00382581
|
# Estimation of the drift of fractional Brownian motion
* Corresponding author
Abstract : We consider the problem of efficient estimation for the drift of fractional Brownian motion $B^H:=\left(B^H_t\right)_{ t\in[0,T]}$ with hurst parameter H less than \frac{1}{2}. We also construct superefficient James-Stein type estimators which dominate, under the usual quadratic risk, the natural maximum likelihood estimator.
Keywords :
Domain :
Complete list of metadatas
Cited literature [11 references]
https://hal-paris1.archives-ouvertes.fr/hal-00382581
Contributor : Khalifa Es-Sebaiy <>
Submitted on : Friday, May 8, 2009 - 11:02:24 PM
Last modification on : Sunday, January 19, 2020 - 6:38:29 PM
Document(s) archivé(s) le : Thursday, June 10, 2010 - 10:57:48 PM
### Files
EOOArxiv.pdf
Files produced by the author(s)
### Citation
Khalifa Es-Sebaiy, Idir Ouassou, Youssef Ouknine. Estimation of the drift of fractional Brownian motion. Statistics and Probability Letters, Elsevier, 2009, 79 (14), pp.1647-1653. ⟨10.1016/j.spl.2009.04.004⟩. ⟨hal-00382581⟩
Record views
|
2020-05-25 15:04:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4696197211742401, "perplexity": 9245.292467981802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388758.12/warc/CC-MAIN-20200525130036-20200525160036-00035.warc.gz"}
|
http://mathoverflow.net/questions/34300/mathematical-software-for-computing-in-integral-group-rings-of-discrete-groups?sort=newest
|
# Mathematical software for computing in integral group rings of discrete groups?
I'm doing computations in the integral group ring of a discrete group, in particular the discrete Heisenberg group. In this case elements are integral combinations of monomials $x^k y^m z^n$, where the generators $x$, $y$, and $z$ satisfy $xz=zx$, $yz=zy$, and $yx=xyz$. Is there mathematical software for doing calculations with such objects, for example to compute powers of an element?
I've not been able to do this inside standard mathematical software like Mathematica.
-
As long as you have have an implementation of the group you're interested in and you have a way to sort elements of that group, you can implement the group-ring in a fairly simple and efficient fashion. In C++ I'd implement it as a class built around a std::map< GroupObject, RingObject >, and overload constructors, operators, etc, appropriately. – Ryan Budney Aug 2 '10 at 21:09
Is there a GAP package for this? Or MAGMA? – Jon Bannon Aug 2 '10 at 23:27
I don't have any answer nor any clue of where to look, but I too would be interested to hear of a software solution that didn't involve me (belatedly) learning OOP – Yemon Choi Aug 3 '10 at 4:30
Yemon - one reason for my question to to investigate when such an element has an inverse in the convolution algebra of the group. Your recent paper shows that one-sided inverses are automatically two-sided inverses for arbitrary discrete groups, and for finite-rank abelian groups there is a finite algorithm to decide invertibility. But is there such an algorithm for, e.g., the discrete Heisenberg group? Very interesting question! – Douglas Lind Aug 3 '10 at 14:50
Jon- GAP seems to deal with finitely generated algebras only (but this is based on a quick look). I talked with one of the chief programmers for MAGMA a year ago, and he said it might be possible to do in MAGMA but there's no off-the-shelf way. I also talked with SAGE people (after all, William Stein is just down the hall from me), and they cooked up something which sort of worked. – Douglas Lind Aug 3 '10 at 14:55
You can do this with GAP. The example below assumes that you have the polycyclic package installed.
First, you tell GAP which group you want to work with. Luckily, Heisenberg groups are polycyclic, and the polycyclic package provides a command to obtain them:
gap> G:=HeisenbergPcpGroup(1);
Pcp-group with orders [ 0, 0, 0 ]
Note that we could have also defined it by some other means if polycyclic was not available (e.g. as a matrix group), but this way is the most convenient. Now let's form the integral group ring:
gap> ZG:=GroupRing(Integers,G);
<free left module over Integers, and ring-with-one, with 6 generators>
The extra three generators come from the inverses of x, y and z (note that internally it calls them g1,g2,g3; it would be possible to change that with some effort, but that's beyond the scope here). Let's assign the corresponding generators of the group ring to variables x, y, z, and verify the relations you have given:
gap> x:=ZG.1;; y:=ZG.2;; z:=ZG.3;;
gap> x*z=z*x and y*z=z*y and y*x=x*y*z;
true
Here is an example of powering a group element (this works with more complicated ones, too, but I picked a small one to keep the output readable).
gap> (x+7*y)^2;
(1)*g1^2+(7)*g1*g2*g3+(7)*g1*g2+(49)*g2^2
I hope this helps.
-
|
2016-05-31 16:29:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.745258629322052, "perplexity": 663.948387647664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051417337.23/warc/CC-MAIN-20160524005657-00174-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://socratic.org/questions/how-do-you-simplify-sqrt-36r-2s
|
# How do you simplify sqrt(36r^2s)?
Mar 15, 2018
color(red)(sqrt(36r^2s) = 6rsqrt(s)
#### Explanation:
Given:
color(blue)(sqrt(36r^2s)
This radical expression can be rewritten as:
sqrt(6^2*r^2*s
$\Rightarrow \sqrt{{6}^{2}} \cdot \sqrt{{r}^{2}} \cdot \sqrt{s}$
$\Rightarrow 6 \cdot r \cdot \sqrt{s}$
Hence,
color(blue)(sqrt(36r^2s) = 6rsqrt(s)
Hope it helps.
|
2019-12-15 21:21:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8476439714431763, "perplexity": 11289.266113167396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541310866.82/warc/CC-MAIN-20191215201305-20191215225305-00152.warc.gz"}
|
http://xtel.in/water-level-monitor-using-sr04-and-labview.html
|
# Water Level Monitor using HC-SR04 and LabView
I have been working with ultrasonic range sensor HC-SR04 and got this wild idea to use it as a water level monitor from the internet. Despite having some concerns about using it for water level monitoring in a production environment, I decided to put together a prototype to experiment with it.
## The HC-SR04 Ultrasonic Range Sensor
The breakout board has 2 pins apart from the power supply pins. One is called TRIGGER and the other is ECHO. The sensor has two cylindrical shaped objects on one side. Apparently one of these is a ultrasound pulse sender, and the other is a receiver. They are relatively close to each other. But since these are two parts, instead of one combo sender/receiver, I’m not sure how exactly does this device range objects. I think the concept is vaguely discussed in some blog posts on the internet. Lets assume that, it is very similar to SONAR. Electrically, the working is relatively simple as shown below.
When we give it a HIGH on the TRIGGER pin for at least 10uS, the device gets triggered. It subsequently sends a 40KHz pulse from the sender, and receives reflections from nearby objects. Then depending on the measured distance, the board sends us back a HIGH on the ECHO pin. The length of this pulse will be directly proportional to the distance measured by the device.
## Programming Arduino to Handle HC-SR04
So, by this understanding about the sensor module, I have wired up the sensor with Arduino. The convenience here is that, since Arduino board itself acts as a serial port device, using the USB-to-TTL emulation, it becomes useful in the next stage.
Corresponding program is a reference program that I happen to find on the internet. I have modified it to simplify the serial communication on the LabView side.
/**
* HC-SR04 Demo
* Demonstration of the HC-SR04 Ultrasonic Sensor
* Date: August 3, 2016
*
* Description:
* Connect the ultrasonic sensor to the Arduino as per the
* hardware connections below. Run the sketch and open a serial
* monitor. The distance read from the sensor will be displayed
* in centimeters and inches.
*
* Hardware Connections:
* Arduino | HC-SR04
* -------------------
* 5V | VCC
* 7 | Trig
* 8 | Echo
* GND | GND
*
* License:
* Public Domain
*/
// Pins
const int TRIG_PIN = 7;
const int ECHO_PIN = 8;
// Anything over 400 cm (23200 us pulse) is "out of range"
const unsigned int MAX_DIST = 23200;
void setup() {
// The Trigger pin will tell the sensor to range find
pinMode(TRIG_PIN, OUTPUT);
digitalWrite(TRIG_PIN, LOW);
// We'll use the serial monitor to view the sensor output
Serial.begin(9600);
}
void loop() {
unsigned long t1;
unsigned long t2;
unsigned long pulse_width;
float cm;
// Hold the trigger pin high for at least 10 us
digitalWrite(TRIG_PIN, HIGH);
delayMicroseconds(10);
digitalWrite(TRIG_PIN, LOW);
// Wait for pulse on echo pin
while ( digitalRead(ECHO_PIN) == 0 );
// Measure how long the echo pin was held high (pulse width)
// Note: the micros() counter will overflow after ~70 min
t1 = micros();
while ( digitalRead(ECHO_PIN) == 1);
t2 = micros();
pulse_width = t2 - t1;
// Calculate distance in centimeters
// are found in the datasheet, and calculated from the assumed speed
//of sound in air at sea level (~340 m/s).
cm = pulse_width / 58.0;
// Assuming when there is no water in the tank,
// the distance measured will be at a maxima.
// In this case, the depth of the tank : 300cm.
cm = 300 - cm;
// Print out results
if ( pulse_width > MAX_DIST ) {
Serial.println("Out of range");
} else {
Serial.print(cm);
Serial.print("\n");
}
// Wait at least 60ms before next measurement
delay(300);
}
## Stage-1 Testing
The first stage testing is to make sure that the data send out by the Arduino reaches the target machine correctly. If you are running Arduino IDE on your target machine itself, it is relatively easy since your drivers are in place, as well as your IDE has a serial monitor. In my case, I’m programming the Arduino from my Mac, and Windows-7 OS is running as a guest on VirtualBox. In that case, the first step is to install FTDI VCP (Virtual COM Port) drivers, since the Arduino Duemilanove I have, uses FT232RL chip. (I will blog about these chips PL2303, CH340G, CP2102 etc in detail in a future post) Once, this is done I fired up CoolTerm to check the virtual serial port, to make sure data from the sensor is coming correctly at the Windows end. You may use any program of your choice to do the same.
## LabView Program
The LabView program is also relatively simple. You will need VISA drivers to let LabView read from the serial devices. You may download it from the NI website. Fundamentally, the serial port is opened first, and the program reads 4 bytes from it at a time. This can be changed at the program level by constant widget, or at the runtime by using a control widget. My assumptions here is that, all the numbers coming from the sensor is less than 300 (Max depth of the tank). The read buffer from the VISA Reader is at the same time attached to an Indicator (so that we can see what data is being received on the front panel), as well as a String —> Decimal number converter. Note that, I have not added a divider to set the tank input between the tank’s scale which is 0 - 100. It should be pretty straight forward to do. Finally, when the read operation is done, close the serial port. Note that, I have also connected the number output to a waveform chart. This is to monitor historical data.
## Front-Panel
The front panel is very straight forward. It contains a tank and a waveform chart. (I’ve a taste for classic controls, you may choose better widgets). The first 4 bytes of every new line printed by the Arduino program is displayed in the read buffer indicator. Once converted to a number, the tank widget is redrawn with appropriate progress. Paralelly, the same data is used to draw the waveform.
## Caveats
1. As this is a prototype project, I have omitted many details. Say for example, the block diagram does a very minimal job in reading data from the serial device. If there is no data coming in from the device for 20 seconds, the program simply crashes.
2. Only the first 4 bytes of incoming data is read. Ideally, the reading should be fully captured, and appropriate error handling should be done when the received data cannot be converted to a number.
3. The LabView read loop looks a little slow for some reason. I personally have no idea why this is so. Needs investigation.
4. In the field, the communication between the monitor and the sensor should be done wirelessly. My plan is to put a ESP8266 into the picture. What is your idea?
|
2019-01-17 13:22:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19520950317382812, "perplexity": 2348.847992448152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658981.19/warc/CC-MAIN-20190117123059-20190117145059-00444.warc.gz"}
|
http://jamescurran.co.nz/
|
# Some quirks of JAGS
I return to JAGS infrequently enough these days that I forget a lot of things. Do not get me wrong. I love JAGS and I think it is one of the most valuable tools in my toolbox. However administrative and professional duties often destroy any chance I might have of concentrating long enough to actually do some decent statistics.
Despite this I have been working with my colleagues and friends, Duncan Taylor and Mikkel Andersen–who has been visiting from Aalborg since February and really boosting my research, on some decent Bayesian modelling. Along the way we have encountered a few idiosyncrasies of JAGS which are probably worth documenting. I should point out that the issues here have been covered by Martyn Plummer and others on Stack Overflow and other forums, but perhaps not as directly.
### Local variables
JAGS allows local variables. In fact you probably use them without realizing as loop indices. However, what it does not like is the declaration of local variables within a loop. That is
model{
k <- 10
}
is fine, whereas
model{
for(i in 1:10){
k <- 10
}
}
is not. This would be fine, except the second code snippet will yield an error which tells you that k is an unknown variable and asks you to: Either supply values for this variable with the data or define it on the left hand side of a relation. I think this error message is a bit of a head scratcher because look at your code and say “But I have defined it, and it works in R.” – do not fall into the latter mode of thinking – it’s a trap!
There are a couple of solutions to this problem. Firstly, you could do as I did and give up on having your model code readable, and just not use temporary variables like this, or, secondly, you could like the temporary variable vary with the index, like so
model{
for(i in 1:10){
k[i] <- 10
}
}
### Missing data / ragged arrays
JAGS cannot deal with missing covariates very well. However, it is happy enough for you to include these observations and “step over” them some how. An example of this might be a balanced experiment where some of the trials failed. As an example let us consider a simple two factorial completely randomised design where each factor only has two levels. Let the first factor have levels A and B, and the second factor have levels 1 and 2. Furthermore let there be 3 replicates for each treatment (combination of the factors). Our (frequentist) statistical model for this design would be
$$y_{ijk}=\mu + \alpha_i + \beta_j + \gamma_{ij} + \varepsilon_{ijk},~i\in\{A,B\},j\in\{1,2\},k=1,\ldots,3,\varepsilon_{ijk}\sim N(0,\sigma^2)$$
And we would typically programme this up in JAGS as something like this
model{
for(i in 1:2){
for(j in 1:2){
for(k in 1:3){
y[i,j,k] ~ dnorm(mu[i,j], tau)
}
mu[i,j] <- alpha[i] + beta[j] + gamma[i,j]
}
}
for(i in 1:2){
alpha[i] ~ dnorm(0, 0.000001)
}
for(j in 1:2){
beta[j] ~ dnorm(0, 0.000001)
}
for(i in 1:2){
for(j in 1:2){
gammma[i,j] ~ dnorm(0, 0.000001)
}
}
tau ~ dgamma(0.001, 0.001)
}
Implicit in this code is that y is a 2 x 2 x 3 array, and that we have a fully balanced design. Now let us assume that, for some reason, the treatments $$\tau\in\{A1,A2,B1,B2\}$$ have been replicated 2, 3, 2, 3 times respectively. We can deal with this in JAGS by creating another variable in our input data which we will call reps with reps = c(3, 2, 3, 2). This then can be accommodated in our JAGS model by
model{
for(i in 1:2){
for(j in 1:2){
for(k in 1:reps[(i-1) * 2 + j]){
y[i,j,k] ~ dnorm(mu[i,j], tau)
}
mu[i,j] <- alpha[i] + beta[j] + gamma[i,j]
}
}
y is still a 2 x 2 x 3 array, but simply has NA stored in the positions without information. It is also worth noting that ever since JAGS 4.0 the syntax for loops allows
for(v in V){
where V is a vector containing integer indices. This is very useful for irregular data.
NOTE I have not compiled the JAGS code in my second example, so please let me know if there are mistakes. This is an ongoing article and so I may update it from time to time.
# Conference Programmes in R
It has been a while folks!
Lately – over the last month – I have been writing code that helps with the production of a timetable and abstract for conferences.
The latest version of that code can be found here, the associated bookdown project here, and the final result here. Note that the programme may change a good number of times before next Sunday (December 10, 2017) when the conference starts.
The code above is an improved version of the package I wrote for the Biometrics By The Border conference which worked solely with Google Sheets, by using it as a poor man’s database. I wouldn’t advise this because it is an extreme bottleneck when you constantly need to refresh the database information.
The second project was also driven by necessity, but the need in this case had more to do with the sheer complexity of organising several hundred talks over a four day programme. Now that everything is in place, most changes requested by speakers or authors can be accomodated in a few minutes by simply moving the talk on the Google sheet that controls the programme, calling several R functions, recompiling the book and pushing it up to github.
#### So how does it all work?
The current package does depend on some basic information being stored on Google Sheets. The sheets for my current conference (Joint Conference of the New Zealand Statistical Association and the International Association of Statistical Computing (Asian Regional Section)) can be viewed here on Google Sheets. Not all the worksheets here are important. The key sheets are: Monday, Tuesday, Wednesday, Thursday, All Submissions, Allocations, All_Authors, Monday_Chairs, Tuesday_Chairs, Wednesday_Chairs, and Thursday_Chairs. The first four and the last four sheets (Monday-Thursday) are hand created. The colours are for our convenience and do not matter. The layout does, and some of this is hard-coded into the project. For example all though there are seven streams a day, only six of those belong directly to us as the seventh belong to a satellite conference. The code relies on every scheduled event (including meal breaks) having a time adjacent to it in the Time column. It also collects information about the rooms as the order of these rooms does change on some days. The All_Authors sheet comes from EasyChair. EasyChair provides the facility to download author email information as an Excel spreadsheet which in turn can be uploaded to Google Sheets easily. This sheet is simply the sheet labelled All in the EasyChair file. The All Submissions sheet is a copy-and-paste from the EasyChair submissions page. I probably could webscrape this with a little effort. The Allocations sheet is mostly reflective of the All Submissions sheet, but has user added categorizations that allow us to group the talks into sensible sessions. It also uses formulae to format the titles of the talks so that each title word is capitalized (this is an imperfect process), and that the name of the submitter (who we have assumed is the speaker) is appended to the title so that it can be pasted into the programme sheets.
#### How do we get data out of Google Sheets?
The code works in a number of distinct phases. The first is capturing all the relevant information from the web in placing in a SQLite database. I used Jenny Bryan’s googlesheets package to extract the data from my spreadsheets. The process is quite straightforward, although I found that it worked a little more smoothly on my Mac than on my Windows box, although this may have more to do with the fact that the R install was brand new, and so the httr package was not installed. The difference between it being installed and not installed is that when it is installed authentication (with Google) happens entirely within the browser, whereas without you are required to copy and paste an authentication code back into R. When you are doing this many times, the former is clearly more desirable.
Interaction with Google Sheets consists of first grabbing the workbook (Google Sheets does not use this term, it comes from Excel, but it encapsulates the idea that you have a collection of one or more worksheets in any one project, than the folder that contains them all is called a workbook), and then asking for information from the individual sheets. The request to see a workbook is what will prompt the authentication, e.g.
library(googlesheets)
mySheets = gs_ls()
mySheets
The call to gs_ls will prompt authentication. It is important to note that this authentication will last for a certain period of time, and should only require interaction with a web browser the very first time you do it, and never again. Subsequent requests will result in a message along the lines of Auto-refreshing stale OAuth token. The call to gs_ls will return a list of available workbooks. A particular workbook may be retrieved by calling gs_title. For example, this function allows me to get to the conference workbook:
getProgrammeSheet = function(title = "NZSA-IASC-ARS 2017"){
mySheets = gs_ls()
ss = gs_title(title)
return(ss)
}
I can use the object returned by this function in turn to access the worksheets I want to work with using the gs_read. The functions in googlesheets are written to be compliant with the tidyverse, and in particular the pipe %>% operator. I will be the first to admit this is not my preferred way of working and I could have easily worked around it. However, in the interests of furthering my skills, I have tried to make the project tidyverse compliant and nearly all the functions will work using the pipe operator, although they take a worksheet or a database as their first argument rather than a tibble.
Once I have a workbook object (really just an authenticated connection), I can then read each of the spreadsheets. This is done by a function called updateDB. The only thing worth commenting on in this function is that the worksheets for each of the days have headers which do not really resolve well into column names. They also have a fixed set of rows and columns that we wish to capture. To that end, the range is specified for each day, and the column headers are simply set to be the first seven (A–G) capital letters of the alphabet. These sheets are stored as tibbles which are then written to an internal SQLite database using the dbWriteTable function. There are eight functions (createRoomsTbl, createAffilTbl, createTitleTbl, createAbstractTbl, createAuthorTbl, createAuthorSubTbl, createProgTbl, createChairTbl) which operate on the database/spreadsheet tables to produce the database tables we need for generating the conference timetable, and for the abstract booklet. These functions are rarely called by themselves—we tend to call a single omnibus function rebuildBD. This function allows the user to refresh the information from the web if need be, and to recreate all of the tables in the database. The bottleneck in this function is retrieving the information from the internet which may take around 20-30 seconds depending on the connection speed.
The database tables provide information for four functions: writeTT, writeProg, writeIndices, and writeSessionChairs. Each of these functions produces one or more R Markdown files in the directory used by the bookdown project.
#### Bookdown, ePub and gitBook
The final product is generated using bookdown. Bookdown, in explanation sounds simple. Implementation is really improved by the help of a master. I found this blog post by Sean Kross very helpful along with his minimal bookdown project from github. It would be misleading of me to suggest that the programme book was really produced using R Markdown. Small elements are in markdown, but the vast majority of the formatting is achieved by writing code which writes HTML. This is especially true of the conference timetable, and the hyperlinking of the abstracts to the timetable and other indices. The four functions listed above write out six markdown pages. These are the conference timetable, the session chairs table, four pages for each of the days of the conference, and two indices, one linking talks to authors, and one linking talk titles to submission numbers (which for the most part were issued by EasyChair). There is not a lot more to discussion involved here. Sean’s project sets up things in such a way that changes to the markdown or the yaml files will automatically trigger a rebuild.
#### Things I struggled with
Our conference had six parallel streams. Whilst it is easy enough to make tables that will hold all this information, it is very difficult to decide how best to display this in a way that would suit everyone. The original HTML tables were squashed almost to the point where there was a character per line on a mobile phone screen. We overcame this slightly by fixing the widths of the div element that holds the tables and adding horizontal scrolling. Many people found this feature confusing, and it did not necessarily translate into ePub. We then added some Javascript functionality by way of the tablesaw library. This allowed us to keep the time column persistant no matter which stream people were looking at, and it allowed better scrolling of streams that were offscreen. However, this was still as step too far for the technologically challenged. In the end we resorted to printing out the timetable. I also used excellent Calibre software to take the ePub as input and output it in other format—most usefully Microsoft Word’s .docx format. I know some of you are shuddering at this thought, but it did allow me to create a PDF with the programme timetable rotated and then create a PDF. This made the old fogeys immensely happy, and me immensely irritated, as I thought the gitBook version was quite useful.
#### Not forgetting the abstracts
Omitted from my workflow so far is mention of the abstracts. We had authors upload LaTeX (.tex) files to EasyChair, or text files. If you don’t do this, and they use EasyChair’s abstract box, then you have to find a way to scrape the data. There are downsides in doing so (and it even occurs in the user submitted files) in that some unicode text seems to creep in. Needless to say, even after I used an R function to convert all the files to markdown, we still had to do a bunch of manual cleaning.
#### Anyway
I hope someone finds this work useful. I have no intention of running a conference again for at least four years, but I would appreciate it if anyone wants to build on my work.
# How do I match that?
This is not a new post, but a repost after a failed WordPress upgrade
One of the projects I am working on is the 3rd edition of Introduction to Bayesian Statistics, a text book predominantly written by my former colleague Bill Bolstad. It will be out later this year if you are interested.
One of the things which will make no difference to the reader but will make a lot of difference to me is the replacement of all the manually numbered references in the book for things like chapters, sections, tables and figures. The problem I am writing about today arose from programmatically trying to label the latter — tables and figures — in LaTeX. LaTeX’s reference system, as best I understand it, requires that you place a \label command after the caption. For example
\begin{figure}
\centering
\includegraphics{myfig}
\caption{Here is a plot of something}
\label{fig:myfig}
\end{figure}
This piece of code will create a label fig:myfig which I can later use in a \ref{fig:myfig} command. This will in turn be replaced at compile time with a number dynamically formatted according to the chapter and the number of figures which precede this plot.
The challenge
The problem I was faced with is easy enough to describe. I needed to open each .tex file, find all of the figure and table environments, check to see if they contained a caption and add a label if they did.
Regular expressions to the rescue
Fortunately I know a bit about regular expressions, or at least enough to know when to ask for help. To make things more complicated for myself, I did this in R. Why? Well basically because a) I did not feel like dusting off my grossly unused Perl — I’ve been clean of Perl for 4 years now and I intend to stay that way — and b) I could not be bothered with Java’s file handling routines – I want to to be able to open files for reading with one command, not 4 or 8 or whatever the hell it is. Looking back I could have used C++, because the new C+11 standard finally has a regex library and the ability to have strings where everything does not have to be double escaped, i.e. I can write R”\label” to look for a line that has a \label on it rather than “\\label” where I have to escape the backslash.
And before anyone feels the urge to suggest a certain language I remind you to “Just say no to Python!”
Finding the figure and table environments is easy enough. I simply look for the \begin{figure} and \begin{table} tags, as well as the environment closing tags \end{figure} and \end{table}. It is possible to do this all in one regular expression, but I wanted to capture the \begin and \end pairs. I also wanted to deal with tables and figures separately. The reason for this is that it was possible to infer the figure labels from Bill’s file naming convention for his figures. The tables on the other hand could just be labelled sequentially, i.e. starting at 1 and counting upwards with a prefix reflecting the chapter number.
Lines = readLines("Chapter.tex")
begin = grep("\\begin\\{figure\\}", Lines)
end = grep("\\end\\{figure\\}", Lines)
n = length(begin)
if(n != length(end))
stop("The number of begin and end pairs don't match")
## Now we can work on each figure environment in turn
for(k in 1:n){
b = begin[k]
e = end[k]
block = paste0(Lines[b:e], collapse = "\n")
if(!grepl("\\label", block){ ## only add a label if there isn't one already
## everything I'm really talking about is going to happen here.
}
}
So what I needed to be able to do was find the caption inside my block and then insert a label. Easy enough right? I should be able to write a regular expression to do that. How about something like this:
pattern = "\\caption\\{([^}]+)\\}
That will work most of the time, except as you will find out when the caption contains braces itself, and we have some examples that do have just that
\caption{If \emph{A} is true then \emph{B} is true.'' Deduction is possible.}
My first regular expression would only match up to the end of \emph{A}, which does not help me. I need something that could, in theory match an unlimited number of inner sets of braces.
Matching nested parentheses
Fortunately matching nested parentheses is a well-known problem and Hadley Wickham tweeted me a Stack Overflow link that got me started. There is also a similar solution on page 330 of Jeffrey Friedl’s very useful Mastering Regular Expressions book. The solution relies on a regular expression which employs recursion.
Set perl = TRUE to use PCRE (and recursion) in R
To make this work in R we have to make sure the PCRE library is used, and this is done by setting perl = TRUE in the call to gregexpr
This is my solution:
## insert the label after the caption
pat = “caption(\\{([^{}]+|(?1))*\\})”
m = gregexpr(pat, block, perl = T)
capStart = attr(m[[1]], “capture.start”, TRUE)[1]
capLength = attr(m[[1]], “capture.length”, TRUE)[1]
strLabel = paste0(“\\”,”label{fig:”, figNumber, “}\n”)
newBlock = paste0(substr(block, 1, capStart + capLength),
strLabel,
substr(block, capStart + capLength + 1, nchar(block)))
The regular expression assigned to pat is where the work gets done. Reading the expression from left to right it says:
match caption literally
open the first capture group
match { literally
open the second capture group
match one or more instances of anything that is not an open brace { or a end brace }
or open the third capture group and recursively the first sub-pattern. I will elaborate on this more in a bit
close the second and third capture groups and ask R to match this pattern zero or more times
literally match the end brace }
close the first capture group
I would be the first to admit that I do not quite understand what ?1 does in this regexp. The initial solution used ?R. The effect of this was that I could match all sets of paired braces within block, but I could not specifically match the caption. As much as I understand this, it seems to limit the recursion to the outer (first) capture group. I would be interested to know.
The rest of the code breaks the string apart, inserts the correct label, and creates a new block with the label inserted. I replace the first line of the figure environment block with this new string, and keep a list of the remaining line numbers so that they can be omitted when the file is written back to disk.
# An introduction to using Rcpp modules in an R package
### Introduction
The aim of this post is to provide readers with a minimal example demonstrating the use of Rcpp modules within an R package. The code and all files for this example can be found on https://github.com/jmcurran/minModuleEx.
### What are Rcpp Modules?
Rcpp modules allow R programmers to expose their C++ class to R. By “expose” I mean the ability to instantiate a C++ object from within R, and to call methods which have been defined in the C++ class definition. I am sure there are many reasons why this is useful, but the main reason for me is that it provides a simple mechanism to create multiple instances of the same class. An example of where I have used this is my multicool package which is used to generate permutations of multisets. One can certainly imagine a situation where you might need to generate the permutations of more than two multisets at the same time. multicool allows you to do this by instantiating multiple multicool objects.
### The Files
I will make the assumption that you, the reader, know how to create a package which uses Rcpp. If you do not know how to do this, then I suggest you look at the section entitled “Creating a New Package” here on the Rstudio support site. Important: Although it is mentioned in the text, the image displayed on this page does not show that you should change the Type: drop down box to Package w/ Rcpp.
This makes sure that a bunch of fields are set for you in the DESCRIPTION file that ensure Rcpp is linked to and imported.
There are five files in this minimal example. They are
• DESCRIPTION
• NAMESPACE
• R/minModuleEx-package.R
• src/MyClass.cpp
• R/zzz.R
I will discuss each of these in turn.
#### DESCRIPTION
This is the standard DESCRIPTION file that all R packages have. The lines that are important are:
Depends: Rcpp (>= 0.12.8)
Imports: Rcpp (>= 0.12.8)
RcppModules: MyModule
The imports and LinkingTo lines should be generated by Rstudio. The RcppModules: line should contain the names(s) of the module(s) that you want to use in this package. I have only one module in this package which is unimaginatively named MyModule. The module exposes two classes, MyClass and AnotherClass.
#### NAMESPACE and R/minModule-Ex.R
The first of these is the standard NAMESPACE file and it is automatically generated using roxygen2. To make sure this happens you need select Project Options… from the Tools menu. It will bring up the following dialogue box:
Select the Build Tools tab, and make sure that the Generate documentation with Roxygen checkbox is ticked, then click on the Configure… button and make sure that that all the checkboxes that are checked below are checked:
Note: If you don’t want to use Roxygen, then you do not need the R/minModuleEx-package.R file, and you simply need to put the following three lines in the NAMESPACE file:
export(AnotherClass)
export(MyClass)
useDynLib(minModuleEx)
You need to notice two things. Firstly this NAMESPACE explicitly exports the two classes MyClass and AnotherClass. This means these classes are available to the user from the command prompt. If you only want access to the classes to be available to R functions in the package, then you do not need to export them. Secondly, as previously noted, if you are using Roxygen, then these export statements are generated dynamically from the comments just before each class declaration in the C++ code which is discussed in the next section. The useDynLib(minModuleEx) is generated from the line
#' @useDynLib minModuleEx
in the R/minModuleEx-package.R file.
#### src/MyClass.cpp
This file contains the C++ class definition of each class (MyClass and AnotherClass). There is nothing particularly special about these class declarations, although the comment lines before the class declarations,
//' @export MyClass
class MyClass{
and
//' @export AnotherClass
class AnotherClass{
, generate the export statements in the NAMESPACE file.
This file also contains the Rcpp Module definition:
RCPP_MODULE(MyModule) {
using namespace Rcpp;
class_<MyClass>( "MyClass")
.default_constructor("Default constructor") // This exposes the default constructor
.constructor<NumericVector>("Constructor with an argument") // This exposes the other constructor
.method("print", &MyClass::print) // This exposes the print method
.property("Bender", &MyClass::getBender, &MyClass::setBender) // and this shows how we set up a property
;
class_<AnotherClass>("AnotherClass")
.default_constructor("Default constructor")
.constructor<int>("Constructor with an argument")
.method("print", &AnotherClass::print)
;
}
In this module I have:
1. Two classes MyClass and AnotherClass.
2. Each class class has:
• A default constructor
• A constructor which takes arguments from R
• A print method
3. In addition, MyClass demonstrates the use of a property field which (simplistically) provides the user with simple retrieval from and assignment to a scalar class member variable. It is unclear to me whether it works for more data types, but anecdotally, I had no luck with matrices.
#### R/zzz.R
As you might guess from the nonsensical name, it is not essential to call this file zzz.R. The name comes from a suggestion from Dirk Eddelbuettel. It contains a single, but absolutely essential line of code
loadModule("MyModule", TRUE)
This code can actually be in any of the R files in your package. However, if you explicitly put it in R/zzz.R then it is easy to remember where it is.
### Using the Module from R
Once the package is built and loaded, using the classes from the module is very straightforward. To instantiate a class you use the new function. E.g.
m = new(MyClass)
a = new(AnotherClass)
This code will call the default constructor for each class. If you want to call a constructor which has arguments, then they can be added to the call to new. E.g.
set.seed(123)
m1 = new(MyClass, rnorm(10))
Each of these objects has a print method which can be called using the $ operator. E.g. m$print()
a$print() m1$print()
The output is
> m$print() 1.000000 2.000000 3.000000 > a$print()
0
> m1$print() 1.224082 0.359814 0.400771 0.110683 -0.555841 1.786913 0.497850 -1.966617 0.701356 -0.472791 The MyClass class has a module property – a concept also used in C#. A property is a scalar class member variable that can either be set or retrieved. For example, m1 has been constructed with the default value of bBender = FALSE, however we can change it to TRUE easily m1$Bender = TRUE
m1$print() Now our object m1 behaves more like Bender when asked to do something 🙂 > m1$print()
Bite my shiny metal ass!
Hopefully this will help you to use Rcpp modules in your project. This is a great feature of Rcpp and really makes it even more powerful.
# An R/Rcpp mystery
This morning and today I spent almost four hours trying to deal with the fact that our R package DNAtools would not compile under Windows. The issue originated with a win-builder build which was giving errors like this:
I"D:/RCompile/recent/R/include" -DNDEBUG -I"d:/RCompile/CRANpkg/lib/3.4/Rcpp/include" -I"d:/Compiler/gcc-4.9.3/local330/include" -c DNTRare.cpp -o DNTRare.o ID:/RCompile/recent/R/include: not found
and I replicated this (far too many times) on my Windows VM on my Mac.
In the end this boiled down to the presence of our Makevars file which contained only one line:
CXX_STD = CXX14
Deleting fixed the problem and it now compiles just fine. It compiles fine locally, and I am waiting for the response from the win-builder site. I do not anticipate an issue, but it would be useful to understand what was going wrong. I must admit that I have forgotten what aspects of the C++14 standard we are using, but I do know that changing line to
PKG_CXXFLAGS= -std=c++14
which I use in my multicool package gives me a different pain, with the compiler being unable to locate Rccp.h after seeing a #include directive.
# Extracting elements from lists in Rcpp
If you are an R programmer, especially one with a solid background in data structures or with experience in a more traditional object oriented environment, then you probably use lists to mimic the features you might expect from a C-style struct or a class in Java or C++. Retrieving information from a list of lists, or a list of matrices, or a list of lists of vectors is fairly straightforward in R, but you may encounter some compiler error messages in Rcpp if you do not take the right steps.
### Stupid as bro
This will not be a very long article, but I think it is useful to have this information somewhere other than Stack Overflow. Two posts, one from Dirk and one from Romain contain the requisite information.
The List class does not know what type of elements it contains. You have to tell it. That means if you have something like
x = list(a = matrix(1:9, ncol = 3), b = 4)
void Test(List x){
IntegerMatrix a = x["a"];
}
in your C++, then you might get a compiler error complaining about certain things not being overloaded. As Dirk points out in another post (which I cannot find right at this moment), the accessor operator for a List simply returns a SEXP. Rcpp has done a pretty good job of removing the need for us to get our hands dirty with SEXP‘s, but they are still there. If you know (and you should since you are the one writing the code and designing the data structures) that this SEXP actually is an IntegerMatrix then you should cast it as one using the as<T>() function. That is,
void Test(List x){
IntegerMatrix a = as<IntegerMatrix>(x["a"]);
}
### So why does this work?
If you look around the internet, you will see chunks of code like
int b = x["b"];
NumericVector y = x["y"];
which compile just fine. Why does this work? It works because the assignment operator has been overloaded for certain types in Rcpp, and so you will probably find you do not need explicit type coercion. However, it certainly will not hurt to explicitly do so for every assignment, and your code will benefit from doing so.
# Generating pseudo-random variates C++-side in Rcpp
It is well-known that if you are writing simulation code in R you can often gain a performance boost by rewriting parts of your simulation in C++. These days the easiest way to do that of course is to use Rcpp. Simulation usually depends on random variates, and usually great numbers of them. One of the issues that may arise is that your simulation needs to execute on the C++ side of things. For example, if you decide to programme your Metropolis-Hastings algorithm (not technically a simulation I know) in Rcpp, then you are going to need to be able to generate hundreds of thousands, if not millions, of random numbers. You can use Rcpp’s features to call R routines from within Rcpp to do this, e.g.
Function rnorm("rnorm");
rnorm(100, _["mean"] = 10.2, _["sd"] = 3.2 );
(Credit: Dirk Eddelbuettell)
but this has a certain overhead. C++ has had built-in in random number generation functionality since at least the C+11 standard (and probably since the C+0X standard). The random header file provides a Mersenne-Twister uniform random number generator (RNG), a Linear Congruential Generator (LCG), and a Subtract-with-Carry RNG. There is also a variety of standard distributions available, described here.
### Uniform random variates
The ability to generate good quality uniform random variates is essential, and the mt19937 engine provides. The 19937 refers to the Mersenne Prime $$(2^{19937}-1)$$ that this algorithm is based on, and also to its period length. There are four steps required to generate uniform random variates. These are:
1. Include the random header file
2. Construct an mt19937 random number engine, and initialise it with a seed
3. Construct a $$U(0,1)$$ random number generator
4. Use your engine and your uniform random number generator to draw variates
In code we would write
#include <random>
#include <Rcpp.h>
using namespace std;
using namespace Rcpp;
mt19937 mtEngine;
uniform_real_distribution<double> rngU;
//[[Rcpp::export]]
void setSeed(unsigned int seed){
mtEngine = mt19937(seed);
rngU = uniform_real_distribution<>(0.0, 1.0);
}
double runif(void){
return rngU(mtEngine);
}
The function runif can now be called with runif(). Note that the setSet function has been exported so that you can initialize the RNG engine with a seed of your choice.
### How about normal random variates?
It does not require very much more effort to add a normal RNG to your code. We simply add
normal_distribution<double> rngZ;
to our declared variables, and
//[[Rcpp::export]]
void setSeed(unsigned int seed){
mtEngine = mt19937(seed);
rngU = uniform_real_distribution<>(0.0, 1.0);
rngZ = normal_distribution<double>(0.0, 1.0);
}
double rnorm(double mu = 0, double sigma = 1){
return rngZ(mtEngine) * sigma + mu;
}
to our code base. Now rnorm can be called without arguments to get standard ($$N(0,1)$$) random variates, or with a mean, or a standard deviation, or both to get $$N(\mu,\sigma^2)$$ random variates
### Rcpp does it
No doubt someone is going to tell me that Romain and Dirk have thought of this already for you, and that my solution is unnecessary Morris Dancing. However, I think there is merit in knowing how to use the standard C++ libraries.
Please note that I do not usually advocate having global variables such as those in the code above. I would normally make mtEngine, rngU, and rngZ private member variables a class and then either instantiate it using an exported Rcpp function, or export the class and essential functions using an Rcpp module.
Working C++ code and an R test script can be found here in the RNG folder. Enjoy!
# Embracing the new. Is it time to ditch pointers in C++?
This post was originally posted December 22, 2014
Recently I had the opportunity to revisit some research that I did circa 1997-1998, because someone asked me to write a book chapter on the subject. It is an interesting process to go back and look at your old work and apply all the things that you have learned in the intervening time period.
In this case the research relied on some C/C++ simulation programmes that I had written. The simulations, even for small cases, performed hundreds of thousands of iterations to estimate lower bounds and so C++ was a natural choice at the time. R was still a fledgling, and Splus simply was not up to extensive simulation work. Given the nature of these simulations, I still do not think I would use R, even though it is very fast these days.
Simulations, being simulations, rely extensively on random number generation, and of course these programmes were no exception. Re-running the programmes seemed trivial, and of course the compute time had been substantially reduced over the years. This lead me to think that I could now explore some more realistic scenarios. If you, the reader, think I am being deliberately mysterious about my simulations, I am not. It is more that the actual research is a side issue to the problems I want to talk about here. The “more realistic inputs” simply correspond to larger simulated DNA databases, inline with those now maintained by many jurisdictions, and a set of allele frequencies generated from a much larger data set than that I had access to in 1997 with a different set of loci.
There would clearly be no story if something did not happen with the new work. My early work was with databases of 100, 400 and 1,000 individuals. When I expanded this to 5,000 and 10,000 individuals I found that things began to go wrong.
Firstly, the programme began to quit unexpectedly on Windows, and produce segmentation faults when compiled with gcc on Linux. The crashes only happened with the larger database sizes, but strangely in the case where N = 1,000 — where there had previously been no crash. I thought initially that this might be because I had inadvertently hard-coded some of the array dimensions, and that the new data sets, or larger runs where causing problems. Extensive examination of the code did not reveal any irregularities.
Random number generators and all that
I did discover fairly early on that I could no longer rely on George Marsaglia’s multiply-with-carry (MWC) uniform random number generator. The reason for this is that the generator, as coded, relies on integers of certain widths, and integer overflow, or wrapping. I had pretty much abandoned this some years ago when a former MSc student, Dr Alec Zwart discovered that there were irregularities in the distribution of the bits. Using a random number a bit at a time is very useful when simulating breeding populations — which is something else I do quite often.
### The Mersenne Twister
The Mersenne Twister has been around since 1997, and again advances in computing have made the computing overhead it incurs relatively negligible. My initial switch to a Mersenne Twister uniform random number generator (RNG) was through an implementation distributed by Agner Fog. This implementation has served me well for quite some time, and I have used it extensively. It sadly was not the case this time. I could not get Visual Studio 2013 to understand some of the enums, and faking it caused me problems elsewhere. I am sure there is nothing wrong with this implementation, but I certainly could not get to to work this time.
I discovered by reading around on the web that random number generation has become part of the new C+11 standard, and that it is fairly easy to get a Mersenne Twister random number generator. Most implementations start with a uniform integer, or long integer, random number stream and then wrap different classes around this stream. C++ is no exception
#include <random>
using namespace std;
static mt19937 mtEngine;
uniform_real_distribution<double> rngU;;
void init_gen(unsigned int seed){
mtEngine = mt19937(seed);
rngU = uniform_real_distribution<>(0.0, 1.0);
}
double runif(void){
return rngU(mtEngine);
}
I have used a static variable to store the stream in my implementation but there is no requirement to do this.
### Nothing is ever normal
I have also, for quite some time, used Chris Wallace’s Fastnorm code for very fast generation of standard normal random variates. However, I found that this too appeared to be causing me problems, especially when I changed operating systems. My programming philosophy these days is that my work should really be portable to any mainstream operating system (Windows, Linux, OS X), especially since I almost never write GUI code any more. Running on both Windows and Linux is useful, because when I want to run really big simulations I often will flick the code over to our cluster which strangely enough does not run on Windows – who knew?
It turns out that the C+11 also has a normal random number generator. I have done very little research to find out what method is used, but my guess is that it is either an inverse CDF method, or at worst a Box-Muller based method. Adding a standard normal generator is easy
static mt19937 mtEngine;
static normal_distribution<double> rngZ;
void init_gen(unsigned int seed){
mtEngine = mt19937(seed);
rngZ = normal_distribution<double>(0.0, 1.0);
}
double snorm(void){
return rngZ(mtEngine);
}
So that will work right?
After all of these changes, which do not seem substantial but bear in mind they took me a long time to get to them, everything was stable right? Well no, I was still getting a crash when N = 10,000, and this was not happening when I started the simulation with that case.
Java to the rescue
I decided, probably incorrectly with hindsight, that I must be making some sort of stupid mistake with allocating memory and releasing it. I decided to take that completely out of the equation by switching to Java. A port from C++ to Java is actually a relatively painless thing to do, and I had a working version of my programme in a couple of hours. This was made easier by the fact that my colleague Duncan Taylor had ported Ross Ihaka’s C code, ripped out of R, for gamma random number generation (yes I need that too), and with a little tweaking I had it running in my programme as well. The Java port let me recognize that I had done some silly things in my original programme, such as storing an entire bootstrap sample before processing it and in the process chewing up CPU and memory time with needless copying. And after a little more hacking (like three days) it ran to my satisfaction and all simulations duly completed with about three hours of run time.
Java has some pretty cool ideas, and it is a fun and easy language to programme in. However, my failure to get the C++ working was weighing heavily on my mind. I like to think that I am a hell of a lot better C++ programmer than a Java programmer, and I dislike the idea that I might be writing slower programmes. I also do not think Java is currently well-suited to scientific programming. I am sure some readers will tell me this is no longer true, but access to a well accepted scientific code library is missing, and although there are many good projects, a lot of them are one-man efforts, or have been abandoned. A good example of the latter is the COLT library from CERN.
### Out with pointers
I thought about this for sometime, and eventually it occurred to me that I could write a C++ programme that looked like a Java programme — that is, no pointers. C++ purists might shudder, but if you think of Java as simplified C++, then the concept is not so strange. Java treats every object argument in a function as being a reference. C++ can replicate this behaviour very easily by simply using its reference notation. The big trade-off was that I was going to also have to drop the pointers I used for dynamic allocation of memory. Java sort of fudges this as far as I can tell, because although the scalar types (int, double, boolean and others) are generally not treated as references, I think native arrays of them are, e.g. int[] or double[].
### The STL
The Standard Template Library (STL) provides a set of low-overhead C++ template container classes, such as lists, vectors, maps and queues. These classes can contain themselves, and they can be dynamically resized at execution time. I have avoided using them in this way, especially when writing code to be very fast. However, I am fairly sure my colleague Brendon Brewer, who is much younger and a more modern C++ programmer, has told me that he never uses pointers. Given I had just finished for the year, this seemed like an ideal quick summer project.
Another couple of days recoding got me to running mode, and now it is time to reveal probably what was the issue all along. Remember when I said I did this:
double runif(void){
return rngU(mtEngine);
}
double runif(void){
return mtEngine() / 4294967295.0;
}
The large constant there is $$2^{32}-1$$, the largest unsigned integer that can be represented on a 32-bit CPU. The mt19997 function mtEngine() returns an unsigned 32-bit integer, but for reasons that still escape me this piece of code:
return (int)floor(b * runif());
which should return a number between 0 and b-1 inclusive, was returning b, thereby causing the programme to address unallocated memory, and hence the crash. The reason it took so long to happen is that the random number stream had to be used for a very long time. Using the uniform_read_distribution class stopped this from happening.
So did I take a performance hit? I cannot say equivocally without going back to my original programme and adjusting the RNGs, but it appears that the C++ version actually takes about 10 minutes longer than the Java version. This is a very poor comparison, because the Java version is running on my Windows PC (core i7, 32GB RAM, HDD), and the C++ version is running on my Macbook (core i7, 16GB RAM, SSD), but also because the new C++ version is “more objected-oriented” than the Java version. That is, firstly I used native arrays, and arrays of arrays in Java, like int[][] and double[][]. If I had used Java ArrayLists (why would you), it might have been a different story. Secondly, there is a bit more OO design and architecture in the C++ version, including things like operator overloading and more extensive use of objects to represent the input data. All of these things cost, and they are probably costing a little too much in execution time, although they pay off in readability and usability, especially in well designed IDEs like Visual Studio and Xcode. Finally, my original C++ programme never finished, so I have no idea actually how long it would take to do the same set of simulations. I think in this case I will take a programme that works in three hours over a programme that quickly crashes in one.
# Forensic anthropologists — lend me your data
Please note: This is not a new post, but a restored post from August that I lost in a WordPress upgrade
## Friends, Romans, forensic anthropologists, lend me your data
I have been reading the Journal of Forensic Sciences (JFS) over the last couple of days to see what sort of research is being done in forensic science and to see how many studies are using statistics to make, or to reinforce their conclusions. The answer the the second question is “quite a few.” There has been quite significant adoption of multivariate analysis, most commonly PCA, in a wide variety of forensic disciplines fields and that is very pleasing to see.
## Forensic anthropology
Anthropologists, and in particular forensic anthropologists have long been heavy users of statistical methodology. Many studies use linear regression, linear discriminant analysis, principal component analysis and logistic regression. The well-known and widely used forensic anthropology computer programme FORDISC uses LDA. It is interesting to me to see the appearance of some newer/different classification techniques such as k-nearest neighbour, quadratic discriminant analysis, classification and regression trees, support vector machines, random forests, and neural networks.
Forensic anthropology features heavily in JFS, and the papers contain a large amount of statistical analysis of data. The focus of the articles is often on classification of remains in to age, gender, or racial groups, or on age estimation. The articles are generally quite interesting and well written.
Show me the data
However, there is almost never any provision of the raw data, and my experience whilst writing my data analysis book was that there was no response to my requests for data from even a single anthropologist of the dozen or so that I wrote to. Not even a polite “sorry but we are unable to release the data.” I understand that in all scientific disciplines data can be expensive, in terms of time or money, to collect, and so a researcher might justifiably want to retain a data set as long as possible to get as much research value from it as possible. However, surely there must be a point where the data could be released in the public domain? The University of Tennessee Knoxville does have a forensic anthropology databank, but, at least from the webpage, it seems that there is an emphasis on deposits rather than withdrawals.
## A challenge
I therefore issue a challenge to the forensic anthropology community – release some of your data into the wild. It will benefit your discipline as it will others, and you might find your work cited more as people give you credit for producing the data that they are applying their novel techniques to.
# I am an applied statistician
Today brings a very nice blog post from Rafael Irizarry on being pragmatic in applied statistics rather than rigidly/religiously Bayesian or Frequentist.
Does this article reverse or contradict my thinking in forensic science? Not really. I am a strong proponent of Bayesian thinking in that field. However, in the shorter term I would be happier if practitioners simply had a better understanding of the Frequentist interpretation issues. As a statistician I depend on the collaboration of forensic scientists for both the problems and the data. Telling scientists that everything they are doing is incorrect is generally unhelpful. It is more productive to collaborate and make it better.
|
2018-05-20 23:16:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3718967139720917, "perplexity": 1208.765794035039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863811.3/warc/CC-MAIN-20180520224904-20180521004904-00573.warc.gz"}
|
http://en.m.wikipedia.org/wiki/Fermi%E2%80%93Dirac_statistics
|
# Fermi–Dirac statistics
In quantum statistics, a branch of physics, Fermi–Dirac statistics describes distribution of particles in a system comprising many identical particles that obey the Pauli exclusion principle. It is named after Enrico Fermi and Paul Dirac, who each discovered it independently, although Enrico Fermi defined the statistics earlier than Paul Dirac.[1][2]
Fermi–Dirac (F–D) statistics applies to identical particles with half-odd-integer spin in a system in thermal equilibrium. Additionally, the particles in this system are assumed to have negligible mutual interaction. This allows the many-particle system to be described in terms of single-particle energy states. The result is the F–D distribution of particles over these states and includes the condition that no two particles can occupy the same state, which has a considerable effect on the properties of the system. Since F–D statistics applies to particles with half-integer spin, these particles have come to be called fermions. It is most commonly applied to electrons, which are fermions with spin 1/2. Fermi–Dirac statistics is a part of the more general field of statistical mechanics and uses the principles of quantum mechanics.
## History
Before the introduction of Fermi–Dirac statistics in 1926, understanding some aspects of electron behavior was difficult due to seemingly contradictory phenomena. For example, the electronic heat capacity of a metal at room temperature seemed to come from 100 times fewer electrons than were in the electric current.[3] It was also difficult to understand why the emission currents, generated by applying high electric fields to metals at room temperature, were almost independent of temperature.
The difficulty encountered by the electronic theory of metals at that time was due to considering that electrons were (according to classical statistics theory) all equivalent. In other words it was believed that each electron contributed to the specific heat an amount on the order of the Boltzmann constant k. This statistical problem remained unsolved until the discovery of F–D statistics.
F–D statistics was first published in 1926 by Enrico Fermi[1] and Paul Dirac.[2] According to an account, Pascual Jordan developed in 1925 the same statistics which he called Pauli statistics, but it was not published in a timely manner.[4] According to Dirac, it was first studied by Fermi, and Dirac called it Fermi statistics and the corresponding particles fermions.[5]
F–D statistics was applied in 1926 by Fowler to describe the collapse of a star to a white dwarf.[6] In 1927 Sommerfeld applied it to electrons in metals[7] and in 1928 Fowler and Nordheim applied it to field electron emission from metals.[8] Fermi–Dirac statistics continues to be an important part of physics.
↑Jump back a section
## Fermi–Dirac distribution
For a system of identical fermions, the average number of fermions in a single-particle state $i$, is given by the Fermi–Dirac (F–D) distribution,[9]
$\bar{n}_i = \frac{1}{e^{(\epsilon_i-\mu) / k T} + 1}$
where k is Boltzmann's constant, T is the absolute temperature, $\epsilon_i \$ is the energy of the single-particle state $i$, and μ is the total chemical potential. At zero temperature, μ is equal to the Fermi energy plus the potential energy per electron. For the case of electrons in a semiconductor, $\mu\$ is typically called the Fermi level or electrochemical potential.[10][11]
The F–D distribution is only valid when the fermions do not significantly interact with each other, so that the addition of a fermion does not disrupt the values of $\epsilon_i \$. Since the F–D distribution was derived using the Pauli exclusion principle, which allows at most one electron to occupy each possible state, a result is that $0 < \bar{n}_i < 1$ .[12]
(Click on a figure to enlarge.)
### Distribution of particles over energy
Fermi function F($\epsilon \$) vs. energy $\epsilon \$, with μ = 0.55 eV and for various temperatures in the range 50K ≤ T ≤ 375K.
The above Fermi–Dirac distribution gives the distribution of identical fermions over single-particle energy states, where no more than one fermion can occupy a state. Using the F–D distribution, one can find the distribution of identical fermions over energy, where more than one fermion can have the same energy.[14]
The average number of fermions with energy $\epsilon_i \$ can be found by multiplying the F–D distribution $\bar{n}_i \$ by the degeneracy $g_i \$ (i.e. the number of states with energy $\epsilon_i \$ ),[15]
\begin{alignat}{2} \bar{n}(\epsilon_i) & = g_i \ \bar{n}_i \\ & = \frac{g_i}{e^{(\epsilon_i-\mu) / k T} + 1} \\ \end{alignat}
When $g_i \ge 2 \$, it is possible that $\ \bar{n}(\epsilon_i) > 1$ since there is more than one state that can be occupied by fermions with the same energy $\epsilon_i \$.
When a quasi-continuum of energies $\epsilon \$ has an associated density of states $g( \epsilon ) \$ (i.e. the number of states per unit energy range per unit volume [16]) the average number of fermions per unit energy range per unit volume is,
$\bar { \mathcal{N} }(\epsilon) = g(\epsilon) \ F(\epsilon)$
where $F(\epsilon) \$ is called the Fermi function and is the same function that is used for the F–D distribution $\bar{n}_i$,[17]
$F(\epsilon) = \frac{1}{e^{(\epsilon-\mu) / k T} + 1}$
so that,
$\bar { \mathcal{N} }(\epsilon) = \frac{g(\epsilon)}{e^{(\epsilon-\mu) / k T} + 1}$ .
↑Jump back a section
## Quantum and classical regimes
The classical regime, where Maxwell–Boltzmann (M–B) statistics can be used as an approximation to F–D statistics, is found by considering the situation that is far from the limit imposed by the Heisenberg uncertainty principle for a particle's position and momentum. Using this approach, it can be shown that the classical situation occurs if the concentration of particles corresponds to an average interparticle separation $\bar{R}$ that is much greater than the average de Broglie wavelength $\bar{\lambda}$ of the particles,[18]
$\bar{R} \ \gg \ \bar{\lambda} \ \approx \ \frac{h}{\sqrt{3mkT}}$
where $h$ is Planck's constant, and $m$ is the mass of a particle.
For the case of conduction electrons in a typical metal at T=300K (i.e. approximately room temperature), the system is far from the classical regime since $\bar{R} \approx \bar{\lambda}/25$ . This is due to the small mass of the electron and the high concentration (i.e. small $\bar{R}$) of conduction electrons in the metal. Thus F–D statistics is needed for conduction electrons in a typical metal.[18]
Another example of a system that is not in the classical regime is the system that consists of the electrons of a star that has collapsed to a white dwarf. Although the white dwarf's temperature is high (typically T=10,000K on its surface[19]), its high electron concentration and the small mass of each electron precludes using a classical approximation, and again F–D statistics is required.[6]
↑Jump back a section
## Three derivations of the Fermi–Dirac distribution
### Derivation starting with grand canonical distribution
The Fermi-Dirac distribution, which applies only to a quantum system of non-interacting fermions, is easily derived from the grand canonical ensemble. In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperature T and chemical potential µ fixed by the reservoir).
Due to the non-interacting quality, each available single-particle level (with energy level ϵ) forms a separate thermodynamic system in contact with the reservoir. In other words, each single-particle level is a separate, tiny grand canonical ensemble. By the Pauli exclusion principle there are only two possible microstates for the single-particle level: no particle (energy E=0), or one particle (energy E=ϵ). The resulting partition function for that single-particle level therefore has just two terms:
\begin{align}\mathcal Z & = \exp(0(\mu - 0)/k_B T) + \exp(1(\mu - \epsilon)/k_B T) \\ & = 1 + \exp((\mu - \epsilon)/k_B T)\end{align}
and the average particle number for that single-particle substate is given by
$\langle N\rangle = k_B T \frac{1}{\mathcal Z} \left(\frac{\partial \mathcal Z}{\partial \mu}\right)_{V,T} = \frac{1}{\exp((\epsilon-\mu)/k_B T)+1}$
This result applies for each single-particle level, and thus gives the exact Fermi-Dirac distribution for the entire state of the system.
The variance in particle number (due to thermal fluctuations) may also be derived:
$\langle (\Delta N)^2 \rangle = k_B T \left(\frac{d\langle N\rangle}{d\mu}\right)_{V,T} = \langle N\rangle (1 - \langle N\rangle)$
This quantity is important in transport phenomena such as the Mott relations for electrical conductivity and thermoelectric coefficient for an electron gas[20]. The ability of an energy level to contribute to transport phenomena is proportional to $\langle (\Delta N)^2 \rangle$.
### Derivations starting with canonical distribution
It is also possible to derive approximate Fermi-Dirac statistics in the canonical ensemble. These derivations are lengthy and only yield the above results in the asymptotic limit of a large number of particles. The reason for the inaccuracy is that the total number of fermions is conserved in the canonical ensemble, which contradicts the implication in Fermi-Dirac statistics that each energy level is filled independently from the others (which would require the number of particles to be flexible).
↑Jump back a section
↑Jump back a section
## References
1. Reif, F. (1965). Fundamentals of Statistical and Thermal Physics. McGraw–Hill. ISBN 978-0-07-051800-1.
2. Blakemore, J. S. (2002). Semiconductor Statistics. Dover. ISBN 978-0-486-49502-6.
3. Kittel, Charles (1971). Introduction to Solid State Physics (4th ed.). New York: John Wiley & Sons. ISBN 0-471-14286-7. OCLC 300039591.
↑Jump back a section
## Footnotes
1. ^ a b Fermi, Enrico (1926). "Sulla quantizzazione del gas perfetto monoatomico". Rendiconti Lincei (in Italian) 3: 145–9., translated as Zannoni, Alberto (transl.) (1999-12-14). "On the Quantization of the Monoatomic Ideal Gas". arXiv:cond-mat/9912229 [cond-mat.stat-mech].
2. ^ a b Dirac, Paul A. M. (1926). "On the Theory of Quantum Mechanics". Proceedings of the Royal Society, Series A 112 (762): 661–77. Bibcode:1926RSPSA.112..661D. doi:10.1098/rspa.1926.0133. JSTOR 94692.
3. ^ (Kittel 1971, pp. 249–50)
4. ^ "History of Science: The Puzzle of the Bohr–Heisenberg Copenhagen Meeting". Science-Week (Chicago) 4 (20). 2000-05-19. OCLC 43626035. Retrieved 2009-01-20.
5. ^ Dirac, Paul A. M. (1967). Principles of Quantum Mechanics (revised 4th ed.). London: Oxford University Press. pp. 210–1. ISBN 978-0-19-852011-5.
6. ^ a b Fowler, Ralph H. (December 1926). "On dense matter". Monthly Notices of the Royal Astronomical Society 87: 114–22. Bibcode:1926MNRAS..87..114F.
7. ^ Sommerfeld, Arnold (1927-10-14). "Zur Elektronentheorie der Metalle". Naturwissenschaften 15 (41): 824–32. Bibcode:1927NW.....15..825S. doi:10.1007/BF01505083.
8. ^ Fowler, Ralph H.; Nordheim, Lothar W. (1928-05-01). "Electron Emission in Intense Electric Fields" (PDF). Proceedings of the Royal Society A 119 (781): 173–81. Bibcode:1928RSPSA.119..173F. doi:10.1098/rspa.1928.0091. JSTOR 95023.
9. ^ (Reif 1965, p. 341)
10. ^ (Blakemore 2002, p. 11)
11. ^ Kittel, Charles; Kroemer, Herbert (1980). Thermal Physics (2nd ed.). San Francisco: W. H. Freeman. p. 357. ISBN 978-0-7167-1088-2. More than one of |authorlink=, |authorlink=, and |author-link= specified (help)
12. ^ Note that $\bar{n}_i$ is also the probability that the state $i$ is occupied, since no more than one fermion can occupy the same state at the same time and $0 < \bar{n}_i < 1$.
13. ^ (Kittel 1971, p. 245, Figs. 4 and 5)
14. ^ These distributions over energies, rather than states, are sometimes called the Fermi–Dirac distribution too, but that terminology will not be used in this article.
15. ^ Leighton, Robert B. (1959). Principles of Modern Physics. McGraw-Hill. p. 340. ISBN 978-0-07-037130-9.
Note that in Eq. (1), $n(\epsilon) \,$ and $n_s \,$ correspond respectively to $\bar{n}_i$ and $\bar{n}(\epsilon_i)$ in this article. See also Eq. (32) on p. 339.
16. ^ (Blakemore 2002, p. 8)
17. ^ (Reif 1965, p. 389)
18. ^ a b (Reif 1965, pp. 246–8)
19. ^ Mukai, Koji; Jim Lochner (1997). "Ask an Astrophysicist". NASA's Imagine the Universe. NASA Goddard Space Flight Center. Archived from the original on 2009-01-20.
20. ^ doi:10.1103/PhysRev.181.1336
This citation will be automatically completed in the next few minutes. You can jump the queue or expand by hand
21. ^ (Reif 1965, pp. 340–2)
22. ^ a b (Reif 1965, pp. 203–6)
23. ^ See for example, Derivative - Definition via difference quotients, which gives the approximation f(a+h) ≈ f(a) + f '(a) h .
24. ^ (Reif 1965, pp. 341–2) See Eq. 9.3.17 and Remark concerning the validity of the approximation.
25. ^ By definition, the base e antilog of A is eA.
26. ^ (Blakemore 2002, pp. 343–5)
↑Jump back a section
|
2013-05-19 21:58:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 116, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.913493275642395, "perplexity": 909.6148938284414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698090094/warc/CC-MAIN-20130516095450-00080-ip-10-60-113-184.ec2.internal.warc.gz"}
|