https://www.tutorvista.com/content/math/frequency-table/
# Frequency Table
Frequency refers to repetition. In statistics, the frequency of a particular variable or observation is the number that indicates how many times the variable or observation appears in the given data.
When the data is scattered, it is hard to see at a glance how often a particular observation occurs. Frequency tables were introduced to make this convenient: a frequency table lists each element with its frequency written in front of it.
It is used to represent, systematically, the variables that occur one or more times in a sample.
## Construction of Frequency Table
Case 1: Ungrouped Frequency Table
An ungrouped frequency table does not use class intervals; it is used when the data consists of individual values in a sample. Each observation is noted with its corresponding tally marks, as shown in the following example.
Let us consider the following data:
2, 3, 3, 5, 7, 9, 7, 8, 9, 9, 2, 5, 3, 9, 3, 2, 5, 9, 8, 7, 3, 5, 7, 9, 8, 5, 2, 3
The frequency table for the above data is:

| Observation | Frequency |
|---|---|
| 2 | 4 |
| 3 | 6 |
| 5 | 5 |
| 7 | 4 |
| 8 | 3 |
| 9 | 6 |
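These frequencies can also be tallied programmatically. A minimal sketch (an illustration added here, not part of the original lesson) using Python's standard library:

```python
from collections import Counter

# The observations from the example above.
data = [2, 3, 3, 5, 7, 9, 7, 8, 9, 9, 2, 5, 3, 9, 3, 2, 5, 9, 8, 7,
        3, 5, 7, 9, 8, 5, 2, 3]

# Counter maps each distinct observation to its frequency.
freq = Counter(data)

# Print an ungrouped frequency table, observations in ascending order.
for value in sorted(freq):
    print(f"{value}\t{freq[value]}")
```

The frequencies sum to 28, the number of observations.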
Case 2: Grouped Frequency Table
A grouped frequency table groups the data into class intervals. The observations lying in each class interval are noted with corresponding tally marks, and their frequency is written in front of each interval. Refer to the following example.
Construct a grouped frequency table for the following data:
8, 10, 43, 15, 22, 34, 23, 45, 28, 49, 30, 21, 29, 17, 33, 39, 41, 48, 33, 25
Here,
Lowest observation = 8
Highest observation = 49
Difference = 49 - 8 = 41
If we take 10 as the width of class interval, then
Number of classes = $\frac{41}{10}$ = 4.1
Rounding up, 5 classes are required.
The required frequency table, taking class intervals of width 10 starting at the lowest observation, is:

| Class Interval | Frequency |
|---|---|
| 8 - 17 | 4 |
| 18 - 27 | 4 |
| 28 - 37 | 6 |
| 38 - 47 | 4 |
| 48 - 57 | 2 |
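The same construction can be sketched in code (an illustration, assuming class intervals of width 10 that start at the lowest observation):

```python
import math

data = [8, 10, 43, 15, 22, 34, 23, 45, 28, 49, 30, 21, 29, 17,
        33, 39, 41, 48, 33, 25]
width = 10
low, high = min(data), max(data)                 # 8 and 49

# ceil(41 / 10) = 5 classes, as computed above.
num_classes = math.ceil((high - low) / width)

# Tally each observation into intervals [8, 17], [18, 27], ..., [48, 57].
counts = [0] * num_classes
for x in data:
    counts[(x - low) // width] += 1

for i, c in enumerate(counts):
    lo, hi = low + i * width, low + (i + 1) * width - 1
    print(f"{lo}-{hi}\t{c}")
```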
https://radar.inria.fr/report/2014/coati/uid50.html
##### COATI - 2014
## Section: New Results
### Structural Graph Theory
Participants : Jean-Claude Bermond, Frédéric Havet, Nicolas Nisse, Ana Karolinna Maia de Oliveira, Stéphane Pérennes.
More information on several results presented in this section may be found in the PhD thesis of A. K. Maia de Oliveira [16] and in the Habilitation thesis of N. Nisse [17].
#### Graph colouring and applications
Graph colouring is a central problem in graph theory and it has a huge number of applications in various scientific domains (telecommunications, scheduling, bio-informatics, ...). We mainly study graph colouring problems that model resource allocation problems.
##### Backbone colouring
A well-known channel assignment problem is the following: we are given a graph $G$, whose vertices correspond to transmitters, together with an edge-weighting $w$. The weight of an edge corresponds to the minimum separation between the channels on its endvertices needed to avoid interference. (If there is no edge, no separation is required: the transmitters do not interfere.) We need to assign positive integers (corresponding to channels) to the vertices so that for every edge $e$ the channels assigned to its endvertices differ by at least $w\left(e\right)$. The goal is to minimize the largest integer used, which corresponds to minimizing the span of the used bandwidth. We studied a particular, yet quite general, case, called backbone colouring, in which there are only two levels of interference. So we are given a graph $G$ and a subgraph $H$, called the backbone. Two adjacent vertices in $H$ must get integers at least $q$ apart, while adjacent vertices in $G$ must get integers at distance at least 1. The minimum span in this case is called the $q$-backbone chromatic number and is denoted $BB{C}_{q}\left(G,H\right)$. In [30] and [45], we focus on the case when $G$ is planar and $H$ is a forest. In [30], we give a series of NP-hardness results as well as upper bounds for $BB{C}_{q}\left(G,H\right)$, depending on the type of the forest (matching, galaxy, spanning tree). We also discuss a circular version of the problem. In [45], we give some upper bounds when $G$ is planar and has no cycles of length 4 and 5, and $H$ is a tree, and we relate those results to the celebrated Steinberg's Conjecture stating that every planar graph with no cycles of length 4 or 5 is 3-colourable.
In [29] , we consider the list version of this problem (in which each vertex is given a particular list of admissible colours), with particular focus on colours in ${𝒵}_{p}$ – this problem is closely related to the problem of circular choosability. We first prove that the list circular $q$-backbone chromatic number of a graph is bounded by a function of the list chromatic number. We then consider the more general problem in which each edge is assigned an individual distance between its endpoints, and provide bounds using the Combinatorial Nullstellensatz. Through this result and through structural approaches, we achieve good bounds when both the graph and the backbone belong to restricted families of graphs.
##### On-line colouring graphs with few ${P}_{4}$s
Various on-line colouring procedures are used. The most widespread one is the greedy procedure, which results in a greedy colouring. Given a graph $G=\left(V,E\right)$, a greedy colouring of $G$ is a proper colouring such that, for every two colours $i<j$, every vertex of $V\left(G\right)$ coloured $j$ has a neighbour with colour $i$. A second optimization procedure consists in considering the current colouring from time to time and freeing some colour when possible: if every vertex of a colour class can take another colour that is not used by its neighbours, we can recolour each vertex of that class with such a colour. This procedure results in a b-colouring of the graph. A b-colouring of a graph $G$ is a proper colouring such that every colour class contains a vertex which is adjacent to at least one vertex in every other colour class. One performance measure of such procedures is the maximum number of colours they could possibly use. The greatest $k$ such that $G$ has a greedy colouring with $k$ colours is the Grundy number of $G$. The greatest integer $k$ for which there exists a b-colouring of $G$ with $k$ colours is its b-chromatic number. Determining the Grundy number and the b-chromatic number of a graph are NP-hard problems in general. For a fixed $q$, the $\left(q;q-4\right)$-graphs are the graphs in which no set of at most $q$ vertices induces more than $q-4$ distinct induced ${P}_{4}$s (paths of order 4). In [24], we obtain polynomial-time algorithms to determine the Grundy number and the b-chromatic number of $\left(q;q-4\right)$-graphs, for a fixed $q$. They generalize previous results obtained for cographs and ${P}_{4}$-sparse graphs, classes strictly contained in the $\left(q;q-4\right)$-graphs.
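As a generic illustration of the greedy procedure described above (a first-fit sketch, not the algorithms of [24]): each vertex, in turn, receives the smallest colour not used by its already-coloured neighbours, so the resulting number of colours depends on the visiting order, and its maximum over all orders is the Grundy number.

```python
def greedy_colouring(adj, order):
    """First-fit colouring: each vertex gets the smallest colour
    not used by its already-coloured neighbours (colours are 1, 2, ...)."""
    colour = {}
    for v in order:
        used = {colour[u] for u in adj[v] if u in colour}
        c = 1
        while c in used:
            c += 1
        colour[v] = c
    return colour

# A path on four vertices (a P4): a - b - c - d
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}

print(greedy_colouring(adj, ["a", "b", "c", "d"]))  # uses 2 colours
print(greedy_colouring(adj, ["a", "d", "b", "c"]))  # uses 3 colours
```

For the $P_4$, the second order colours the two endpoints first, forcing colour 3 on an interior vertex; 3 is precisely the Grundy number of $P_4$.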
##### Weighted colouring
We also studied weighted colouring, which models various problems of shared resources allocation. Given a vertex-weighted graph $G$ and a (proper) $r$-colouring $c=\left\{{C}_{1},...,{C}_{r}\right\}$ of $G$, the weight of a colour class ${C}_{i}$ is the maximum weight of a vertex coloured $i$, and the weight of $c$ is the sum of the weights of its colour classes. The objective of the Weighted Colouring Problem is, given a vertex-weighted graph $G$, to determine the minimum weight of a proper colouring of $G$, that is, its weighted chromatic number. In [21], [33], we prove that the Weighted Colouring Problem admits a version of Hajós' Theorem, and so we show a necessary and sufficient condition for the weighted chromatic number of a vertex-weighted graph $G$ to be at least $k$, for any positive real $k$. The Weighted Colouring Problem remains NP-complete in some particular graph classes such as bipartite graphs. In their seminal paper, Guan and Zhu asked whether the weighted chromatic number of bounded tree-width graphs (partial $k$-trees) can be computed in polynomial time. Surprisingly, the time-complexity of computing this parameter in trees is still open. We show in [21] that, assuming the Exponential Time Hypothesis (3-SAT cannot be solved in sub-exponential time), the best algorithm to compute the weighted chromatic number of $n$-node trees has time-complexity ${n}^{\Theta \left(\log n\right)}$. Our result mainly relies on proving that, when computing an optimal proper weighted colouring of a graph $G$, it is hard to combine colourings of its connected components, even when $G$ is a forest.
##### Inducing proper colourings
Frequently, the proper colouring of the graph must be induced by some other parameters that a vertex can compute locally, for example by looking at the labels assigned to its incident edges or at their orientations.
For a connected graph $G$ of order $|V\left(G\right)|\ge 3$ and a $k$-labelling $c:E\left(G\right)\to \left\{1,2,...,k\right\}$ of the edges of $G$, the code of a vertex $v$ of $G$ is the ordered $k$-tuple $\left({\ell }_{1},{\ell }_{2},...,{\ell }_{k}\right)$, where ${\ell }_{i}$ is the number of edges incident with $v$ that are labelled $i$. The $k$-labelling $c$ is detectable if every two adjacent vertices of $G$ have distinct codes. The minimum positive integer $k$ for which $G$ has a detectable $k$-labelling is the detection number $det\left(G\right)$ of $G$. In [31], we show that it is NP-complete to decide if the detection number of a cubic graph is 2. We also show that the detection number of every bipartite graph of minimum degree at least 3 is at most 2. Finally, we give a sufficient condition for a cubic graph to have detection number 3.
An orientation of a graph $G$ is a digraph $D$ obtained from $G$ by replacing each edge by exactly one of the two possible arcs with the same endvertices. For each $v\in V\left(G\right)$, the indegree of $v$ in $D$, denoted by ${d}_{D}^{-}\left(v\right)$, is the number of arcs with head $v$ in $D$. An orientation $D$ of $G$ is proper if ${d}_{D}^{-}\left(u\right)\ne {d}_{D}^{-}\left(v\right)$, for all $uv\in E\left(G\right)$. The proper orientation number of a graph $G$, denoted by $po\left(G\right)$, is the minimum of the maximum indegree over all its proper orientations. In [32] , [44] , we prove that $po\left(G\right)\le \left(\Delta \left(G\right)+\sqrt{\Delta \left(G\right)}\right)/2+1$ if $G$ is a bipartite graph, and $po\left(G\right)\le 4$ if $G$ is a tree. It is well-known that $po\left(G\right)\le \Delta \left(G\right)$, for every graph $G$. However, we prove that deciding whether $po\left(G\right)\le \Delta \left(G\right)-1$ is already an $NP$-complete problem on graphs with $\Delta \left(G\right)=k$, for every $k\ge 3$. We also show that it is NP-complete to decide whether $po\left(G\right)\le 2$, for planar subcubic graphs $G$. Moreover, we prove that it is NP-complete to decide whether $po\left(G\right)\le 3$, for planar bipartite graphs $G$ with maximum degree 5.
#### Directed graphs
Graph theory can be roughly partitioned into two branches: the areas of undirected graphs and directed graphs (digraphs). Even though both areas have numerous important applications, for various reasons, undirected graphs have been studied much more extensively than directed graphs. One of the reasons is that many problems for digraphs are much more difficult than their analogues for undirected graphs.
##### Finding a subdivision of a digraph
One of the cornerstones of modern (undirected) graph theory is the minor theory of Robertson and Seymour. Unfortunately, we cannot expect an equivalent for directed graphs. Minor theory implies in particular that, for any fixed $F$, detecting a subdivision of $F$ in an input graph $G$ can be performed in polynomial time by the Robertson and Seymour linkage algorithm. In contrast, the analogous subdivision problem for digraphs can be either polynomial-time solvable or NP-complete, depending on the fixed digraph $F$. In [16], a number of examples of polynomial instances, several NP-completeness proofs, as well as a number of conjectures and open problems are given. In addition, it is conjectured that, for every integer $k$ greater than 1, the directed cycles of length at least $k$ have the Erdős-Pósa Property: for every $n$, there exists an integer ${t}_{n}$ such that for every digraph $D$, either $D$ contains $n$ disjoint directed cycles of length at least $k$, or there is a set $T$ of ${t}_{n}$ vertices that meets every directed cycle of length at least $k$. This generalizes a celebrated result of Reed, Robertson, Seymour and Thomas, which is the case $k=2$ of this conjecture. We prove the conjecture for $k=3$. We also show that the directed $k$-Linkage problem is polynomial-time solvable for digraphs with circumference at most 2. From these two results, we deduce that if $F$ is the disjoint union of directed cycles of length at most 3, then one can decide in polynomial time if a digraph contains a subdivision of $F$.
##### The complexity of finding arc-disjoint branching flows
The concept of arc-disjoint flows in networks is a very general framework within which many well-known and important problems can be formulated. In particular, the existence of arc-disjoint branching flows, that is, flows which send one unit of flow from a given source $s$ to all other vertices, generalizes the concept of arc-disjoint out-branchings (spanning out-trees) in a digraph. A pair of out-branchings ${B}_{s,1}^{+}$, ${B}_{s,2}^{+}$ from a root $s$ in a digraph $D=\left(V,A\right)$ on $n$ vertices corresponds to arc-disjoint branching flows ${x}_{1}$, ${x}_{2}$ (the arcs carrying flow in ${x}_{i}$ are those used in ${B}_{s,i}^{+}$, $i=1,2$) in the network that we obtain from $D$ by giving all arcs capacity $n-1$. It is then a natural question to ask how much we can lower the capacities on the arcs and still have, say, two arc-disjoint branching flows from the given root $s$. In [46], we prove that for every fixed integer $k\ge 2$ it is
• an NP-complete problem to decide whether a network $𝒩=\left(V,A,u\right)$ where ${u}_{ij}=k$ for every arc $ij$ has two arc-disjoint branching flows rooted at $s$.
• a polynomial problem to decide whether a network $𝒩=\left(V,A,u\right)$ on $n$ vertices and ${u}_{ij}=n-k$ for every arc $ij$ has two arc-disjoint branching flows rooted at $s$.
The algorithm for the latter result generalizes the polynomial algorithm, due to Lovász, for deciding whether a given input digraph has two arc-disjoint out-branchings rooted at a given vertex. Finally, we prove that under the so-called Exponential Time Hypothesis (ETH), for every $ϵ>0$ and for every $k\left(n\right)$ with ${\left(\log n\right)}^{1+ϵ}\le k\left(n\right)\le \frac{n}{2}$ (and such that for every large $i$ we have $k\left(n\right)=i$ for some $n$) there is no polynomial algorithm for deciding whether a given digraph contains two arc-disjoint branching flows from the same root so that no arc carries flow larger than $n-k\left(n\right)$.
##### Splitting a tournament into two subtournaments with given minimum outdegree
A $\left({k}_{1},{k}_{2}\right)$-outdegree-splitting of a digraph $D$ is a partition $\left({V}_{1},{V}_{2}\right)$ of its vertex set such that $D\left[{V}_{1}\right]$ and $D\left[{V}_{2}\right]$ have minimum outdegree at least ${k}_{1}$ and ${k}_{2}$, respectively. In [58], we show that there exists a minimum function ${f}_{T}$ such that every tournament of minimum outdegree at least ${f}_{T}\left({k}_{1},{k}_{2}\right)$ has a $\left({k}_{1},{k}_{2}\right)$-outdegree-splitting, and ${f}_{T}\left({k}_{1},{k}_{2}\right)\le {k}_{1}^{2}/2+3{k}_{1}/2+{k}_{2}+1$. We also give a polynomial-time algorithm that finds a $\left({k}_{1},{k}_{2}\right)$-outdegree-splitting of a tournament if one exists, and returns `no' otherwise. We give better bounds on ${f}_{T}$ and faster algorithms when ${k}_{1}=1$.
##### Eulerian and Hamiltonian dicycles in directed hypergraphs
In [19], we generalize the concepts of Eulerian and Hamiltonian digraphs to directed hypergraphs. A dihypergraph $H$ is a pair $\left(𝒱\left(H\right),ℰ\left(H\right)\right)$, where $𝒱\left(H\right)$ is a non-empty set of elements, called vertices, and $ℰ\left(H\right)$ is a collection of ordered pairs of subsets of $𝒱\left(H\right)$, called hyperarcs. It is Eulerian (resp. Hamiltonian) if there is a dicycle containing each hyperarc (resp. each vertex) exactly once. We first present some properties of Eulerian and Hamiltonian dihypergraphs. For example, we show that deciding whether a dihypergraph is Eulerian is an NP-complete problem. We also study when iterated line dihypergraphs are Eulerian and Hamiltonian. Finally, we study when the generalized de Bruijn dihypergraphs are Eulerian and Hamiltonian. In particular, we determine when they contain a complete Berge dicycle, i.e. an Eulerian and Hamiltonian dicycle.
https://pdfslide.us/documents/december-homework-week-2-santee-school-december-homework-week-2-page-3.html

December Homework - Week 2 (Santee School)
December Homework- Week 2
Math Common Core Standards:
• K.CC.A.1 - Count to 100 by ones and by tens.
• K.CC.A.2 - Count forward beginning from a given number within the known sequence (instead of having to begin at 1).
• K.CC.B.4.A - When counting objects, say the number names in the standard order, pairing each object with one and only one number name and each number name with one and only one object.
• K.CC.B.4.B - Understand that the last number name said tells the number of objects counted. The number of objects is the same regardless of their arrangement or the order in which they were counted.
• K.CC.B.5 - Count to answer "how many?" questions about as many as 20 things arranged in a line, a rectangular array, or a circle, or as many as 10 things in a scattered configuration; given a number from 1-20, count out that many objects.
• K.OA.A.1 - Represent addition and subtraction with objects, fingers, mental images, drawings, sounds (e.g., claps), acting out situations, verbal explanations, expressions, or equations.

Literacy Common Core Standards:
• RF.K.1.A - Follow words from left to right, top to bottom, and page by page.
• RF.K.1.D - Recognize and name all upper- and lowercase letters of the alphabet.
• RF.K.1.C - Understand that words are separated by spaces in print.
• RF.K.3.A - Demonstrate basic knowledge of one-to-one letter-sound correspondences by producing the primary sound or many of the most frequent sounds for each consonant.
• RF.K.3.C - Read common high-frequency words by sight (e.g., the, of, to, you, she, my, is, are, do, does).
• RF.K.2.D - Isolate and pronounce the initial, medial vowel, and final sounds (phonemes) in three-phoneme (consonant-vowel-consonant, or CVC) words.
• L.K.2 - Demonstrate command of the conventions of standard English capitalization, punctuation, and spelling when writing.
Page 3: Cover Page for Homework
• This page gives parents and students an overview of what they will be working on throughout the week. They will check a box as they complete each activity. When they are finished, both the parent and child will write their name on this cover page.

Pages 4-7: Math Homework
• Page 6 has 2 sets of presents that students will cut out and glue onto page 5. Cut the page in half and staple one set of the presents to the back of page 5 when making the homework packet.

Pages 8-10: Literacy Homework

Preparing Homework Packet:
• Print pages 3-10. These pages make 1 homework packet.
• Copy these pages to make a homework packet for each student in your class.
• Place the cover sheet on the top of each homework packet.
• Place the homework packet in your student's homework folder or envelope.
***Make sure to download my FREE homework labels and how I put together these homework packets at my TPT store!
Check out September-May kindergarten homework at my TPT store:
http://www.teacherspayteachers.com/Store/A-Spoonful-Of-Learning
You can also grab an entire year of kindergarten homework!
Check out my blog at: www.aspoonfuloflearning.blogspot.com!
Clipart Provided by: MyCuteGraphics.com
Dear Parents,
We are having so much fun learning! Thank you for being such a big part of your child's learning with all of your support!
Math Homework:
• Domino Addition: Students will count the dots on the dominoes to find the sum and write an addition equation.
• Ordering 20 Presents: Students will glue presents in order from 1-20. (This is a cut and glue activity.)
• Numbers for Santa: Students will color all of the Santa hats that equal or have the number that is given.
• Elf's Beginning Sounds List: Students will color the elf hats by their beginning sounds.
• Match It Up: Students will draw lines from the picture to the matching CVC word.
• Sentence Unscramble: Students will read a sentence, write a sentence, draw a picture to go with the sentence, and complete a sentence unscramble. (This is a cut and glue activity.)
All of these activities are included with detailed directions on how to do each activity. Please return the completed homework on Friday. Thank you again!
Student Signature: _____________________________
Parent Signature: _____________________________
[Domino Addition worksheet] Name: ________. Count the dots on each domino and write the number on the line. Add up the dots and write the sum on the line. (Six blank equations of the form ____ + ____ = ____.)
[Ordering 20 Presents worksheet] Name: ________. Cut the presents on the attached page. Glue them on the trees in order from 1-20. (The attached page holds two identical sets of numbered presents to cut out, one set per packet.)
[Numbers for Santa worksheet] Name: ________. Color all the hats that show the number given (5, 7, 3, or 8); the hats show the sums 0+5, 3+2, 4+1, 2+2, 6+3, 1+1, 4+3, 3+0, 1+2, 2+1.
[Elf's Beginning Sounds List worksheet] Color the elf hats by their beginning sounds: n - green, e - red, h - blue, g - yellow.
[Match It Up worksheet] Name: ________. Draw a line matching the picture to the correct word. Words: hat, rat, cup, pot, frog, bat, car.
[Sentence Unscramble worksheet] Name: ________. Read the sentence: "The star is on the tree." Write the sentence on the lines. Draw a picture about the sentence. Then cut the words at the bottom (The, star, is, on, the, tree, .) and put them in the correct order to make a complete sentence.
https://www.physicsforums.com/threads/help-with-a-proof.48048/

# Help with a Proof
1. Oct 16, 2004
### decamij
How can I mathematically prove the following equation?
I have to prove that the minimum speed required to maintain orbit around the Earth (given the mass of the Earth and the universal gravitation constant) is
$$v = \frac{2.00 \times 10^{7}}{\sqrt{r}}$$
I basically have to prove this equation:
$$v = \sqrt{\frac{G m_E}{r}}$$
2. Oct 16, 2004
### Clausius2
In order to maintain the equilibrium of the body, the centrifugal force has to be equal to the gravitational one:
$$m\frac{v^2}{r}=\frac{GMm}{r^2}$$
$$v=\sqrt{\frac{GM}{r}}$$
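As a quick numerical check (illustrative, with standard textbook values for G and the Earth's mass, which were not spelled out in the thread): sqrt(G * M_E) comes out to about 2.00×10^7 in SI units, which recovers the constant in the first equation.

```python
import math

G = 6.674e-11    # gravitational constant, N*m^2/kg^2
M_E = 5.972e24   # mass of the Earth, kg

# The constant in v = k / sqrt(r):
k = math.sqrt(G * M_E)
print(f"k = {k:.3g}")          # close to 2.00e7

# Example: orbital speed at r = 6.78e6 m (low Earth orbit altitude)
r = 6.78e6
v = k / math.sqrt(r)
print(f"v = {v:.0f} m/s")      # roughly 7.7 km/s
```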
3. Oct 16, 2004
### decamij
So if this were a question on an assignment out of 6 marks, all I would have to do is show the relationship between the two equations (it would be a pretty short proof). Thanx a lot
https://docs.oracle.com/en-us/iaas/tools/ads-sdk/latest/user_guide/eval/Multiclass.html

# Multiclass Classification
Multiclass Classification is a type of modeling wherein the output is discrete. For example, an integer 1-10, an animal at the zoo, or a primary color. These models have a specialized set of charts and metrics for their evaluation.
The prevailing metrics for evaluating a multiclass classification model are:
• Accuracy: The proportion of predictions that were correct. It is generally converted to a percentage where 100% is a perfect classifier. For a balanced dataset, an accuracy of $$\frac{100\%}{k}$$ where $$k$$ is the number of classes, is a random classifier. An accuracy of 0% is a perfectly wrong classifier.
• Hamming Loss: The proportion of predictions that were incorrectly classified and is equivalent to $$1-accuracy$$. This means a Hamming loss score of 0 is a perfect classifier. A score of $$\frac{k-1}{k}$$ is a random classifier for a balanced dataset, and 1.0 is a perfectly incorrect classifier.
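The two metrics above are complementary, as the following minimal sketch shows (plain Python, independent of the ADS SDK; the label lists are made-up examples):

```python
def accuracy(y_true, y_pred):
    """Proportion of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def hamming_loss(y_true, y_pred):
    """Proportion of predictions that do NOT match: 1 - accuracy."""
    return 1 - accuracy(y_true, y_pred)

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 1, 1]   # 4 of 6 correct

print(accuracy(y_true, y_pred))      # 0.666...
print(hamming_loss(y_true, y_pred))  # 0.333...
```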
• Kappa Score: Cohen’s kappa coefficient is a statistic that measures inter-annotator agreement. This function computes Cohen’s kappa, a score that expresses the level of agreement between two annotators on a classification problem. It is defined as:
$\kappa = (p_o - p_e) / (1 - p_e)$
$$p_o$$ is the empirical probability of agreement on the label assigned to any sample (the observed agreement ratio). $$p_e$$ is the expected agreement when both annotators assign labels randomly. $$p_e$$ is estimated using a per-annotator empirical prior over the class labels.
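The kappa formula can be sketched directly from these definitions (an illustration in plain Python, not the SDK's implementation):

```python
from collections import Counter

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), with p_e estimated
    from each annotator's empirical label distribution."""
    n = len(y_true)
    # p_o: observed agreement ratio.
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # p_e: expected chance agreement, from per-annotator label priors.
    true_freq = Counter(y_true)
    pred_freq = Counter(y_pred)
    labels = set(y_true) | set(y_pred)
    p_e = sum(true_freq[c] * pred_freq[c] for c in labels) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Perfect agreement gives kappa = 1.
print(cohen_kappa([0, 1, 2, 0], [0, 1, 2, 0]))  # 1.0
print(cohen_kappa([0, 0, 1, 1], [0, 0, 1, 0]))  # 0.5
```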
• Precision (weighted, macro or micro): This is the proportion of the instances predicted to be in a given class that actually are in that class. In multiclass classification, it is common to report the precision for each class, and this is called the per-class precision. It is computed using the same approach used in binary classification, $$\frac{TP}{TP + FP}$$, but only the class under consideration is used. A value of 1 means that the classifier was able to perfectly predict, for that class. A value of 0 means that the classifier was never correct, for that class. There are three other versions of precision that are used in multiclass classification: weighted, macro, and micro-precision. Weighted precision, $$P_w$$, combines the per-class precisions weighted by the number of true labels in each class:
$P_w = W_1 P_1 + \cdots + W_n P_n$
$$W_i$$ is the proportion of the true labels in class $$i$$, and $$P_i$$ is the per-class precision for the $$i^{th}$$ class.
The macro-precision, $$P_m$$, is the mean of all the per-class, $$P_i$$, precisions.
$P_m = \frac{1}{n} \sum_{i} P_i$
The micro-precision, $$P_{\mu}$$, is the same as the accuracy, micro-recall, and micro $$F_1$$.
• Recall (weighted, macro or micro): The proportion of actual members of a class that were correctly predicted to be in that class, $$\frac{TP}{TP + FN}$$. This is also known as the True Positive Rate (TPR) or Sensitivity. In multiclass classification, it is common to report the recall for each class; this is called the per-class recall. It is computed using the same approach as in binary classification, but is reported for each class. A recall of 1 is perfect recall, 0 is “bad” recall.
As with precision, there are three other versions of recall that are used in multiclass classification. They are weighted, macro and micro-recall. The definitions are the same except the per-class recall replaces the per-class precision in the preceding equations.
• $$\mathbf{F_1}$$ Score (weighted, macro or micro): There is generally a trade-off between the precision and recall and the $$F_1$$ score is a metric that combines them into a single number. The per-class $$F_1$$ score is the harmonic mean of precision and recall:
$F_1 = 2 * \frac{Precision * Recall}{Precision + Recall}$
As with precision, there are a number of other versions of $$F_1$$ that are used in multiclass classification. The macro and weighted $$F_1$$ are computed the same way as with precision, but with the per-class $$F_1$$ replacing the per-class precision. However, the micro $$F_1$$ is computed a little differently: the TP, FN, and FP counts are summed across all classes, and the standard formulas are then applied to those global counts.
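To make the averaging conventions concrete, here is a small self-contained sketch (with made-up labels, not the evaluator's implementation) that computes per-class, weighted, macro, and micro precision from raw counts; the same pattern applies to recall and $$F_1$$:

```python
def averaged_precision(y_true, y_pred, classes):
    # Per-class TP and FP, treating each class in turn as "positive".
    prec, tp_all, fp_all = {}, 0, 0
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        prec[c] = tp / (tp + fp) if tp + fp else 0.0
        tp_all, fp_all = tp_all + tp, fp_all + fp
    n = len(y_true)
    # Weighted: per-class precisions weighted by the share of true labels.
    weighted = sum((sum(t == c for t in y_true) / n) * prec[c] for c in classes)
    # Macro: plain mean of the per-class precisions.
    macro = sum(prec.values()) / len(classes)
    # Micro: pooled counts; equals accuracy for single-label multiclass.
    micro = tp_all / (tp_all + fp_all)
    return prec, weighted, macro, micro
```

With balanced labels the weighted and macro values coincide, and the micro value equals the accuracy, matching the statement above.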
Generally, several of these metrics are used in combination to describe the performance of a multiclass classification model.
The prevailing charts and plots for multiclass classification are the Precision-Recall Curve, the ROC curve, the Lift Chart, the Gain Chart, and the Confusion Matrix. These are inter-related with preceding metrics, and are common across most multiclass classification literature.
For multiclass classification you can view the following using show_in_notebook():
• confusion_matrix: A matrix of the counts of actual versus predicted values for each class.
• pr_curve: A plot of precision versus recall, that is, the proportion of positive class predictions that were correct versus the proportion of positive class objects that were correctly identified.
• roc_curve: A plot of the true positive rate versus the false positive rate, that is, recall versus the proportion of negative class objects that were incorrectly identified.
• precision_by_label: Treats one label as the positive class and the rest as negative, and computes the precision for each label.
• recall_by_label: Treats one label as the positive class and the rest as negative, and computes the recall for each label.
• f1_by_label: The harmonic mean of precision_by_label and recall_by_label, computed for each label.
• jaccard_by_label: Computes the Jaccard similarity for each label.
To generate all of these metrics and charts for a list of multiclass classification models on the test dataset test, you can first train the models, for example:
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

lr_clf = LogisticRegression(random_state=0, solver='lbfgs',
                            multi_class='multinomial').fit(X_train, y_train)
rf_clf = RandomForestClassifier(n_estimators=10).fit(X_train, y_train)
To use ADSEvaluator, models have to be converted into ADSModel types.
The ADSModel class in the ADS package has a from_estimator function that takes as input a fitted estimator and converts it into an ADSModel object. With classification, you have to pass the class labels in the class argument too. The ADSModel object is used for evaluation using the ADSEvaluator object.
To show all of the metrics in a table, run:
evaluator.metrics
This displays a table of the evaluator metrics for each model.
To display the visualizations, run:
evaluator.show_in_notebook()
This renders the multiclass confusion matrix, ROC curve, and PR curve, along with the per-label precision, F1, and Jaccard plots.
Multiclass classification includes the following:
• accuracy: The number of correctly classified examples divided by total examples.
• hamming_loss: 1 - accuracy
• precision_weighted: The weighted average of precision_by_label. Weights are proportional to the number of true instances for each label.
• precision_micro: Global precision. Calculated by using global true positives and false positives.
• recall_weighted: The weighted average of recall_by_label. Weights are proportional to the number of true instances for each label.
• recall_micro: Global recall. Calculated by using global true positives and false negatives.
• f1_weighted: The weighted average of f1_by_label. Weights are proportional to the number of true instances for each label.
• f1_micro: Global f1. It can be calculated by using the harmonic mean of precision_micro and recall_micro.
All of these metrics can be computed directly from the confusion matrix.
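As a sketch of that claim (illustrative, not the ADS implementation), accuracy and the per-class precision and recall can be read off a confusion matrix whose rows are true classes and columns are predicted classes:

```python
def metrics_from_confusion(cm):
    """Accuracy and per-class precision/recall from a confusion matrix
    cm, where cm[i][j] counts samples of true class i predicted as class j."""
    k = len(cm)
    total = sum(sum(row) for row in cm)
    accuracy = sum(cm[i][i] for i in range(k)) / total
    col = [sum(cm[r][j] for r in range(k)) for j in range(k)]  # predicted counts
    row = [sum(cm[i]) for i in range(k)]                       # true counts
    precision = [cm[j][j] / col[j] if col[j] else 0.0 for j in range(k)]
    recall = [cm[i][i] / row[i] if row[i] else 0.0 for i in range(k)]
    return accuracy, precision, recall
```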
If the preceding metrics don’t include the specific metric you want to use, maybe an F2 score, simply add it to your evaluator object as in this example:
from ads.evaluations.evaluator import ADSEvaluator
evaluator = ADSEvaluator(test, models=[modelA, modelB, modelC, modelD])
from sklearn.metrics import fbeta_score
def F2_Score(y_true, y_pred):
    return fbeta_score(y_true, y_pred, beta=2)
https://kb.osu.edu/dspace/handle/1811/9264

# OPTICAL CONSTANTS OF AMMONIUM SULFATE
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/9264
Title: OPTICAL CONSTANTS OF AMMONIUM SULFATE
Creators: Downing, Harry D.
Issue Date: 1975
Publisher: Ohio State University
Abstract: Atmospheric studies have revealed the existence of ammonium sulfate particles in the earth’s stratosphere. The scattering and absorption of solar infrared radiation by such particles can be calculated from Mie theory provided the optical constants $N = n + ik$ of ammonium sulfate are known. We have determined these constants from near-normal reflectance measurements and subsequent Kramers-Kronig analysis for aqueous solutions of 1.6, 2.4, 3.2, and 3.3 molar concentrations. The latter is the highest concentration achieved at room temperature. Curves of n($\nu$)-vs-$\nu$ and k($\nu$)-vs-$\nu$ for the spectral range $5000-400\ cm^{-1}$ will be shown and the effects of the solute ions on the water structure will be discussed. While these studies may not confirm the existence of particular concentrations of ammonium sulfate in our atmosphere, we hope that our investigations will form the basis for future studies of this aerosol.
Description: This research was supported, in part, by the National Aeronautics and Space Administration.
Author Institution: Department of Physics, Kansas State University
URI: http://hdl.handle.net/1811/9264
Other Identifiers: 1975-ME-14
https://www.physicsforums.com/threads/moment-of-inertia-tensor.715659/

# Moment of Inertia Tensor
1. Oct 10, 2013
### dpa
Generally, when we talk about moment of inertia, we talk about rotation and inherently, we talk about moment of inertia about an axis.
But when we talk about inertia tensor, we calculate about a point. Is there a reason for this difference? Am I missing something?
I am new to tensors.
2. Oct 10, 2013
### The_Duck
One way to think about it is that the inertia tensor contains the information that lets you calculate the moment of inertia around *any* axis that passes through a given point.
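Concretely, if $\mathbf{I}$ is the inertia tensor about a point, the scalar moment of inertia about an axis through that point along the unit vector $\hat{n}$ is $\hat{n}^T \mathbf{I} \hat{n}$. A minimal sketch with illustrative values:

```python
import math

def moment_about_axis(I, axis):
    """Scalar moment of inertia about an axis along `axis`: n^T I n."""
    norm = math.sqrt(sum(a * a for a in axis))
    n = [a / norm for a in axis]
    In = [sum(I[i][j] * n[j] for j in range(3)) for i in range(3)]  # I n
    return sum(n[i] * In[i] for i in range(3))                      # n . (I n)
```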
3. Oct 10, 2013
### SteamKing
Staff Emeritus
The components of the inertia tensor are calculated in the usual fashion. You still do the calculations about the three coordinate axes, if they are convenient to do so. In most instances, you are interested in the moments of inertia transferred from whatever calculation coordinate system to a coordinate system which has its origin at the centroid of the body. This may be the source of your confusion.
4. Oct 10, 2013
### arildno
3-D rotations are nasty. For example, the instantaneous angular velocity vector need not be strictly parallel with the angular momentum vector.
Do remember though, that ESSENTIALLY, all torques are computed relative to a POINT, not relative to an axis.
5. Oct 10, 2013
### D H
Staff Emeritus
Very. One simple example: Perform rotation A followed by rotation B. Then do it again, but this time perform rotation B first. It doesn't make any difference with 2-D rotations. Rotations in 2-D are commutative. That's no longer true in three dimensions (or higher).
Another example is the inescapable fact that angular velocity and angular momentum do not necessarily point in the same direction with 3-D rotations. A rigid body set into rotation in empty space generally will tumble. Angular momentum is a conserved quantity but angular velocity is not.
Rotations in 4-D space or higher are nastier yet. The concept that rotation is about an axis is something that pertains to 3-D space only. On the other hand, the 2-D concept of rotation about a point (or parallel to a plane) does generalize to higher dimensions.
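The non-parallelism of angular momentum and angular velocity mentioned above is easy to see numerically; a small sketch with made-up principal moments:

```python
def ang_momentum(I, w):
    """L = I w for a 3x3 inertia tensor and angular velocity vector."""
    return [sum(I[i][j] * w[j] for j in range(3)) for i in range(3)]

I = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]  # unequal principal moments (made up)
w = [1.0, 1.0, 0.0]                    # spin not along a principal axis
L = ang_momentum(I, w)                 # [1.0, 2.0, 0.0], not parallel to w
```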
6. Oct 10, 2013
### arildno
However, for FREE rotations in 3-D, a beautiful result is still present, as my new signature shows.
7. Oct 10, 2013
### D H
Staff Emeritus
Yep. Love that expression. It's so nonsensical (or at least counterintuitive), but then again, nonsensical (or at least counterintuitive) is a perfect description of free rotations in 3-D space.
8. Oct 10, 2013
### arildno
H. Goldstein called it a "jabberwockian phrase", and I think that encapsulates the weird elegance of both the phrase and the phenomenon it describes.
9. Oct 10, 2013
### D H
Staff Emeritus
Bringing this discussion back to the original question,
Yes. You are missing something. You were taught in freshman physics that $T=I\alpha$ is the rotational equivalent of $F=ma$. That's only true for those special cases where you can ignore the tensorial nature of the inertia tensor, or where motion is constrained to two dimensions. That was a "lie to children" (http://en.wikipedia.org/wiki/Lie-to-children). It's more complicated in general, and you are now deemed to have adequate mathematical understanding to *start* taking the next step toward understanding rotation.
Filling in that "missing something" starts with understanding Euler's equations. Euler didn't use tensors, so his notation is a bit verbose. It becomes nice and simple with a tensorial notation. To really understand rotations you need to know a bit about group theory (the group SO(3), in particular) and you need to know a bit about Lie groups / Lie algebras.
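For reference, in body-fixed principal axes Euler's equations take the standard form (with principal moments $I_1, I_2, I_3$ and applied torque components $\tau_i$):

$$I_1\dot\omega_1 - (I_2 - I_3)\,\omega_2\omega_3 = \tau_1$$
$$I_2\dot\omega_2 - (I_3 - I_1)\,\omega_3\omega_1 = \tau_2$$
$$I_3\dot\omega_3 - (I_1 - I_2)\,\omega_1\omega_2 = \tau_3$$

The cross terms are what make a torque-free body tumble whenever the principal moments are unequal.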
10. Oct 10, 2013
### arildno
"we talk about rotation and inherently, we talk about moment of inertia about an axis."
As DH points out, freshman courses speedily go over to rotation about a fixed axis.
This means it is very easy to forget what was breezily derived at the very start of the course, namely that when we compute the torques about a (fixed) point, we obtain the formula:
$$\vec{\tau}=\frac{d\vec{L}}{dt}$$
where $\vec{\tau}$ is the externally applied torque, and $\vec{L}$ is the quantity called "angular momentum".
THAT equation is (almost*) perfectly general for torques about a fixed point in classical mechanics, but it is a wolf in sheep's clothing.
In freshman courses, most of the wolf's teeth are pulled out to begin with (DH has given you the names of a few of those teeth), by saying we limit ourselves to cases of rotation about a fixed axis.
-----------------------------------------------------------------------
*It holds perfectly for the ideal, perfectly rigid body. For other types of objects, the left hand side of the equation might get a lot nastier, not just RHS (which is, in 3-D, already exceedingly nasty for the perfectly rigid body as well).
Last edited: Oct 10, 2013
https://www.physicsforums.com/threads/adjusting-length-and-period-using-inverse-transformations.745692/

# Adjusting length and period using (inverse) transformations
1. Mar 28, 2014
### ANvH
Trying to see the logic in deriving length contraction and time dilation using the Lorentz transformations and inverse Lorentz transformations. The following treatment leads to ambiguities.
Given
$Δ\acute{t}=\gamma(Δt-\beta c^{-1}Δx)$ (1)
$Δ\acute{x}=\gamma(Δx-\beta c Δt)$ (2)
and the inverse transformations,
$Δt=\gamma(Δ\acute{t}+\beta c^{-1}Δ\acute{x})$ (3)
$Δx=\gamma(Δ\acute{x}+\beta c Δ\acute{t})$ (4),
where the primes refer to the emitter frame (the moving frame of reference) and the non-primes refer to the receiver frame (the stationary frame or rest frame of reference).
Length contraction is defined by measuring the object length $Δ\acute{x}$ in the moving frame while held stationary from the rest frame; moving the line of sight of the receiver accomplishes this and justifies setting $Δt$ to zero. This circumstance can be met by using Equation (2). The resulting equation is $Δ\acute{x}=\gamma Δx$ and it is then rearranged to $Δx=Δ\acute{x}/\gamma$. Since $\gamma > 1$ it suggests a length contraction.
For time dilation the object in the moving frame needs to be stationary, while applying a transform of the starting and ending times in the stationary frame. In this case $Δ\acute{x}$ has to be set to zero, and this can be met by Equation (3), which leads to $Δt=\gamma Δ\acute{t}$. While this reasoning seems to be straightforward, there is a huge inconsistency. To obtain a length contraction, Equation (2) is utilized and the final result is rearranged, contrary to the time dilation derived from Equation (3). Somehow the symmetry of the Lorentz and inverse Lorentz transformations is violated, because Equation (2) belongs to the first set (the Lorentz transformations), and Equation (3) belongs to the second set (the inverse Lorentz transformations).
To exemplify this ambiguity, Equations (1-4) can be reduced to
$Δ\acute{t}=\gamma Δt$ (5)
$Δ\acute{x}=\gamma Δx$ (6)
and the inverse transformations,
$Δt=\gamma Δ\acute{t}$ (7)
$Δx=\gamma Δ\acute{x}$ (8).
What should one pick: (5) and (6) or (7) and (8)?
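The ambiguity can also be checked numerically: applying Equations (1)-(2) and then (3)-(4) returns the original intervals exactly, whereas Equations (5)-(8) cannot all hold at once for nonzero intervals. A quick sketch with arbitrary values (β = 0.6, c = 1):

```python
import math

def lorentz(dt, dx, beta, c=1.0):            # Equations (1) and (2)
    g = 1.0 / math.sqrt(1.0 - beta ** 2)
    return g * (dt - beta * dx / c), g * (dx - beta * c * dt)

def inverse_lorentz(dtp, dxp, beta, c=1.0):  # Equations (3) and (4)
    g = 1.0 / math.sqrt(1.0 - beta ** 2)
    return g * (dtp + beta * dxp / c), g * (dxp + beta * c * dtp)

dt, dx, beta = 2.0, 5.0, 0.6
dtp, dxp = lorentz(dt, dx, beta)
back = inverse_lorentz(dtp, dxp, beta)       # recovers (2.0, 5.0)
```

Each of Equations (5)-(8) holds only under the special condition used to derive it ($Δt = 0$ for length contraction, $Δ\acute{x} = 0$ for time dilation), so they are not a consistent set to pick from.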
2. Mar 28, 2014
### ghwellsjr
This is rather confusing, to me at least. It would make more sense to me if you refer to the stationary (or rest) frame as the one in which an object is at rest and the moving frame as one that is moving at speed β with respect to the first (rest) frame. This makes the object move at speed -β in that frame. Then you only need the first set of equations to get coordinates from the first (rest) frame to coordinates in the second (moving) frame.
This also is confusing to me. The Coordinate Length of an object is the delta between two events at either end at the same Coordinate Time. That means in the moving frame you need to choose the two events at the end points of the object such that they have the same Coordinate Time in the moving frame. In other words, Δt' has to equal zero, not Δt. In the rest frame, it doesn't matter what Δt is because any pair of end point events will be the same distance apart.
When you compare the Coordinate Length as defined this way in the moving frame to the Coordinate Length in the rest frame, you will see the effect called Length Contraction. The Coordinate Length in the rest frame is also called the Proper Length so you can say that the Length Contraction factor, the inverse of gamma, is equal to the Coordinate Length of an object moving in a frame compared to its Proper Length.
Time Dilation refers to the delta Coordinate Time of an object compared to its delta Proper Time for the same pair of events. The delta Proper Time is taken between two events for an object in its rest frame. Then you can transform the Coordinate Times for those same two events into a moving frame and get a new delta Coordinate Time. The ratio of the delta Coordinate Time to the Proper Time is the Time Dilation factor and is equal to gamma.
I think it would be most helpful to look at a pair of spacetime diagrams for a five-foot long object first at rest and then moving at β=0.6. Here's the first spacetime diagram:
Each line represents an end of the five-foot long object. The dots represent 1 nanosecond increments of time. Note that the delta Proper Times are equal to the delta Coordinate Times. I have shown the object over a period of 4 nanoseconds.
Now the spacetime diagram transformed to a speed of -0.6c (β=-0.6). This makes the object move at 0.6c in the diagram:
This diagram was created by taking the coordinates of each event (dot) from the first diagram and transforming them with β=-0.6 (c=1). You can see that the length of the object at any Coordinate Time is 4 feet. For example, at the Coordinate Time of 5 nanoseconds, one end of the object is at the Coordinate Distance of 3 feet and the other end at 7 feet for a delta of 4 feet. Gamma at -0.6c is 1.25 so if we divide the Proper Length of 5 feet by 1.25 we get the Contracted Length of 4 feet.
For Time Dilation, we can see that the period from the first event along the blue end of the object to the last event in the moving frame is 5 nanoseconds whereas it was 4 nanoseconds in the rest frame for a ratio of 1.25, the same as the gamma factor.
Now if you want, you can use your second set of equations to transform the coordinates of the events in the second diagram back to the coordinates in the first diagram using the same value of beta, -0.6.
Does this all make perfect sense to you? Any questions?
3. Mar 28, 2014
### ANvH
First, I would like to use the first two equations only; second, what you say sounds confusing to me, so I would like to make sure that I am understanding you. The prime stands for the moving frame and the transformations for $\acute {x}$ and $\acute {t}$ of the moving frame are replaced by relations involving $x$ and $t$ as measured in the rest frame. The equations (1) and (2) were created by $\acute {x}_{2}-\acute {x}_{1}$ and $\acute {t}_{2}-\acute {t}_{1}$. Are we indeed on the same page? I do have questions, but would like to clarify this.
4. Mar 28, 2014
### ghwellsjr
Yes, we are on the same page.
For example, in my first diagram, take the top blue event as the first event and the second up from the bottom on the red line as the second event. Then Δx = 5 and Δt = -3. Remember that β = -0.6, c=1 and γ = 1.25. Plugging these values into your first two equations, we get:
Δt' = γ(Δt - βΔx) = 1.25(-3 + 0.6*5) = 1.25(-3 + 3) = 0
Δx' = γ(Δx - βΔt) = 1.25(5 - 0.6*3) = 1.25(5 - 1.8) = 1.25(3.2) = 4
Notice that Δt' equals zero as it should to get the correct distance between the two ends of the object.
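These numbers are easy to reproduce; a minimal sketch of the computation above:

```python
import math

beta, c = -0.6, 1.0
g = 1.0 / math.sqrt(1.0 - beta ** 2)    # gamma = 1.25
dt, dx = -3.0, 5.0                      # intervals read off the first diagram
dtp = g * (dt - beta * dx / c)          # 1.25 * (-3 + 3) = 0
dxp = g * (dx - beta * c * dt)          # 1.25 * (5 - 1.8) = 4
```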
5. Mar 28, 2014
### ftr
But after a few days of research it became clear to me that the equations did not just pop into Einstein's head. They are the painstaking work of many people, which gave me great confidence in them, and on checking them I found them to be consistent.
just some references, enjoy
http://www.mathpages.com/home/kmath571/kmath571.htm
http://www.fourmilab.ch/etexts/einstein/specrel/www/
http://www.dwc.knaw.nl/DL/publications/PU00014148.pdf
http://www.physicsinsights.org/poincare-1900.pdf
you can also dig up Whittaker’s classic book, volumes 1 and 2,
“A History of the Theories of Aether and Electricity”
https://archive.org/details/historyoftheorie00whitrich
6. Mar 29, 2014
### ANvH
I appreciate what you show here. So distance is only measured parallel to the Coordinate Distance, irrespective of choosing two events. You can compare the first event of the blue line with the last event of the red line. Always thought to compare points in space-time that have the same event. For your info, the treatise I gave is based on what I have read on the web; it did not come from me (can't find it to provide a link).
It does make a lot more sense after realizing how to read the above diagram and extrapolating more events, such that Δx' between the blue and red lines is zero, yielding Δt' (Trying to use the same logic as done for getting the contracted length). Am I correct when two successive events with respect to the blue line are compared that $δx'$ correctly describes the length contraction? Logic suggests it is. Then,
$\frac{δx'}{δt'}=\gamma^{-2} \frac{δx}{δt}$
is the contracted speed?
7. Mar 29, 2014
### ghwellsjr
In the rest frame of an object (assuming both are inertial), if you use any pair of events at either end of the object, you will get the same delta distance even though the delta time can be any value. But if you use any of those pairs and transform to a frame where the object is moving, you will get a whole range of delta distances and delta times, which leads to a meaningless idea for delta distance. However, if you pick a pair of events in the rest frame of the object such that when you transform them to the moving frame and the delta time equals zero, then the delta distance shows the length contraction equal to the inverse of gamma.
Another way of understanding this is that if an observer is trying to measure distances to various parts of a moving object, if he doesn't make the measurements "at the same time", he will get all kinds of different answers. Let's say you have a laser range finder and you focus it on one end of an object moving away from you. You will get a succession of distances that are changing and getting bigger. If you then focus it on the other end of the object, you will get a bunch more readings. If you just subtract two of the readings for each end of the object, you will get a whole range of lengths, maybe even including zero. What you would have to do is use two laser range finders and focus each on a different end of the object. But even then, you would not necessarily get a meaningful answer unless you could make sure that the time of the measurement is halfway between the laser pulse going out and its reflection being received by each range finder. Only when those times match will you get a meaningful result and it will be the Contracted Length of the object.
I hope you meant to say "Always thought to compare events in space-time that have the same time" because the way you said it doesn't make any sense. You should be clear that in relativity, an event is a point in space at an instant of time.
Did you make a copy of it? If not, are you sure you are remembering it exactly?
I'm not sure why you said this. If you're talking about contracted length, then you should say "Δx' between the blue and red lines yields the contracted length when Δt' is zero".
If you're talking about Time Dilation, you don't want to go between the blue and the red lines, each line has its own Time Dilation and it's not necessarily when Δx' is zero. The Proper Time is when the two events are at the same Coordinate Distance in the rest frame of the object but in the moving frame, both Δx' and Δt' are non-zero.
You can see in the rest frame, the blue line depicts a coordinate interval of 4 nanoseconds and the red line also depicts a coordinate interval of 4 nanoseconds. You can think of each line having its own clock that ticks through 4 nanoseconds of Proper Time equal to 4 nanoseconds of Coordinate Time.
In the moving frame, the blue line events go from the Coordinate Times of 0 to 5 nanoseconds for a Coordinate Time interval of 5 nanoseconds (while the Coordinate Distance goes through 3 feet). The red line events go from the Coordinate Times of 3.75 to 8.75 nanoseconds for a Coordinate Time interval of 5 nanoseconds (while the Coordinate Distance also goes through 3 feet). As I said before, we determine the Time Dilation as the Coordinate Time interval divided by the Proper Time interval. In these cases (for the blue and red lines), the Coordinate Time intervals are 5 nanoseconds while the Proper Time intervals are 4 nanoseconds making the Time Dilation factors for both the blue and red lines be 1.25.
The only sense in which the logic for Time Dilation is the same as for Length Contraction is in taking the ratio of a Coordinate interval (length or time) to a Proper interval (length or time). But the way they are applied is completely different as I explained previously.
No. Two successive events on the blue line have to do with Time Dilation, not Length Contraction. Length contraction always involves two different world lines, not just one. The world lines are marking a succession of events and events are instants in time and points in space having no length associated with them.
I don't know how you got that equation but in any case, it has nothing to do with contracted speed. Speeds don't contract. There's no such thing, at least I never heard of it. If you have two inertial objects or observers moving with respect to each other, they will each calculate or measure the speed of the other one to be the same, using laser range finders, for example (just in opposite directions).
But this has nothing to do with frames. You can determine the speed of an object in different reference frames by doing the Lorentz Transformation process on the coordinates of various events. For example, in the second diagram in my previous post, you can determine the speed of the blue world line by taking the Δx'/Δt' which is 3 feet divided by 5 nanoseconds or 0.6 feet per nanosecond or 0.6c. You can do the same for the red world line.
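To see that speeds follow from the Lorentz transformation itself rather than from a $\gamma^{-2}$ factor, transform two events on the blue world line (at rest in the first frame) into the β = −0.6 frame and take Δx'/Δt':

```python
import math

def transform(t, x, beta, c=1.0):
    g = 1.0 / math.sqrt(1.0 - beta ** 2)
    return g * (t - beta * x / c), g * (x - beta * c * t)

beta = -0.6                         # frame in which the object moves at +0.6c
t1, x1 = transform(0.0, 0.0, beta)
t2, x2 = transform(4.0, 0.0, beta)  # object at rest: same x, 4 ns later
speed = (x2 - x1) / (t2 - t1)       # 3 feet / 5 ns = 0.6c
```

The object has $δx/δt = 0$ in its own rest frame, yet moves at 0.6c in the second frame, so a simple $\gamma^{-2}$ scaling of the speed cannot be right.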
8. Mar 29, 2014
### ANvH
http://resonanceswavesandfields.blogspot.com/2011/07/derivation-of-length-contraction-and.html
As it should, my mistake. It is crystal clear, I appreciate your time and effort. There is only one thing that is bothering me, or maybe not. In the primed frame both length contraction and time dilation occur, or is it only time dilation or length contraction?
I am asking this because of the Muon experiment. It is explained as a dilated half life of the muon in the muon's moving frame and the muon sees a length contraction when the muon's frame is considered a rest frame with respect to a moving earth frame.
There is no mention of the apparent length contraction in the muon's moving frame, or the time dilation of the moving earth's frame.
Again, your lengthy explanations are highly appreciated.
-Alfred
9. Mar 29, 2014
### ghwellsjr
That reference is correct as long as you read the whole thing to realize that they have the object moving in the stationary frame and stationary in the moving frame. I had assumed that it was the other way around in my discussions.
In any frame (primed or unprimed), objects that have length will be contracted if they are moving and time will be dilated for them.
Since we don't consider the size (or length) of the muon, we aren't concerned about its contraction in the earth's rest frame (although it is there), we only focus on its Time Dilation. In the muon's rest frame, we aren't concerned about the time on the earth (although it is dilated), we only focus on the contracted distance.
Here is a thread where I have drawn some spacetime diagrams regarding the muon:
10. Mar 29, 2014
### ANvH
"Moving in the stationary frame" and "stationary in a moving frame" makes it very confusing.
Ok, yet there is in my view a discrepancy when comparing spatial contraction and temporal dilation. If a waveform is subject to the Lorentz transformations then the angular frequency and wavenumber domains are increased by γ(1+β). This means the period and the wavelength are shortened, i.e. both domains of the waveform are blue shifted. I am always making the association that the wavelength is contracted, yet a time dilation means the period of the wave should be increased, suggesting I am wrongly associating the period (temporal domain) of the wave with time dilation.
With respect to the muon, I have tried to treat the muon as a wave: the half life is the temporal aspect of the muon wave, and the spatial decay, deduced from the vertical depth of the atmosphere and the flux of energy loss, amounts to a decay of one muon per 10 km. While such a wave treatment may not be valid, the thought experiment would still allow an analysis. The outcome may not be what you want. I agree I am not interested in the size of the muon, I am interested in the path it is following.
I have studied that one, but it does not answer why time dilation holds for an object and a Doppler blueshift holds for the frequency domain, perhaps the OP of that thread has the same issue.
Because this would be a difference in light-like and space-like processes?
11. Mar 30, 2014
### ghwellsjr
No, this has nothing to do with the difference between light-like and space-like processes. You are talking about the difference between Doppler shifts and Time Dilation. They are two completely different things. Doppler shifts are observable and symmetrical between two inertial observers and are independent of any reference frame. Time Dilation is not observable and changes according to each reference frame. It is only symmetrical (equal) for two observers if they happen to be traveling in opposite directions at the same speed in a particular reference frame.
Here, let me illustrate with a couple spacetime diagrams showing two inertial observers approaching, then passing each other at a relative speed of 0.6c. They each have a clock that ticks once a second and they can each see the other one's clock as well as their own. Here's the spacetime diagram for the rest frame of the red observer as he is watching the ticks on the blue observer's clock:
At the bottom half of the diagram, the blue observer is approaching the red observer and the red observer sees the blue observer's clock ticking twice as fast as his own. We can calculate this Doppler factor using your formula, γ(1+β). Since at β=0.6, γ=1.25, and the formula evaluates to 1.25(1+0.6) = 1.25(1.6) = 2.
But note that the time dilation of blue's clock is 1.25 (gamma) which you can see by observing that in 5 seconds of Coordinate Time, blue's clock has ticked out 4 seconds and 5/4 = 1.25.
After blue passes red, red observes blue's clock ticking at one-half the rate of his own clock which is the inverse of the previous Doppler factor and yet the Time Dilation of the blue clock remains at 1.25.
Note that there is no Time Dilation of red's clock, it ticks at the same rate as the Coordinate time because it is at rest in this frame.
Now let's look at how the blue observer looks at the red observer's clock:
Note that during their approach, blue sees red's clock ticking at twice the rate of his own, even though red's clock is not Time Dilated in this frame. And after they pass each other, blue sees red's clock ticking at half the rate of his own.
Each observer sees the same Doppler shifts in the other observer's clock but neither observer is aware of the Time Dilation.
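The numbers quoted above are easy to check. Here is a minimal sketch (plain Python, simply restating the two formulas already used in the discussion) that evaluates the Doppler factor and the time-dilation factor at β = 0.6:

```python
import math

def gamma(beta):
    # time-dilation factor: 1 / sqrt(1 - beta^2); frame dependent
    return 1.0 / math.sqrt(1.0 - beta * beta)

def doppler(beta):
    # relativistic Doppler factor; beta > 0 for approach, beta < 0 for recession
    return math.sqrt((1.0 + beta) / (1.0 - beta))

beta = 0.6
print(gamma(beta))               # ≈ 1.25 (time dilation of the moving clock)
print(doppler(beta))             # ≈ 2.0  (observed tick rate while approaching)
print(doppler(-beta))            # ≈ 0.5  (observed tick rate while receding)
print(gamma(beta) * (1 + beta))  # ≈ 2.0, the same value via γ(1+β)
```

The last line shows that the two forms of the Doppler factor, γ(1+β) and √((1+β)/(1−β)), agree at this speed.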
12. Mar 30, 2014
### ANvH
I think this statement is not correct, because from blue's perspective the red clock should be time dilated.
This statement is not helpful; it reads as if time dilation does not exist. You know, I have seen this in many posts here. First you get the explanation that time dilation exists, is real, and therefore the rate constant of the muon decay is physically slower. Then later it is said that observers are not aware of each other's time dilation. Sorry that I am picky here.
Actually, no. The reason is that the Doppler shift is obtained by the same Lorentz transformations, while these transformations are also used to draw diagrams to explain time dilation and length contraction. It then becomes completely confusing because you say Doppler is real and symmetric, as time dilation should be too. (Both red and blue observers know that time dilation occurs in the opposing frame.)
But let's focus on the Lorentz transformations. When applied to a wave the full forms
$t'=\gamma(t-\beta c^{-1}x)$
$x'=\gamma(x-\beta ct)$
are used to get the transformed waveform, which will give a Doppler shift that equals γ + γβ. To get at the length contraction and time dilation these forms are used
$Δt'=\gamma Δt$
$Δx'=\gamma Δx$,
and they look symmetrical, because Δt and Δx both look contracted. I know you explained by the diagrams that this is not true, and I grasped everything, yet it remains a mystery for me why the discrepancy exists. The only two things I can think of are that $\beta c^{-1}x$ and $\beta ct$ were set to zero for length contraction and time dilation, while the waveform transform procedure does not need this "simplification" because the relation $ω=kc$ takes care of grouping temporal and spatial aspects. In addition, another thought is that the wave travels at $c$, used to signal the clock rates, yet the observers travel at 0.6c. Funny that the clocks do not show the time dilation (and length contraction) by their signaling waves.
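The reduction from the full transforms to the Δt′ = γΔt shortcut can be checked numerically. A minimal sketch, assuming a clock at rest at x = 0 in the unprimed frame whose ticks are 1 time unit apart (the tick spacing is an illustrative choice):

```python
import math

def lorentz(t, x, beta, c=1.0):
    # full Lorentz transform of an event (t, x) into the primed frame
    g = 1.0 / math.sqrt(1.0 - beta * beta)
    return g * (t - beta * x / c), g * (x - beta * c * t)

beta = 0.6  # gamma = 1.25 at this speed

# Two ticks of a clock at rest at x = 0 in the unprimed frame:
t1, x1 = lorentz(0.0, 0.0, beta)
t2, x2 = lorentz(1.0, 0.0, beta)

# In the primed frame, where the clock moves, its ticks are gamma apart:
dt_primed = t2 - t1  # ≈ 1.25
```

Only when the two events share the same x does the full transform collapse to the time-dilation shortcut; that is the "simplification" being discussed.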
13. Mar 30, 2014
### ghwellsjr
If by "from blue's perspective" you mean "in blue's rest frame", then it is true that the red clock is time dilated in that frame as shown here:
Each frame establishes a different Time Dilation to each observer's clock but they all indicate the same Doppler for each observer.
Here is another frame transformed from the first one to a speed of 0.333c:
In this frame, both clocks are Time Dilated to 1.06 but the Doppler remains the same.
Time Dilation is a coordinate effect. Coordinates are not real. They are man-made constructs which enable us to describe scenarios. Would you say that the origin of a coordinate system is real? Or that the directions that the axes point in are real? Or the units that we use to mark off the dimensions, as if nature was aware of these constructs?
Just like the speed of an object is different in different coordinate systems, so is Time Dilation. This doesn't render speed or Time Dilation to the realm of non-existence.
When I say that observers are not aware of each other's Time Dilation, I mean that they cannot measure, see, or observe it directly like they can Doppler shifts. In order to become aware of the Time Dilation of a moving clock, they have to make radar measurements and apply Einstein's convention that the time of the outgoing radar signal is equal to the time of the returning echo signal. They have to keep track of a lot of measurements and then after the fact they can compile all the information and calculate the Time Dilation (and Length Contraction) of the other clocks and objects. So they can become aware of the other's Time Dilation but only after doing a lot of work. Then when they get all done, they can transform from their own rest frame (in which they assumed the propagation of light was c in all directions) to any other frame and get a different set of Time Dilations, even for their own clock.
Your opinion on Time Dilation is tantamount to saying that each observer's own rest frame is preferred. But no frame is preferred in relativity. They are all equally valid.
You should abandon the short-cut formulas for Time Dilation and Length Contraction since, as you noted, they cannot be right in all situations. The Lorentz Transforms that you presented always work. That's what I use.
14. Mar 30, 2014
### ANvH
You mean the Doppler shift of 2, as established by γ=1.25, β=0.6? But I think that that is not correct. You also need to transform the wave using β=0.333 and γ=1.06, which gives a shift of 1.41. So the Doppler shift changes too.
I agree, that is why the Doppler shift should change too if you create a new frame that is associated with a time dilation of 1.06.
I am not sure. You have shown me how to get to the right form of time dilation and length contraction using the Lorentz transformations, and I am pleased with this. You clearly showed that one gets to this by using one set of Lorentz transformations. The shortcut allows a quick calculation of the value of time dilation and length contraction with respect to the relative speed between two frames of reference. You do the same.
I guess I have to live with the apparent discrepancy.
15. Mar 30, 2014
### ghwellsjr
No, you have merely shown that another shortcut formula doesn't work in all cases but the diagrams show that the Doppler is the same in all frames as determined by the Lorentz Transformation. If they didn't, the Lorentz Transformation wouldn't comport with reality and would have to be abandoned.
Actually, I prefer a different shortcut formula for Doppler because it depends only on the relative speed between the two clocks:
$\sqrt{\frac{1+\beta}{1-\beta}}$
16. Mar 30, 2014
### ANvH
I am sorry to bother again, but I think you misread what I wrote. I think you are ambiguous when you say that the Doppler is the same in *all frames* and showing 3 diagrams and avoiding an answer. With respect to two frames having a relative speed of 0.6c moving toward each other, the Doppler shift is
$\gamma(1+\beta)=\sqrt{\frac{1+\beta}{1-\beta}}$.
Once the two frames pass each other the Doppler becomes red-shifted
$\gamma(1-\beta)=\sqrt{\frac{1-\beta}{1+\beta}}$
Using other frames will require transforms again. You gave the impression with your third diagram, moving at 0.333c relative to the first diagram, that the Doppler remains 2 or 0.5, which cannot be true.
I don't think we disagree on this, nor do I think we disagree on time dilation and length contraction. All I am trying to make clear is that I fully understand the Doppler shift, and that I fully understand the diagrams providing length contraction and time dilation.
I am also trying to make clear that Doppler involves a blueshift (or a redshift) of both the frequency and wavenumber domains. The blueshift of the frequency domain translates to a shortening of the period, which seems paradoxical to a time dilation; the redshift of the wavenumber domain translates to a wavelength increase, which seems paradoxical to a length contraction.
Just to make clear that I am utilizing the Lorentz transforms to a wave:
$\omega t'-kx'=\gamma[\omega(t-\beta c^{-1}x)-k(x-\beta ct)]$
Gathering the temporal components and the spatial components, and using $ω=kc$ we get
$\omega t'-kx'=\gamma(1+\beta)(\omega t-kx)$
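That algebra can be verified numerically: for a light wave ($ω = kc$) the transformed phase equals the original phase scaled by $\gamma(1+\beta)$ at every event. A small sketch (the specific values of k and β are arbitrary illustrative choices):

```python
import math
import random

beta, c = 0.6, 1.0
k = 2.0
w = k * c                          # light wave: omega = k c
g = 1.0 / math.sqrt(1.0 - beta * beta)

random.seed(1)
for _ in range(100):
    t = random.uniform(-5.0, 5.0)
    x = random.uniform(-5.0, 5.0)
    tp = g * (t - beta * x / c)    # Lorentz-transformed time
    xp = g * (x - beta * c * t)    # Lorentz-transformed position
    # phase in the primed frame = gamma*(1+beta) * phase in the unprimed frame
    assert abs((w * tp - k * xp) - g * (1 + beta) * (w * t - k * x)) < 1e-9
```

At β = 0.6 the scale factor γ(1+β) is 2, matching the blueshift discussed earlier in the thread.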
17. Mar 31, 2014
### ghwellsjr
Frames do not pass each other; the two observers with their clocks pass each other. I already twice described the inverse Doppler relationship when they pass in post #11.
Of course it is true and of course we disagree. Look at any of the diagrams. You will see that prior to the passing, between each pair of dots representing an observer's clock there is another thin line which shows that the Doppler factor is 2. After the passing, you will see that there is an additional dot between each pair of thin lines, which shows a Doppler factor of 0.5. These ratios are identical in all three frames. That is absolutely unambiguous.
I previously suspected that you were treating the rest frame of an observer as preferred but now it appears that you are equating the rest frame of an observer with the observer, otherwise, I cannot understand how you would say "the two frames pass each other". Am I correct? Is that why you think my third diagram is invalid, because there is no at-rest observer in it?
Last edited: Mar 31, 2014
18. Mar 31, 2014
### ANvH
I do not understand why the Doppler shift is "absolute" when a Lorentz transform is applied with β=0.333. The Doppler shift was first established by a Lorentz transform with β=0.6, correct? Then we established the time dilation related to β=0.6 using the first diagram. Then we evaluated that both red and blue observers see the Doppler shift of 2 before the clocks pass each other and then the shift of 0.5 after the clocks pass. I have no problem with that, and the time dilation of the moving frame remains 1.25 before and after.
Each clock has its own frame of reference, so when a clock moves relative to the other clock, the frame associated with the moving clock moves too. When the clock passes, its frame passes. (Is this wrong?)
A clock emits a light wave at every tick, and if a clock's ticking is subject to a Lorentz transform (the frame without a rest observer, β=0.333, has a time dilation of 1.06), then so should be the light wave that was emitted in another frame of reference. The way I understand you suggests that I am wrong, and I don't get it, sorry.
Maybe the misunderstanding is just semantics, I don't know.
19. Mar 31, 2014
### ghwellsjr
It's the coordinates of events that are subject to the Lorentz Transformation when we go from one frame to another, not a clock's ticking, or Doppler shifts, or time dilation. Nothing changes except the values of the coordinates.
If you set up a scenario according to the coordinates of one frame, you are describing where (spatial coordinates) each observer, object, clock, etc is at each moment in time (temporal coordinates). If the object (observer, clock) is moving in that frame, then the object is subject to time dilation, not the frame. If the object is not moving, then its time dilation factor is 1 (or you could say it has no time dilation; they mean the same thing).
Light propagates at c along 45 degree diagonals the way we draw our diagrams. I put tick marks (dots) to show the progress of time on the thick world line for an object (observer or clock). These tick marks are subject to Time Dilation. I typically draw thin lines from some of these tick marks to show the image of the time on the clock as it propagates to the world lines of other objects (observers or clocks). The Doppler is shown by the ratio of thin lines to dots along a world line.
When you transform to a different frame at some speed with respect to the defining frame, you get a new set of coordinates for all the events (dots, intersections of lines, etc) but it doesn't change any information about what is happening to the various objects. This includes Doppler.
So you can transform the coordinates to another frame moving at any speed with respect to the defining frame, it doesn't have to be a speed where any particular object is at rest. You don't even have to have any object at rest in the defining frame, it's totally arbitrary, you do whatever you want.
I hope this clears up any semantic issues.
20. Mar 31, 2014
### ANvH
Thanks,
I also hope that this will clear up a number of semantic issues. The coordinates of the events are transformed and mapped to the frame. One can go from one frame to another. If a clock is moving in a particular frame then it is subject to time dilation. I get this and I assume you will agree with my repeat of your words.
How I see it when it comes to Doppler:
The clock is emitting a signal, which consists of a light wave with a wavelength of say 600 nm (as measured from a defining frame). If the clock is moving in a frame, it is subject to time dilation and appears to tick slower. At each tick a light wave is emitted and the wavelength of 600 nm is (should be) Doppler shifted. If we transform to a frame that will further increase the speed of the clock, the time dilation factor is increased further, the clock ticks even slower, and at each tick the wavelength of the signaling wave is (should be) more Doppler shifted.
What I think how you responded:
My guess is that you do not agree with the Doppler shift assessment. I have tried to remove semantic issues. If I understand you correctly, the defining frame is the frame from where the clock is emitting the signal and any frame transformation should not, does not, and cannot affect the Doppler shift. | 2017-12-17 23:39:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6612629294395447, "perplexity": 363.55872008161225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948599156.77/warc/CC-MAIN-20171217230057-20171218012057-00213.warc.gz"} |
http://www.flyingcoloursmaths.co.uk/category/blog/ | # Browsing category blog
## $n$ maths blogs I often read
What does a blogger do when he has a stinking headache, no ideas, and a commitment to write a blog post? Of course! A lazy list post! Luckily, it’s a lazy list post that gives a shout-out to people whose stuff I enjoy, so it’s not all bad.
## Call for guest posts
If you subscribe to the Sum Comfort newsletter, you might have picked up that I'm going to be a dad soon. I understand this will be pretty all-consuming for at least the next few months - which means I'm more than usually open to guest posts. Here are some guidelines | 2019-06-20 09:49:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1925414651632309, "perplexity": 3080.448454547999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999200.89/warc/CC-MAIN-20190620085246-20190620111246-00022.warc.gz"} |
http://mathhelpboards.com/content/?s=1ca04dd0651c53b59c7b59a6f60f77fd | • # The Front Page
### Welcome to the new CMS. Read me first.
by Published on November 6th, 2009 02:00
Categories: CMS, Article
Welcome to the new CMS. Here's a quick guide of the different areas of this page.
1. Section Navigation Widget. This widget allows you to go to different sections. The "plus icon" means that this section has sub-sections. Clicking on the "plus icon" will display the sub-sections.
...
### Managing CMS Section and Content
by Published on November 10th, 2009 02:00
Categories: CMS, Article
Here's a quick Visual Guide on how to Manage Sections in the new CMS.
1. Editing a Section: If you have permissions to manage a Section, as you hover over the Section title, a pencil icon will display.
After clicking the pencil icon, you will be taken to the Section Edit page. Here's what you will see:
2. Section Name:
Enter the Section Name
3. SEO URL Alias: This is the SEO Friendly URL. By default, if this is blank, the system will automatically copy the section title.
4. Section Layout: For each section you can define an individual section layout.
...
### How to Create a New Article
by Published on November 6th, 2009 02:00
Categories: CMS, Article, Insert Images
Here's a quick visual guide on how to create a new article with the CMS.
1. Create New Article Button: Navigate to the section you want the article to be published in. Click on the "Create a New Article" button. This will open an article form.
2. Article Title: In the "Add/Edit Article" screen, enter the title of your article in the "Title" textbox.
...
### Promoting Articles from the Forums
by Published on November 6th, 2009 02:00
Categories: CMS, Article, Promote, Forums, Insert Images
One of the innovative new features on vBulletin 4.0 Publishing Suite is the cross-publishing "Promote to Article" ...
### Article with Video
by Published on November 6th, 2009 02:00
This is a sample article with a YouTube video clip. ...
• ### Recent Forum Posts
#### Re: Infinitesimal = number?
I am getting the impression that "highmath", "Monoxdifly", and "UYTIYTYIUI" (on a different board) are the same person.
Country Boy Today, 11:50
#### Sum of two infinite series
Evaluation of $\displaystyle \displaystyle \sum_{r=1}^\infty \left(\frac{1}{36r^2-1}+\frac{2}{(36r^2-1)^2}\right)$
jacks Today, 10:43
#### Re: Version of Baire Category Theorem
Hi joypav,
Suppose $\{E_n\}_{n = 1}^\infty$ is a sequence of nowhere dense sets in $X$. For each $n$, $X - \overline{E_n}$ is a dense open
Euge Today, 10:32
#### Re: Remark on the Definition of Differentials ... Lafontaine page 5 ...
In any affine space, including $\displaystyle \mathbb{R}^n$, there is an operation of plotting a vector $\displaystyle \vec{v}$ from point $\displaystyle P$, denoted by $\displaystyle P+\vec{v}$. The result is another
Evgeny.Makarov Today, 09:01
#### Re: Differentiability of Multivariable Vector-Valued Functions ... ...
Thanks Opalg ... | 2018-10-22 16:44:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20097950100898743, "perplexity": 6728.15983784804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515352.63/warc/CC-MAIN-20181022155502-20181022181002-00095.warc.gz"} |
http://mathoverflow.net/api/userquestions.html?userid=10366&page=1&pagesize=10&sort=newest | # Questions (12)
### Convergence at the radius of convergence (1 / 6 / 0 / 108 views) may 14 at 20:07, Neil Strickland 13.3k12866
### Is this basis of simplex polynomials known? (1 vote / 0 / 52 views) may 3 at 13:30, Neil Strickland 13.3k12866
### Polynomial maps between noncommutative groups (4 / 10 / 2 / 372 views) dec 15 at 23:04, pavel 562
### Is $\mathbb{H}P^\infty_{(p)}$ an H-space? (3 / 19 / 1 / 344 views) apr 15 at 22:08, Gustavo Granja 13622
### Modular representations with unequal characteristic - reference request (1 / 4 / 2 / 285 views) may 16 12 at 19:16, Jim Humphreys 25.9k3782
### OCR for handwritten mathematics (4 / 16 / 3 / 2k views) nov 15 at 17:29, Fazal Karim 1
### Fourier theory of characteristic functions (4 / 0 / 277 views) sep 15 11 at 15:05, Neil Strickland 13.3k12866
### Unbased spectral sequences (4 / 8 / 1 / 379 views) sep 2 11 at 15:02, Tyler Lawson 18.7k24490
### Curvature formula (2 / 4 / 1 / 467 views) jul 9 11 at 10:57, Jean-Marc Schlenker 678158
7 | 2013-05-18 17:10:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6988272070884705, "perplexity": 13243.707294334597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382560/warc/CC-MAIN-20130516092622-00051-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://usaco.org/current/data/sol_maxcross_silver_feb17.html | (Analysis by Nick Wu)
We start by making an observation about the optimal block of signals to consider such that we minimize the number of signals that need to be repaired. We claim that there is an optimal block whose leftmost signal does not need to be repaired. Assume for the sake of contradiction that this is not the case, and consider any optimal block of signals. If we slide it over to the leftmost contiguous block to its right whose leftmost signal does not need to be repaired, note that we have only removed signals that did need to be repaired. Therefore, in the worst case, this block requires at most the original number of repairs, making it optimal.
Therefore, we only need to constrain ourselves to consider contiguous blocks where the leftmost block does not need to be repaired. If we sort the signals that are working, we can use binary search to find the rightmost working signal that would be within a block of $K$. This gives us an $O(N \log N + B \log N)$ algorithm.
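The binary-search idea can be sketched in a few lines (Python used here for brevity; the function name `min_repairs` and its argument names are illustrative, not from the official grader):

```python
import bisect

def min_repairs(n, k, broken):
    # broken holds the 1-based positions of signals that need repair
    a = sorted(broken)
    # baseline window [1, k]: count the broken signals inside it
    best = bisect.bisect_right(a, k)
    # windows (s, s + k] that begin just after a broken signal s
    for i, s in enumerate(a):
        if s + k > n:
            break
        j = bisect.bisect_right(a, s + k)  # broken signals at positions <= s + k
        best = min(best, j - i - 1)        # exclude s itself and everything before it
    return best
```

Each `bisect_right` call is the binary search for the rightmost broken signal inside the block.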
Here is Mark Gordon's code, which is actually $O(N \log N + B)$. His solution leverages the fact that as we iterate over the leftmost signal from left to right, the rightmost signal that would be valid also iterates from left to right.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

vector<int> A;

int main() {
  int N, K, B;
  cin >> N >> K >> B;
  A.resize(B);
  for (int i = 0; i < B; i++) {
    cin >> A[i];
  }
  sort(A.begin(), A.end());
  // Count the broken signals in the initial window [1, K].
  int hia = 0;
  while (hia < B && A[hia] <= K) {
    hia++;
  }
  int result = hia;
  // Try each window (A[i], A[i] + K] that starts just past a broken signal.
  // hia only moves rightward, so the whole loop is amortized O(B).
  for (int i = 0; i < B; i++) {
    if (A[i] + K > N) {
      break;
    }
    while (hia < B && A[hia] <= A[i] + K) {
      hia++;
    }
    result = min(result, hia - i - 1);
  }
  cout << result << endl;
  return 0;
}
An alternative approach, which can get the running time as low as $O(N)$ is to think of the array of signals as a binary array: 0 for a working signal, 1 for a broken signal. We then want to find a subarray of length $K$ whose sum is minimal, which can be done very easily after first computing an array $P$ of "prefix" sums, where $P[j]$ gives the sum of the first $j$ elements of our binary array. The sum of any range from signal $i$ to signal $j$ is then easy to obtain by taking $P[j] - P[i-1]$, so evaluating every length-$K$ range takes only $O(N)$ time.
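That prefix-sum approach can be sketched directly (again Python for brevity; the name `min_repairs_prefix` is illustrative):

```python
def min_repairs_prefix(n, k, broken):
    # b[i] = 1 if signal i needs repair (positions are 1-based)
    b = [0] * (n + 1)
    for s in broken:
        b[s] = 1
    # prefix[j] = number of broken signals among positions 1..j
    prefix = [0] * (n + 1)
    for i in range(1, n + 1):
        prefix[i] = prefix[i - 1] + b[i]
    # minimum number of broken signals over every window of length k
    return min(prefix[j + k - 1] - prefix[j - 1] for j in range(1, n - k + 2))
```

Building the prefix array and scanning every window are both single passes, giving the stated O(N) running time.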
Between these two approaches, note that Mark's approach above can more easily extend to the case where signals live at potentially large $x$ coordinates on a number line, not just at locations small enough to fit into an array. | 2018-04-20 01:10:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.724428653717041, "perplexity": 464.2073693289981}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937090.0/warc/CC-MAIN-20180420003432-20180420023432-00290.warc.gz"} |
http://aggflow.com/rspca-pet-xnbl/85ed87-minecraft-binary-adder | Here the output delivers the sum. The example on the right uses a 4-bit design, so you can handle a hexadecimal key. Not really a con: in this circuit the following happens with, say, the code 311: 3 pressed, A activated; 1 pressed, B activated, C activated. The serial full adder has three single-bit inputs: the two number bits to be added and the carry in. A full adder (German: Volladdierer) is a combinational circuit that is usually realized as a digital circuit. In the fourth and final layer, key components are combined to create functional computer systems that can process arbitrary data, often without user oversight. This sort of adder is the one usually used in large calculating computers. And for [1,9] the MUX-table upon. To do this mine the. Now we have to make sure that the state will be erased if the following digit is wrong. Of all these systems, only redstone was specifically added for its ability to manipulate information, in the form of redstone signals. The outputs of a combinational logic circuit depend on the present input only. Designs more than two blocks high are represented by animated gifs or labeled side by side. Binary 2-bit decoder.
Therefore, we combine (((b1=b1 & b2=b2) & b3=b3) & b4=b4) =: (b*=b*). It is also popularly known as a binary adder in digital electronics and communications. Also, in case I wasn't clear in the first post, this adder is 3 ticks for the first 8 bits and 1 tick for every additional 8 bits. It functions on the rules of Boolean algebra and is used extensively in programming. Does anybody have a design for such a decoder? Even if you don't post your own creations, we appreciate feedback on ours. A binary adder is one of the basic combinational logic circuits. In other words, the outputs of a combinational logic circuit do not depend on any previously applied inputs. Combination locks can be very useful in creating adventure maps. Redstone, like electricity, has high reliability and high switching speeds, which has seen it overtake the other mechanical systems as the high-tech of Minecraft, just as electricity overtook the various mechanics such as pneumatics. The first layer is that of atomic components; redstone, redstone torches, repeaters, blocks, pistons, buttons, levers, and pressure plates are all capable of affecting redstone signals. Two binary inputs are provided, which will be added; the result will be their sum (in binary). Because of the delay that the redstone torch adds, the delay of the initial repeater, the one that stays unlocked, must be increased to 2 ticks. Minecraft HDL is a digital synthesis flow for Minecraft redstone circuits. For example, if 5 of the locked repeaters are powered, it means the time difference was 0.4-0.5 seconds, ignoring lag. Note that if you are playing in survival multiplayer, other players will still be able to break into the mechanism and cause it to activate without knowing the password.
For /C\ the reset-event is only the manual-reset-line, from B. For simpler mechanisms, see electronic mechanisms, wired traps, and Redstone. If you want to understand more about binary in Minecraft, look up some of bennycube's videos. That's the bit of wire that will lead to the next adder, which will add the way binary is supposed to work. (0-9 if the input is decimal, or 0-F if the input is hexadecimal.) Make a structure block file of this (so it is easily copied). It relies on two XOR gates, two AND gates, and one OR gate. The /b1\-box outputs the first bit, the /b2\-box the second, and so on. Continue making rows until you have reached only one I.W. It takes two numbers of two bits and adds them. The comparison works the same way for Key[2] and Key[3]. So, if you want to input the combination 1-0-1-0, follow these steps: In theory, you can program the lock from this serial interface as well. There should be eight inputs and outputs. A full adder takes two inputs A and B and a carry input and produces the sum and carry outputs. Try placing levers and redstone lamps at the respective ends to test your creation. /AB*\ now resets the memory-cell /A\ if the second digit is entered falsely and the first key has already been entered. While not very exciting, this is the building block of all other, more complicated adders. 4 bit Adder. They are useful in many ways as they are compact, 5×5×3 at the largest. Such devices include mathematical adders, combination locks, memory-registers, etc. At the end of the gif I set both input numbers to '11', which is the binary representation of the number 3. Many people have seen the massive undertakings of minecrafters.
The actual significance of which input the values go in also doesn't matter, the same as when adding in decimal. In Minecraft we have to use four ANDs, like those on the left-hand side. Visualize that /A\ is on. The main point is to understand that logic gates can be used in circuits to do extremely complex tasks. You can decrease the number of digits by one by setting any digit (except the last) to (0000), and you can open the door permanently by setting the last digit to (0000). The actual configuration of the gates goes a little beyond the scope of this article. A full legend is on the. If you are measuring higher scales, the second signal might not reach all of the repeaters. It adds two 2-bit binary numbers and converts the sum into decimal for simple arithmetic, with a maximum sum of 6. Half and full adders: the adder is now finished. Our particular adder will add three ones, with a grand total adding capacity of three. With some thought, these gates can be compressed (as both AND and XOR gates already exist in the game, and an OR gate can simply be a redstone wire). Despite how it may look, this mechanism is fairly simple. Basically, it's equivalent to the expression "set the output Q to the input D when the input C goes from 0 to 1". Torches: 14. Redstone wire: 15. Size: 5×6×3. Search the diagram for the three blocks near "dt-". Everything required for its construction is shown above, except that under the piston next to the purple there is a block (non-glass, or ice... it must transmit redstone current), and before it a repeater.
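The half/full adder logic described here can be sketched outside the game. The function below mirrors the redstone gate structure (two XORs, two ANDs, one OR); the function and variable names are our own, not from the original post:

```python
# A 1-bit full adder built from the same gates as the redstone version:
# two XORs, two ANDs, and one OR. Names here are illustrative only.
def full_adder(a, b, carry_in):
    """Return (sum_bit, carry_out) for three 1-bit inputs."""
    partial = a ^ b                              # first XOR
    sum_bit = partial ^ carry_in                 # second XOR
    carry_out = (a & b) | (partial & carry_in)   # two ANDs merged by an OR
    return sum_bit, carry_out

# Adding three ones, the adder's full capacity: 1 + 1 + 1 = 0b11
print(full_adder(1, 1, 1))  # (1, 1) -> sum bit 1, carry 1
```

The carry output is what lets single-bit adders chain into multi-bit ones.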
To understand it you will need a basic knowledge of binary addition, which I will upload a video about later tonight or possibly tomorrow. As you build your adder, pay attention to when redstone torches turn on and off. The black lines are imaginary AND gates. If you want to use very long keys, you should also softcode the key-setting; note, though, that while the key-setting input will be very small, the circuit will be much bigger than with a hard-coded key-setting. So B will be on, and (not B) is off. So 0010 + 0011 should yield 0101 (2 + 3 = 5). A decoder is a series of gates that converts a 3-bit binary input to a single active line out of many. It relies on two XOR gates, two AND gates, and one OR gate. With some thought, these gates can be compressed (as both AND gates already exist in the XOR gate, and an OR gate can simply be a redstone wire). They are useful in many ways, as they are compact, 3×5×2 at the largest. You can also measure how long a signal lasts. Now you see Key[i] with i = 1..3; here you set the key you want to use. You can use this in reverse as well (not as a multiplexer, but if you reverse the repeaters, the signal from every ex-output (0-7) will only propagate if it matches the current state of the demultiplexer, so it works like "Output3 = (Input3) AND (Demux=011)"). It is now complete, should all your linkages be made. Here you can also add buttons for [A,E], but I left them out in favor of a better arrangement. As there are many lines combined using implicit ORs, you have to place diodes before each input into a circuit to keep signals from feeding back into other inputs. These circuits are usually composed of many simpler components, such as logic gates. The third layer is high-level components, made by combining logic gates.
Converters include Binary to BCD, Binary to Octal, Binary to Hex, BCD to 7-Segment, etc. So you can use 15 distinct digits, [1,F] or [0,E]. The question is why we use the minor-delay blocks /dt-\. The two binary numbers to be added are A3A2A1A0 and B3B2B1B0, which are applied to the corresponding inputs of the full adders. The levers, when flipped on, indicate 1, and when off indicate 0. Therefore, we handle a key-press event (--/b1 OR b2 OR b3 OR b4\--/dt-\--/dt-\--). You can use any number of bits, but this configuration is already pretty secure even if someone figures out what kind of lock it is. The adder is 5×7×5 per unit (175 blocks) at its smallest, and the carry line only adds a block or two to the side. The actual configuration of the gates goes a little beyond the scope of this article. Combination locks generally have a number of components which must be set in the right combination in order to activate something such as a door. This is the first row. Thereby /A\ will not be reset when we enter the first digit; /A\ should only be reset if /A\ is already active. Here, look through the first two columns. For the output I used redstone lamps that indicate in the same way as the levers below them. In Minecraft, several in-game systems can usefully perform information processing. Students can use provided binary worlds as a guide, or use YouTube videos about binary or decimal locks. You will also need some basic knowledge of how redstone works. Maybe you'll even find a better design! Regards. Advanced redstone circuits encompass mechanisms that require complicated redstone circuitry.
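The way the full adders chain on A3A2A1A0 and B3B2B1B0 can be sketched as follows. This is a minimal Python sketch, not the redstone layout itself; the LSB-first bit-list convention and the function names are our own choices:

```python
def full_adder(a, b, c):
    """1-bit full adder: (sum, carry_out) from two bits and a carry-in."""
    return a ^ b ^ c, (a & b) | ((a ^ b) & c)

def ripple_add(a_bits, b_bits):
    """Add two equal-length bit lists, least significant bit first.

    Each stage's carry-out feeds the next stage's carry-in, exactly as
    the redstone full adders are chained side by side."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]  # the final carry becomes the extra high bit

# 0010 + 0011 = 0101 (2 + 3 = 5); lists are written LSB first
print(ripple_add([0, 1, 0, 0], [1, 1, 0, 0]))  # [1, 0, 1, 0, 0]
```

Because the carry line is the only coupling between stages, the same unit can be copied sideways to widen the adder to any number of bits.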
A personal favorite of mine was the redstone computer, but I have not yet explored enough of electronics to attempt that undertaking (or have a good enough graphics unit). The first picture here is an example of 1 + 1 = 10 (1 + 1 = 2). Hope I broke it down for you (or others) somewhat well! In the following we'll use (0)16 = (1111)2. This design is not very practical as a lock, but might be a nice feature on something like a puzzle challenge map. This page was last edited on 3 September 2020, at 02:45. We can easily implement this using 3 "punch cards" that consist of solid blocks and air. I'm currently working on a 4-bit multiplier in Minecraft and have a design for a 2-bit multiplier, but can't find one for a 4-bit version and can't figure it out. It consists of three inputs (. Here's a two-bit adder made from a couple of gates (four, actually): the two inputs on the left are on, so the output is the binary form of 2: on and off, or "10". Dependencies are on the previous output only. If the number of bits is Q, the most significant bit reverses every Q/2 numbers, the next bit reverses every Q/4 numbers, and so on until we get to the Qth bit. The full adder adds in ones. The first column represents the input digit in (hexa)decimal, the second represents the input digit in binary code. If every comparison is correct, we set the state indicating that the first digit is correct. You can also convert a 1-of-16 signal to a 4-bit binary number. Then connect the purple feeds to the orange lines.
The systems include water, sand, minecarts, pistons, and redstone. The hard-coded key-setting is a compromise for a much smaller circuit when using keys that are not too long. You can only use 15, because the state (0)16 = (0000)2 won't activate the system. With a full adder you can add three one-digit binary numbers. The figure below shows a parallel 4-bit binary adder, which has three full adders and one half adder. The OR after /AB*\ is used for manual resetting, i.e. so the following stage won't be activated by the current digit. A demultiplexer is a circuit that uses the following logic: the most obvious way to implement a demultiplexer would be to put a whole bunch of logic gates together, but even with 3 or 4 bits it turns into a mess. The serial binary adder, or bit-serial adder, is a digital circuit that performs binary addition bit by bit. Edit: I'll put together a world save. The bar to set the key gets bigger, the longer the key you want to use. Half adders are the combination of an XOR and an AND gate, and they are the basis of binary addition (and as such of every other operation too). For example, a logic gate is a device (either physical or digital) that takes one or more binary inputs and produces a single binary output. We have constructed a 2-bit binary full adder for our System Source Computer Museum. Could someone provide me with the basic logic behind binary multipliers?
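Rather than a mess of gates, the demultiplexer's behavior is easy to state in code. This is an illustrative sketch (the function name and the 0/1 list representation are our own):

```python
def demux(select_bits, signal=1):
    """3-bit demultiplexer: route `signal` to exactly one of 8 output lines.

    Only the line whose index matches the binary select value carries the
    signal; every other line stays off."""
    index = int("".join(str(b) for b in select_bits), 2)  # e.g. [0,1,1] -> 3
    outputs = [0] * 8
    outputs[index] = signal
    return outputs

# Demux = 011 activates Output3 only: "Output3 = (Input3) AND (Demux=011)"
print(demux([0, 1, 1]))  # [0, 0, 0, 1, 0, 0, 0, 0]
```

Run in reverse (as in the repeater trick described above), the same truth table acts as a gatekeeper: a line's signal propagates only when it matches the current select state.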
A full adder takes two inputs A and B and a carry input, and produces the Sum and Carry outputs. Sorry for the quality, I'm only using free software and a cheap microphone. Minecraft - How to Build a Binary Adder-Subtractor Array With Basic Redstone Tutorial: here is my first ever YouTube video. Therefore, we have to mux the used buttons to binary data. Any arithmetic operation in digital circuits happens in binary form; therefore, binary addition is one of the most basic and important arithmetic operations needed to process instructions. Requirements for each output line (excluding separating diodes): a series of gates that converts a 4-bit binary input to a single active line out of many (e.g. …). This particular one uses binary. The example on the right uses ORs (>=1), XNORs (=), RS NOR latches (SR) and some delays (dt*). However, you will need to be careful when reading these timers. This circuit allows you to input a 4-bit number with two levers. Even rows will have one more I.W. than the previous row, while odd rows (not row one, because there is no previous row, and it is technically classified as a unit, not odd) will have the same. Two binary inputs are provided, which will be added; the result will be the sum of these (in binary). Because the repeaters will still be powered when the timer is used again, the circuit must be obstructed between uses in order to unlock the repeaters. Place the I.W. (interworkings) with a spacing of one block, equal in number to the number of bits you are inputting (for each addend). Hi everyone, in this thread you can show, present, or explain your own versions of the full adder. The first output of them is the 1-bit, the second the 2-bit, and so on. This means that once this mechanism is built, it can be copied throughout the entire device.
A full adder gives the number of 1s among its inputs, in binary representation. The D flip-flop is an RS NOR latch that sets its value to the D input when the ">" (clock) input changes its state from low to high (in some cases, from high to low). Of course, you are also welcome to make videos about it. For example, if the repeaters are set to 4 ticks and the first 3 are active, it means the time difference was 0.8-1.2 seconds. Schematic symbol for a 1-bit full adder, with Cin and Cout drawn on the sides of the block to emphasize their use in a multi-bit adder. The first step is to make a user interface; this is where you give it commands from. Truth table for a three-bit sorting device: timers can detect the time difference between the first input and the second. Here we check whether any key is pressed, and we forward the event with a minor delay. If you want to handle 16 states, you edit the logic to accept a 5-bit input, where the 5th bit represents the (0)16 state. To the right you can see what's called a full adder, which basically can add 1 + 1, which in binary gives the answer 10. The inputs are at the bottom and right, and the outputs are at the top and left. In this lock, the > signal propagates from the rightmost flip-flop to the leftmost, so the signal shifts to the right. So it is prevented from being activated before /B\ is true. You will need repeaters to replenish the signal. For the XNORs I would prefer the C-design.
Logic gates can be used to create what is called an adder. The white and orange ends represent two values being added, such as a + b, and the wonder of binary is that there are only four possible responses: 0, 1, or 10. Note: on the white line, the block powering the piston is supposed to have a repeater before it. Next you will make another row, but feed the lowest value back to the lowest value on your display (the U.I.). When you add 3 + 5 it will equal 8, and so long as the digits are in the same place, that value will always be the same, i.e. 300 + 500 = 800. Use the MUX table above, and for (0)h := (1111)2. If we enter the first digit, we have to compare the bits in pairs (b1=b1, b2=b2, b3=b3, b4=b4). You can set your key here with levers, in binary encoding. On the other hand, pins 6, 2, 15 and 11 hold the second 4-bit number, where pin 6 is the MSB and pin 11 is the LSB. The major delay /dt+\ must be used, because /A\ resets itself if we press the digit button too long. A full adder adds binary numbers and accounts for values carried in as well as out. It will be deactivated when a pressure plate resets /A\ and /B\. As you can see, this system is very compact and comprehensible.

The adder is easily expandable, as shown. The first step is to make a user interface; this is how you input the addends: build two rows of inputs, with 8 inputs each, arranged in a specific way. The masks are "punch cards" of solid blocks and air, moved by pistons with slime blocks. Carry input and output are aligned so that many of these modules can be connected in series, and once this mechanism is built it can be copied throughout the entire device. The maximum value this adder can add is 11111111 + 11111111 = 111111110 (255 + 255 = 510); another example is 1100 + 1000 = 10100 (12 + 8 = 20). D flip-flops can be used to shift the value from left to right. The key is order-sensitive: the stored state is erased if any digit is entered falsely. If the signals are short, the second device may not have time to read the data; the elapsed time can be determined by how far the signal travels. Students collaboratively built a "binary learning world" that can be used as a guide, or you can instead create a quiz or some other form of assessment of student knowledge of binary and decimal conversion. Minecraft HDL is an attempt to use industry-standard design tools and methods to generate redstone circuits.
http://papisiano.com/2022/07/31/family-components-and-the-danger-of-extreme-covid/ | This doc describes the obligations of military commanders for environmental safety during the preparation and execution of navy activities. It also recognises the necessity for “a harmonisation of environmental ideas and insurance policies for all NATO-led military activities”. It instructs NATO commanders to use “greatest practicable and feasible environmental safety measures”, in an goal to reduce the environmental impacts brought on by army exercise. The document is complemented with several other NATO Environmental Protection Standardization Agreements and Allied Joint Environmental Protection Publications , which are all focused on defending the surroundings throughout NATO-led navy activities. The aim of the STEEEP is to integrate environmental protection and energy effectivity laws into technical requirements and specs for armaments, gear and supplies on ships, and the ship to shore interface in Allied and partner countries’ naval forces.
A clumped distribution may be seen in plants that drop their seeds straight to the ground, such as oak trees; it can also be seen in animals that live in social groups. Uniform distribution is observed in plants that secrete substances inhibiting the growth of nearby individuals. It can also be seen in territorial animal species, such as penguins that maintain a defined territory for nesting. The territorial defensive behaviors of each individual create a regular pattern of distribution of similar-sized territories and of individuals within those territories.
A continuum exists from closed populations, which are geographically isolated from, and lack exchange with, other populations of the same species, to open populations that show varying degrees of connectedness. There are two principal types of competition, namely interference and exploitative. Interference competition occurs when one individual directly harms another. Interference may be dramatic, as in lethal aggression, or subtle, as when social interactions reduce the time available for gathering resources or increase the risk of predation. Exploitative competition occurs when one individual consumes a resource, such as food, that otherwise would have been consumed by another individual.
A multivariable logistic regression model was used to estimate the association between the presence of children in households, the number of individuals living in a household, and property type on the risk of hospitalization with COVID symptoms. We ran three models, all adjusted for age, gender, race/ethnicity, income, close contact, essential worker status, and the county-level community transmission rate. Models examining the exposures of number of people living in the household and property type were also adjusted for the presence of children in the household. These variables were chosen based on hypothesized causal associations and confounders, and directed acyclic graphs were developed for each model. Study participants were people screened for enrollment into the Communities, Households, and SARS-CoV-2 Epidemiology COVID Cohort study who completed an initial baseline assessment.
It is considered an important topic, capable of throwing light on the nature of population education. The statistic is the mean grade point average, $$\bar{x}$$, of the sample of one hundred college students. The sample is a random selection of one hundred college students in the United States. Or, we might use $$\hat{p}$$, the proportion in a random sample of one thousand likely American voters who approve of the president's job performance, to estimate p, the proportion of all likely American voters who approve of the president's job performance. Parameter: a parameter is any summary number, like a mean or percentage, that describes the whole population.
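The parameter/statistic distinction can be made concrete with a small simulation. The true approval rate p = 0.52 below is an invented value for illustration, not a figure from the text:

```python
import random

random.seed(42)

p = 0.52  # parameter: the (assumed) true approval rate in the whole population
# statistic: the proportion p-hat observed in a random sample of 1,000 voters
sample = [1 if random.random() < p else 0 for _ in range(1000)]
p_hat = sum(sample) / len(sample)
print(p_hat)  # close to, but generally not exactly, 0.52
```

The parameter p is fixed but unknown in practice; the statistic p-hat varies from sample to sample and is what we actually compute.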
Strategic management is the ongoing planning, monitoring, analysis and assessment of all the necessities an organization needs to … Although the researchers would not have a precise number, as long as the sample is large enough and the study adequately controlled, they should have a number that gives them a fairly good idea of the prevalence of fixed-mobile substitution among that demographic. According to the United States Census Bureau, the world's population was about 7.5 billion in 2019, and the 7 billion mark was surpassed on 12 March 2012. According to a separate estimate by the United Nations, Earth's population exceeded seven billion in October 2011, a milestone that offers unprecedented challenges and opportunities to all of humanity, according to UNFPA.
What's more, labeling a population "at risk" or "vulnerable" without providing any historical context suggests that greater vulnerability is an inherent characteristic of that population, although this vulnerability often stems from centuries of exploitation by Europeans. Yet by the mid-1980s, more critically minded scientists determined that drier conditions in the Sahel were an effect of large-scale climatic shifts – specifically, changes in ocean surface temperatures – not of local human actions. This was a direct echo of accusations made by 19th-century French colonial officials to justify their own incursions into the region.
Infection with the COVID-19 virus may result in serious complications and millions of deaths, especially among older people and those who have existing health conditions. What percentage of a community must be immune in order to achieve herd immunity? The more contagious a disease is, the greater the proportion of the population that needs to be immune to the disease to stop its spread. It's estimated that 94% of the population must be immune to interrupt the chain of transmission. Often, a percentage of the population must be capable of getting a disease in order for it to spread. If the proportion of the population that is immune to the disease is greater than this threshold, the spread of the disease will decline.
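The threshold described here follows the classic formula 1 - 1/R0, where R0 is the basic reproduction number. The R0 values below are illustrative choices of ours, not figures from the text:

```python
def herd_immunity_threshold(r0):
    """Minimum immune fraction needed to halt spread: 1 - 1/R0."""
    return 1 - 1 / r0

# A highly contagious disease with R0 = 16 needs roughly the 94% immunity
# cited above; a milder disease with R0 = 2 needs only 50%.
print(herd_immunity_threshold(16))  # 0.9375
print(herd_immunity_threshold(2))   # 0.5
```

Intuitively, when more than this fraction is immune, each case infects fewer than one new person on average, so the outbreak declines.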
Researchers and policymakers share the task of selecting appropriately from among the alternative rural definitions currently available, or of creating their own unique definitions. These are just a few examples of how one might use standard deviation, but many more exist. Generally, calculating standard deviation is valuable any time you want to know how far from the mean a typical value in a distribution lies.
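As a concrete illustration of that last point, here is a from-scratch standard deviation calculation (the toy data set is invented for the example):

```python
from math import sqrt

def std_dev(values, population=False):
    """Standard deviation: how far a typical value sits from the mean.

    Uses the n-1 (sample) denominator unless population=True."""
    n = len(values)
    mean = sum(values) / n
    squared_devs = sum((v - mean) ** 2 for v in values)
    return sqrt(squared_devs / (n if population else n - 1))

data = [2, 4, 4, 4, 5, 5, 7, 9]        # toy data, mean = 5
print(std_dev(data, population=True))  # 2.0
```

So a "typical" value in this data set sits about 2 units away from the mean of 5, which is exactly the kind of summary the paragraph above describes.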
In 2021, NATO adopted an ambitious Climate Change and Security Action Plan to mainstream climate change considerations into NATO's political and military agenda. In 2006, NATO's Science Committee merged with the CCMS to form the Science for Peace and Security Programme, to develop initiatives on emerging security challenges, including environmental security issues such as water management and the prevention of natural catastrophes, and energy security. Advancing inclusive research is complex, involving genomic intricacies and intersecting social drivers of health. Achieving broader diversity, equity, and inclusion in clinical trials requires nothing less than a universal commitment to diverse, equitable and inclusive research that can lead to better medical treatments for more people. It is time to move from "should" to "must." Rather than recommend, we feel the FDA should require the development and implementation of enrollment plans centered around increasing diversity for all Phase 2 through 4 trials.
Furthermore, due to accessibility issues, marginalized tribes or villages might not provide data at all, making the data biased toward certain regions or groups. Dense population clusters usually coincide with geographical areas often referred to as a city, or as an urban or metropolitan area; sparsely populated areas are known as rural. These terms do not have globally agreed-upon definitions, but they are useful in general discussions about population density and geographic location. Studies of human populations usually take place at or below the city level, in places like Manhattan, which is part of New York City, New York, United States.
When you measure a certain observation from a given unit, such as a person's response to a Likert-scaled item, that observation is called a response (see Figure 8.2). In other words, a response is a measurement value provided by a sampled unit. Each respondent will give you different responses to different items in an instrument.
If the population grows indefinitely, fewer and fewer resources will be available to sustain the population. This process, in which per capita population growth changes when population density changes, is referred to as density dependence. In conclusion, to improve causal inference and the policies and action based on this knowledge, the population sciences need to broaden and deepen theorizing about who and what makes populations and their means. At a time when the topic of causality in the sciences remains hotly debated by philosophers and researchers alike, all parties nonetheless agree that "the question of how probabilistic accounts of causality can mesh with mechanistic accounts of causality desperately needs answering". As my article makes clear, the concept and reality of "population" reside at the nexus of this question. Clarifying the substantive defining features of populations, including who and what structures the dynamic and emergent distributions of their characteristics and components, is thus crucial to both analyzing and altering causal processes.
Growing opposition to the narrow population control focus led to a significant change in population control policies in the early 1980s. A population is a group of individuals of the same species occupying a particular geographic area. Populations may be relatively small and closed, as on an island or in a valley, or they may be more diffuse, with no clear boundary between them and a neighboring population of the same species. For species that reproduce sexually, the members of a population interbreed either exclusively with members of their own population or, where populations intergrade, to a greater degree than with members of other populations.
Additionally, as a outcome of transportation has become easier and more frequent, diseases can unfold quickly to new regions. In both circumstances, there’s a sequential change in species until a more or less everlasting community develops. Voracious feeders and fast reproducers, Asian carp could outcompete native species for meals and will lead to their extinction. It competes with native species for these sources and alters nursery habitats for other fish by eradicating aquatic plants. Another species, the silver carp, competes with native fish that feed on zooplankton. | 2023-04-02 00:27:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3377079963684082, "perplexity": 3132.9859724883904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950363.89/warc/CC-MAIN-20230401221921-20230402011921-00582.warc.gz"} |
https://mathoverflow.net/questions/386168/which-graphs-on-n-vertices-have-the-largest-determinant | # Which graphs on $n$ vertices have the largest determinant?
This is a question that seems like it should have been studied before, but for some reason I cannot find much at all about it, and so I am asking for any pointers / references etc.
The determinant of a simple graph is the determinant of its adjacency matrix which is, of course, a symmetric binary matrix with zero-diagonal.
Question: Which graph has the largest value of $$|\det(A)|$$ over all simple undirected graphs on $$n$$ vertices?
(We could ask separately for the graphs with minimum determinant, which will be some negative number, and those of maximum determinant, but by taking the absolute value, this just becomes a question of which graphs have the most extreme determinant.)
For small numbers of vertices, we can just find the answer almost by hand:
• On $$2$$ vertices, the winner is the complete graph $$K_2$$ with spectrum $$\{-1,1\}$$, so determinant $$-1$$, of absolute value $$1$$.
• On $$3$$ vertices, the winner is $$K_3$$ which has absolute value of determinant equal to $$2$$
• On $$4$$ vertices, the winner is $$K_4$$ with value $$3$$.
• On $$5$$ vertices, there are two winners (both with value $$4$$), namely $$K_5$$ and the "bowtie" graph (two triangles sharing a common vertex).
Sequence so far is $$1$$, $$2$$, $$3$$, $$4$$ (don't bother looking this up in OEIS).
For larger numbers of vertices, we turn to the computer and discover that on $$6$$ vertices, the maximum value is $$7$$, and this is realised by two graphs, namely a triangle and a $$4$$-clique sharing a vertex, and two $$4$$-cliques sharing an edge.
From there, the sequence continues: 12, 28, 128, 256, 576 for 7, 8, 9, 10 and 11 vertices respectively, with between 2 and 7 graphs achieving the maximum for each of these values.
So now I do have enough to look up in the OEIS, but there are no matches.
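For small $$n$$, the values above can be double-checked by brute force over all labeled graphs (feasible up to about $$n=6$$). A pure-Python sketch (the function names are mine, not from any package):

```python
from itertools import combinations, permutations, product

def det(A):
    """Integer determinant via the Leibniz expansion (fine for tiny n)."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        # sign of the permutation from its inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        term = (-1) ** inv
        for i in range(n):
            term *= A[i][perm[i]]
        total += term
    return total

def max_abs_det(n):
    """Max |det(A)| over adjacency matrices of all simple graphs on n labeled vertices."""
    edges = list(combinations(range(n), 2))
    best = 0
    for bits in product((0, 1), repeat=len(edges)):
        A = [[0] * n for _ in range(n)]  # symmetric binary matrix, zero diagonal
        for (i, j), b in zip(edges, bits):
            A[i][j] = A[j][i] = b
        best = max(best, abs(det(A)))
    return best
```

Running `max_abs_det` for $$n = 2,\ldots,5$$ reproduces the sequence $$1, 2, 3, 4$$ quoted above.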
The problem of finding the maximum determinant of a (0,1)-matrix, often called the Hadamard maximal determinant problem has been extensively studied, and there are lots of bounds, and constructions of extremal matrices etc. (There is a mapping between (0,1)-matrices and (-1,1)-matrices which changes the determinant predictably and when they exist, Hadamard matrices are the winners.)
So lots is known, and it is summarised in sequence A003432 on the OEIS which gives exact values up to $$n=20$$.
Just for comparison, for $$n = 6$$ to $$n=11$$, we have
• 7, 12, 28, 128, 256, 576 (for graphs)
• 9, 32, 56, 144, 320, 1458 (for (0,1)-matrices)
From several sources, including the OEIS, I have been advised to check a website attributed to Will Orrick at http://www.indiana.edu/~maxdet/ but this appears to be offline or removed or something, and nor could I find his email address in the directory for that university.
So my question remains: what is known about maximal determinant (in absolute value) of the adjacency matrix of a simple graph on $$n$$ vertices?
• You can often try the internet wayback machine when a page is down. In this case, it works and you can check an old version of Will Orrick's site at: web.archive.org/web/20160229005938/http://www.indiana.edu/… (Feb 29 2016) and a few other dates if you wish. – Asvin Mar 11 at 19:28
• It's at least exponential by taking $\lfloor n/s\rfloor$ disconnected copies of $K_s$ to get a det of $\pm (s-1)^{\lfloor n/s\rfloor}$ (which is about $1.3^n$ for $s=4,5$). Another condition you might add is connectedness. – rikhavshah Mar 12 at 3:17
• And an interesting question might be: which are the extremal graphs with the highest (or for that matter: lowest) symmetry, in terms of size of automorphism groups? – Wolfgang Mar 16 at 17:50
• If $n$ is a prime power of the form $4k+1$ then the Paley graph <en.wikipedia.org/wiki/Paley_graph> has determinant $2k^{2k+1} = 2^{-n} (n-1)^{(n+1)/2}$; this grows faster than exponentially, and is within a factor of about $2^n$ of the upper bound $n^{n/2}$ from Hadamard's inequality <en.wikipedia.org/wiki/Hadamard%27s_inequality>. It's probably even closer to optimal than that because each row of $A$, though possibly of norm as large as $\sqrt n$, is within $\frac12 \sqrt n$ of the $1$-dimensional space spanned by $(1,1,1,\ldots,1)$. – Noam D. Elkies Mar 17 at 4:36
• According to mathoverflow.net/users/484/will-orrick Will Orrick was last seen here yesterday. He maintains a Maximal Determinant Blog at willorrick.wordpress.com – Gerry Myerson Mar 17 at 5:51
This may not be responsive, but I think it's amusing. I apologize in advance. To get an order 16 graph with determinant 327680 begin with (a) the line graph of $$K_5$$, (b) $$K_5$$, and (c) an isolated point. Join all vertices in (a) to (c) and no vertices in (b) to (c). Join each vertex in (b) to a maximum clique [of size 4] in (a).
The graph6 format is: OV$$`$$vfmlJJNhRVDmdzkUVX | 2021-06-21 15:36:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 31, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7288992404937744, "perplexity": 284.25872892748123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488286726.71/warc/CC-MAIN-20210621151134-20210621181134-00125.warc.gz"} |
https://blog.timescale.com/blog/how-to-proactively-manage-long-term-data-storage-with-downsampling/?utm_source=timescale-prom-webinar-recap&utm_medium=blog&utm_campaign=apr-2020-advocacy&utm_content=downsampling-blog | # How to proactively manage long-term data storage with downsampling
Eliminate the need to store billions of rows with TimescaleDB’s upgraded continuous aggregates
If you live in the world of monitoring and observability, you know that it requires the collection of time-series data at a massive scale and in as close to real-time as possible to understand system and application health and performance. As a result, collecting data in near real-time granularity generates very large data sets that can be difficult and costly to manage, and create significant database performance problems.
Many monitoring tools address these issues by trading long term analytical value for reduced storage cost and improved performance. With aggressive default data retention policies and simply purging “historical” data from the system, these monitoring tools reduce data storage cost and protect performance, but they also eliminate the ability to extract long term analytical value from the data.
With TimescaleDB, you don't have to sacrifice long term analytics of your monitoring data to reduce storage costs and improve performance. Instead, TimescaleDB allows you to reduce the granularity of your data by aggregating data into coarser periods of time and dropping the finer-grained real-time data while maintaining data accuracy. This allows you to reduce costs by decreasing your storage footprint while at the same time maintaining the ability to perform analytical queries of your monitoring data over longer time horizons.
In this post, we'll cover what downsampling is, and show you how to combine data retention policies and continuous aggregates to save money - while maintaining analytical power of your data.
We'll use a hypothetical example to demonstrate how to reduce storage needs from 260,000 rows/day to nine, keeping only the summaries of time-series data, instead of the full-fidelity dataset.
## What is downsampling?
Historical data provides baseline performance context for your applications - by measuring “normal” performance, you can identify and predict anomalous performance. In order to get the most of your historical data, you can leverage downsampling to help you avoid the trade-off of historical data value vs. storage cost and management.
Downsampling is the act of applying a mathematical aggregation function (i.e. AVG()) to roll up a very granular time series data set (i.e. 3 second intervals) to a more coarse grained set of data (1 hour, 5 hours, 1 day averages as examples). As a result, your data can take on a new role: analytics.
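In plain Python terms (an illustrative sketch, not TimescaleDB code; the function name is invented), a rollup just buckets timestamps and averages each bucket:

```python
from collections import defaultdict

def time_bucket_avg(samples, width_s):
    """Group (epoch_seconds, value) samples into fixed-width buckets
    and average each bucket -- the essence of time_bucket() + AVG()."""
    buckets = defaultdict(list)
    for t, v in samples:
        buckets[t - t % width_s].append(v)
    return {start: sum(vs) / len(vs) for start, vs in sorted(buckets.items())}
```

Two hours of 3-second samples (2,400 rows) collapse to just two hourly averages.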
For example, let’s assume we are monitoring a single machine instance, and for the purposes of this exercise we are only focused on CPU usage metrics. We are monitoring an 8-CPU-core instance, and we measure usage in the system and user spaces, collecting data every 3 seconds.
Here is what something like this will look like on a monitoring dashboard:
In order to understand what is happening in real time, we need to collect the data at this high velocity and high frequency (Hint: a skill TimescaleDB is built for).
However, as this data ages, its purpose changes. When data is new, individual data points are important for debugging and real-time feedback. As data ages, the importance of individual data points often matters less than statistical analysis over large amounts of data.
If we downsample historical data, it will still help us spot trends, set baselines for what we consider “normal”, and allow us to be more accurate in our predictions around future behavior. At the same time, downsampling will reduce storage volumes. Let's take a look at how we can make this happen.
## The downsampling process: a brief tutorial
First, we need to decide what data to downsample. Let’s assume the data in its original format is collected every three seconds, and that we need coarser time series for analysis purposes (for example, 1-hour, 5-hour, and daily averages).
We can do this with continuous aggregates, time_bucket, and AVG() functions in TimescaleDB to roll up the VERY granular 3 second interval, to views that offer the data at 1 hour, 5 hour, and daily intervals.
To help us manage the storage costs, we are also going to use the TimescaleDB data retention policies to remove data after the five day window. Let's walk through these steps.
### #1 Create a continuous aggregate
In this case, we will create a continuous aggregate with a daily average of the CPU usage in both the user and system space. We will roll up the 3 second data to a daily average for these metrics. This is what it will look like:
```sql
CREATE VIEW CPU_daily_rollups
WITH (timescaledb.continuous,
      timescaledb.ignore_invalidation_older_than = '5d',
      timescaledb.refresh_lag = '-30m',
      timescaledb.refresh_interval = '1d')
AS
SELECT time_bucket('1d', time), cpu, AVG(usage_system) AS system_usage, AVG(usage_user) AS user_usage
FROM cpu
GROUP BY time_bucket('1d', time), cpu;
```
As you can see, we are building a continuous aggregate that will produce our daily averages using the time bucket function (for more information on building a continuous aggregate click here), and we are refreshing this view once per day.
The continuous aggregate job that we have created above will take the average of the utilization across the entire 24 hour period, and rather than needing to keep 259,200 rows per day (which I needed when I was monitoring this in real-time), I can simply keep 9 entries to represent daily CPU Usage (one per core and a total), which will look like this:
```sql
SELECT * FROM cpu_daily_rollups;
```
Now I can simply repeat this process to create continuous aggregates for 1 hour and 5 hour windows and in this use case I will have everything I need for long term analysis.
### #2 Add a data retention policy
The second part of this exercise is to reclaim the space that is being taken up by the underlying 3 second data points.
As I mentioned earlier, we are storing a little more than 259K rows per day per monitored machine in this case. The data has served its purpose for real-time monitoring and has been converted to a less granular form more appropriate for long-term analysis (see above). The next step is to set up a policy that will start to delete the finer-granularity data we originally collected.
In this case, we will use a TimescaleDB data retention policy:
```sql
SELECT add_drop_chunks_policy('cpu', INTERVAL '5 days', cascade_to_materializations=>FALSE);
```
Here we are dropping the underlying data after 5 days. While we are dropping the granular 3 second data records, we will maintain our continuous aggregate views of this data.
### #3 Perform analytics
Now that we have created the needed view and downsampled our data, we can start the process of running analytics on that data.
To illustrate, I’ve connected an Excel sheet to my TimescaleDB instance and set up a basic pivot table that plots the CPU usage we set up in Step 1 (i.e., our continuous aggregate that rolls up our data from three-second intervals to hourly averages).
### Recap & next steps
In this post, we’ve covered an overview of downsampling and how – and why – it’s important to leverage it for IT monitoring use cases. Of course you can apply continuous aggregates and data retention polices to a variety of other scenarios. If you are interested in learning more about how continuous aggregates work and to see if they are a fit for you, read this blog “Continuous aggregates: faster queries with automatically maintained materialized views”.
If you are ready to start downsampling, we encourage you to check out the documentation.
Note: For reference, the ability to enable true downsampling described in this post is included with the recent release of TimescaleDB 1.6. If you are interested in staying up-to-date with all of our releases, sign up for our Release Notes.
This post was written by | 2021-09-19 12:06:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3633550703525543, "perplexity": 1323.8960972691239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056856.4/warc/CC-MAIN-20210919095911-20210919125911-00156.warc.gz"} |
https://pure.mpg.de/pubman/faces/ViewItemFullPage.jsp?itemId=item_2539955 |
# Item
An image-processing method to detect sub-optical features based on understanding noise in intensity measurements
Bhatia, T. (2018). An image-processing method to detect sub-optical features based on understanding noise in intensity measurements. European Biophysics Journal, 47(5), 531-538. doi:10.1007/s00249-017-1273-z.
### Basic
Genre: Journal Article
### Files
Article.pdf (Publisher version), 2MB
Name: Article.pdf
Visibility: Public
MIME-Type: application/pdf
### Creators
Creators:
Bhatia, Tripta1, Author
Affiliations:
1Rumiana Dimova, Theorie & Bio-Systeme, Max Planck Institute of Colloids and Interfaces, Max Planck Society, ou_1863328
### Content
Free keywords: Open Access
Abstract: Accurate quantitative analysis of image data requires that we distinguish between fluorescence intensity (true signal) and the noise inherent to its measurement to the extent possible. We image multilamellar membrane tubes and beads that grow from defects in the fluid lamellar phase of the lipid 1,2-dioleoyl-sn-glycero-3-phosphocholine dissolved in water and water-glycerol mixtures using a fluorescence confocal polarizing microscope. We quantify image noise and determine the noise statistics. Understanding the nature of image noise also helps in optimizing image processing to detect sub-optical features, which would otherwise remain hidden. We use an image-processing technique, "optimum smoothening", to improve the signal-to-noise ratio (SNR) of features of interest without smearing their structural details. A high SNR renders the desired positional accuracy with which it is possible to resolve features of interest with width below the optical resolution. Using optimum smoothening, the smallest and largest core diameters detected and discussed in this paper are of width $$88 \pm 23$$ and $$6860 \pm 50$$ nm, respectively. The image-processing and analysis techniques and the noise modeling discussed in this paper can be used for detailed morphological analysis of features down to sub-optical length scales obtained by any kind of fluorescence intensity imaging in the raster mode.
### Details
Language(s):
Dates: 2018-02-01; 2018-07
Publication Status: Published in print
### Source 1
Title: European Biophysics Journal
Source Genre: Journal
Publ. Info: Berlin : Springer
Pages: - Volume / Issue: 47 (5) Sequence Number: - Start / End Page: 531 - 538 Identifier: ISSN: 0175-7571 | 2020-07-05 23:16:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4243944585323334, "perplexity": 8776.78733033833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655889877.72/warc/CC-MAIN-20200705215728-20200706005728-00056.warc.gz"} |
https://gatkforums.broadinstitute.org/gatk/discussion/3036/selecting-all-variants-that-overlap-a-specific-position |
# Selecting all variants that overlap a specific position
Hi,
I have a vcf file of indels, and an interval file of single-base positions I am interested in. I would like to select all of the variants in the file that overlap the positions I'm looking at. Using select variants with the interval list I have returns nothing, because the indels are not contained within any intervals. I don't want to just add an arbitrary amount of padding to my intervals because I don't want to include other nearby variants that don't actually overlap my sites.
Is there a way to do this easily in GATK?
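For context, the overlap test I have in mind is simply this (a hypothetical Python sketch, using the VCF convention that a record spans len(REF) bases starting at 1-based POS — not a GATK feature):

```python
def overlaps(var_pos, ref_allele, target_pos):
    """True if a VCF record whose REF spans [var_pos, var_pos + len(REF) - 1]
    (1-based, inclusive) covers target_pos."""
    return var_pos <= target_pos <= var_pos + len(ref_allele) - 1
```

For example, a deletion at position 100 with REF "ACGT" covers positions 100 through 103 but not 104.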
Tagged: | 2017-07-25 18:41:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2888129651546478, "perplexity": 5877.283194850464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425352.73/warc/CC-MAIN-20170725182354-20170725202354-00637.warc.gz"} |
https://socratic.org/questions/how-do-you-differentiate-f-x-sinx-cosx | How do you differentiate f(x)=sinx+cosx?
Jan 24, 2017
$f \left(x\right) = \sin \left(x\right) + \cos \left(x\right) \implies f ' \left(x\right) = \cos \left(x\right) - \sin \left(x\right)$
Explanation:
Since the derivative of a sum is the sum of the derivatives
$f ' \left(x\right) = \left(\sin \left(x\right)\right) ' + \left(\cos \left(x\right)\right) '$
Since $\frac{d}{\mathrm{dx}} \sin \left(x\right) = \cos \left(x\right)$ and $\frac{d}{\mathrm{dx}} \cos \left(x\right) = - \sin \left(x\right)$
We have
$f ' \left(x\right) = \cos \left(x\right) + \left(- \sin \left(x\right)\right) = \cos \left(x\right) - \sin \left(x\right)$ | 2021-07-29 08:10:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.914980947971344, "perplexity": 277.66467571522105}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153854.42/warc/CC-MAIN-20210729074313-20210729104313-00647.warc.gz"} |
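A quick numerical sanity check of the result above (a sketch; the helper is ours, not from any library) compares a central finite difference against cos(x) - sin(x):

```python
import math

def numeric_derivative(f, x, h=1e-6):
    """Central finite difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: math.sin(x) + math.cos(x)
f_prime = lambda x: math.cos(x) - math.sin(x)  # the symbolic answer
```

At any sample point the two agree to well within the finite-difference error.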
https://library.kiwix.org/datascience.stackexchange.com_en_all_2021-04/A/question/35741.html | ## Latent loss in variational autoencoder drowns generative loss
I'm trying to run a variational auto-encoder on the CIFAR-10 dataset, for which I've put together a simple network in TensorFlow with 4 layers in the encoder and decoder each, an encoded vector size of 256. For calculating the latent loss, I'm forcing the encoder part of my network to output log variances instead of standard deviations, so the latent loss function looks like:
latent_loss = -0.5 * tf.reduce_sum(1 + log_var_vector - tf.square(mean_vector) - tf.exp(log_var_vector), axis=1)
I found this formulation to be more stable than directly using the logarithms in the KL-divergence formula, since the latter often results in an infinite loss value. I'm applying a sigmoid activation function on the last layer of the decoder, and the generative loss is computed using mean-squared error. The combined loss is simply a sum of both latent and generative losses. I train the network in batches of 40 using the Adam optimizer with a learning rate of 0.001.
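For reference, the same closed-form KL term in plain Python (assuming a diagonal Gaussian posterior; the helper name is mine, not TensorFlow API):

```python
import math

def kl_to_standard_normal(means, log_vars):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ) summed over dimensions,
    i.e. -0.5 * sum(1 + log_var - mu^2 - exp(log_var))."""
    return -0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                      for m, lv in zip(means, log_vars))
```

It is zero exactly when the posterior matches the prior, and positive otherwise.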
The problem is that my network doesn't train. The latent loss immediately drops to zero, and the generative loss doesn't go down. However, when I optimize only for the generative loss, the loss does decrease as expected. Under this setting, the value of the latent loss quickly jumps to very large values (on the order of 10e4 - 10e6).
I have a hunch that the culprit is the extreme mismatch between the magnitudes of both losses. The KL-divergence is unbounded, whereas the mean-squared error always remains <1, so when optimizing for both, the generative loss basically becomes irrelevant.
Any suggestions to solve the problem are welcome.
i find the answer provided in the following link to be helpful https://stats.stackexchange.com/questions/332179/how-to-weight-kld-loss-vs-reconstruction-loss-in-variational-auto-encoder
– pangyuteng – 2018-11-12T21:45:39.317
Hi, in your latent loss function, it should be tf.square(tf.exp(log_var_vector)) isn't it? – momo – 2019-11-21T08:17:44.347
## Answers
I don't like the reduce_sum version of the kl-loss because it depends on the size of your latent vector. My advice is to use the mean instead.
Moreover it is a notorious fact that training a VAE with the kl loss is difficult. You may need to progressively increase the contribution of the kl loss in your total loss. Add a weight w_kl that will control the contribution :
Loss = recons_loss + w_kl * kl_loss
You start with w_kl=0 and progressively increase it every epoch (or batch) to 1. This is a classic trick. Your learning rate seems good; maybe you can try a little lower (4e-4).
If you don't like the tricks, the Wasserstein auto-encoder may be your friend.
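A minimal sketch of that warm-up schedule (framework-agnostic Python; the names are illustrative, not from any library):

```python
def kl_weight(step, warmup_steps):
    """Linearly anneal the KL weight from 0 to 1, then hold it at 1."""
    return min(1.0, step / warmup_steps)

def total_loss(recons_loss, kl_loss, step, warmup_steps):
    """Weighted VAE objective: recons_loss + w_kl * kl_loss."""
    return recons_loss + kl_weight(step, warmup_steps) * kl_loss
```

Early in training the KL term contributes nothing, so the decoder can first learn to reconstruct; the penalty then ramps in gradually.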
+1 for reduce_mean advice! – Amir – 2018-12-08T14:18:16.347
Hi, a confusion. I have this reduce sum version of latent loss with axis = 1 argument, but then it is followed by latent_loss = tf.reduce_mean(latent_loss). Does that solve the problem of vector size dependence? – momo – 2019-11-21T08:22:17.433
2
I think your hunch is right. The generative loss can't improve because any movement the network would make towards reducing it comes with a huge penalty in the form of the latent loss. It looks like you're squashing the generative loss through a sigmoid, maybe try doing the same thing with the latent loss? | 2021-08-02 01:34:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.697404682636261, "perplexity": 1283.929608053132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154302.46/warc/CC-MAIN-20210802012641-20210802042641-00684.warc.gz"} |
http://www.ck12.org/book/CK-12-Algebra-I-Concepts-Honors/r9/section/3.7/ |
# 3.7: Graphs of Basic Quadratic Functions
Difficulty Level: At Grade Created by: CK-12
Look at the graph below. Does the graph represent a function? Do you know the name of the graph? Do you know what makes the green point special? Do you notice any symmetry in the graph? Can you state the domain and range for the relation?
### Guidance
Until now you have been dealing with linear functions. The highest exponent of the independent variable (\begin{align*}x\end{align*}) has been one and the graphs have been straight lines. Here you will be learning about quadratic functions. A quadratic function is one of the form \begin{align*}y=ax^2+bx+c\end{align*} where \begin{align*}a, b\end{align*} and \begin{align*}c\end{align*} are real numbers and \begin{align*}a \ne 0\end{align*}. The highest exponent of the independent variable is two. When graphed, a quadratic function creates a curve called a parabola, which opens either upward or downward.
You can create your own graph by plotting the points created from a table of values. The most basic quadratic function is \begin{align*}y=x^2\end{align*}. The easiest way to make a table for this function is to use the domain \begin{align*}\{x|-3 \le x \le 3, x \in \mathbb{R}\}\end{align*} for the table.
A parabola has a turning point known as the vertex. The vertex is the minimum value of the parabola if it opens upward and the maximum value if the parabola opens downward. When the graph opens downward, the \begin{align*}y\end{align*}-values in the base table change to negative values. The basic quadratic function that opens downward has the equation \begin{align*}y=-x^2\end{align*}.
All parabolas have an axis of symmetry. The axis of symmetry is the vertical line that passes through the vertex of the parabola. The equation for the axis of symmetry is always \begin{align*}x=\end{align*} the \begin{align*}x-\end{align*}coordinate of the vertex.
#### Example A
For the basic quadratic function \begin{align*}y=x^2\end{align*}, complete a table such that \begin{align*}\{x|-3 \le x \le 3, x \in \mathbb{R}\}\end{align*}.
Solution:
To complete the table of values, substitute the given \begin{align*}x\end{align*}-values into the function \begin{align*}y=x^2\end{align*}. If you are using a calculator, insert all numbers, especially negative numbers, inside parenthesis before squaring them. The operation that needs to be done is \begin{align*}(-3)(-3)\end{align*} NOT \begin{align*}-(3)(3)\end{align*}.
\begin{align*}y&=x^2 && y=x^2 && y=x^2 && y=x^2\\ y&=(-3)^2 && y=(-2)^2 && y=(-1)^2 && y=(0)^2\\ y&={\color{red}9} && y={\color{red}4} && y={\color{red}1} && y={\color{red}0}\\ \\ y&=x^2 && y=x^2 && y=x^2\\ y&=(1)^2 && y=(2)^2 && y=(3)^2\\ y&={\color{red}1} && y={\color{red}4} && y={\color{red}9}\\\end{align*}
| \begin{align*}X\end{align*} | \begin{align*}Y\end{align*} |
|---|---|
| -3 | 9 |
| -2 | 4 |
| -1 | 1 |
| 0 | 0 |
| 1 | 1 |
| 2 | 4 |
| 3 | 9 |
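A quick way to double-check a table of values like this one is to generate it programmatically (a Python aside, outside the lesson itself):

```python
# Table of values for y = x^2 over the integer domain -3..3.
points = [(x, x**2) for x in range(-3, 4)]
print(points)  # → [(-3, 9), (-2, 4), (-1, 1), (0, 0), (1, 1), (2, 4), (3, 9)]
```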
#### Example B
On a Cartesian plane, plot the points from the table for \begin{align*}y=x^2\end{align*}.
Solution:
The plotted points cannot be joined to form a straight line. To join the points, begin with the point (-3, 9) or the point (3, 9) and without lifting your pencil, draw a smooth curve. The image should look like the following graph.
The arrows indicate the direction of the pencil as the points are joined. If the pencil is not moved off the paper, the temptation to join the points with a series of straight lines will be decreased. The points must be joined with a smooth curve that does not extend below the lowest point of the graph. In the above graph, the curve cannot go below the point (0, 0).
#### Example C
What are some unique characteristics of the graph of \begin{align*}y=x^2\end{align*}?
Solution:
1. The green point is located at the lowest point on the image. The curve does not go below this point.
2. Every red point on the left side of the image has a corresponding blue point on the right side of the image.
3. If the image was folded left to right along the \begin{align*}y\end{align*}-axis that passes through the green point, each red point would land on each corresponding blue point.
4. The sides of the image extend upward.
5. The red and the blue points are plotted to the right and to the left of the green point. The points are plotted left and right one and up one; left and right two and up four; left and right three and up nine.
#### Concept Problem Revisited
The green point is the lowest point on the curve. The smooth curve is called a parabola and it is the image produced when the basic quadratic function is plotted on a Cartesian grid. The green point is known as the vertex of the parabola. The vertex is the turning point of the graph.
For the graph of \begin{align*}y=x^2\end{align*}, the vertex is (0, 0) and the parabola has a minimum value of zero which is indicated by the \begin{align*}y\end{align*}-value of the vertex. The parabola opens upward since the \begin{align*}y\end{align*}-values in the table of values are 0, 1, 4 and 9. The \begin{align*}y\end{align*}-axis for this graph is actually the axis of symmetry. The axis of symmetry is the vertical line that passes through the vertex of the parabola. The parabola is symmetrical about this line. The equation for this axis of symmetry is \begin{align*}x = 0\end{align*}. If the parabola were to open downward, the vertex would be the highest point of the graph. Therefore the image would have a maximum value of zero.
The domain for all parabolas is \begin{align*}D=\{x|x \in \mathbb{R}\}\end{align*}. The range for the above parabola is \begin{align*}R=\{y|y \ge 0, y \in \mathbb{R}\}\end{align*}.
### Vocabulary
Axis of Symmetry
The axis of symmetry of a parabola is a vertical line that passes through the vertex of the parabola. The parabola is symmetrical about this line. The axis of symmetry has the equation \begin{align*}x =\end{align*} the \begin{align*}x-\end{align*}coordinate of the vertex.
Parabola
A parabola is the smooth curve that results from graphing a quadratic function of the form \begin{align*}y=ax^2+bx+c\end{align*}. The curve resembles a U-shape.
Quadratic Function
A quadratic function is a function of the form \begin{align*}y=ax^2+bx+c\end{align*} where \begin{align*}a, b\end{align*} and \begin{align*}c\end{align*} are real numbers and \begin{align*}a \ne 0\end{align*}.
Vertex
The vertex of a parabola is the point around which the parabola turns. The vertex is the maximum point of a parabola that opens downward and the minimum point of a parabola that opens upward.
### Guided Practice
1. If the graph of \begin{align*}y=x^2\end{align*} opens downward, what changes would exist in the base table of values?
2. If the graph of \begin{align*}y=x^2\end{align*} opens downward, what changes would exist in the basic quadratic function?
3. Draw the image of the basic quadratic function that opens downward. State the domain and range for this parabola.
1. If the parabola were to open downward, the \begin{align*}x\end{align*}-values would not change. The \begin{align*}y\end{align*}-values would become negative values. The points would be plotted from the vertex as: right and left one and down one; right and left two and down four; right and left three and down nine. The table of values would be
| \begin{align*}X\end{align*} | \begin{align*}Y\end{align*} |
|---|---|
| -3 | -9 |
| -2 | -4 |
| -1 | -1 |
| 0 | 0 |
| 1 | -1 |
| 2 | -4 |
| 3 | -9 |
2. To match the table of values, the basic quadratic function would have to be written as \begin{align*}y=-x^2.\end{align*}
3.
The domain is \begin{align*}D=\{x|x \in \mathbb{R}\}\end{align*}. The range for this parabola is \begin{align*}R=\{y|y \le 0, y \in \mathbb{R}\}\end{align*}.
### Practice
Complete the following statements in the space provided.
1. The name given to the graph of \begin{align*}y=x^2\end{align*} is ____________________.
2. The domain of the graph of \begin{align*}y=x^2\end{align*} is ____________________.
3. If the vertex of a parabola was (-3, 5), the equation of the axis of symmetry would be ____________________.
4. A parabola has a maximum value when it opens ____________________.
5. The point (-2, 4) on the graph of \begin{align*}y=x^2\end{align*} has a corresponding point at ____________________.
6. The range of the graph of \begin{align*}y=-x^2\end{align*} is ____________________.
7. If the table of values for the basic quadratic function included 4 and -4 as \begin{align*}x\end{align*}-values, the \begin{align*}y\end{align*}-value(s) would be ____________________.
8. The vertical line that passes through the vertex of a parabola is called ____________________.
9. A minimum value exists when a parabola opens ____________________.
10. The turning point of the graph of \begin{align*}y=x^2\end{align*} is called the ____________________.
Date Created:
Dec 19, 2012
Apr 29, 2014
| 2016-06-26 19:20:29 | {"extraction_info": {"found_math": true, "script_math_tex": 81, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.566359281539917, "perplexity": 791.7451839447845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00173-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.nag.com/numeric/py/nagdoc_latest/naginterfaces.library.tsa.inhom_ma.html | # naginterfaces.library.tsa.inhom_ma
naginterfaces.library.tsa.inhom_ma(ma, t, tau, m1, m2, inter, ftype, p, sinit=None, pn=0, comm=None)[source]
inhom_ma provides a moving average, moving norm, moving variance and moving standard deviation operator for an inhomogeneous time series.
For full information please refer to the NAG Library document for g13mg
https://www.nag.com/numeric/nl/nagdoc_28.6/flhtml/g13/g13mgf.html
Parameters
mafloat, array-like, shape
, the current block of observations, for , where is the number of observations processed so far, i.e., the value supplied in on entry.
tfloat, array-like, shape
, the times for the current block of observations, for , where is the number of observations processed so far, i.e., the value supplied in on entry.
If , = 31 will be returned, but inhom_ma will continue as if was strictly increasing by using the absolute value.
The lagged difference, must be sufficiently small that , can be calculated without overflowing, for all .
taufloat
, the parameter controlling the rate of decay. must be sufficiently large that , can be calculated without overflowing, for all , where .
m1int
, the iteration of the EMA operator at which the sum is started.
m2int
, the iteration of the EMA operator at which the sum is ended.
interint, array-like, shape
The type of interpolation used with indicating the interpolation method to use when calculating and the interpolation method to use when calculating , .
Three types of interpolation are possible:
Previous point, with .
Linear, with .
Next point, .
Zumbach and Müller (2001) recommend that linear interpolation is used in second and subsequent iterations, i.e., , irrespective of the interpolation method used at the first iteration, i.e., the value of .
ftypeint
The function type used to define the relationship between and when calculating . Three functions are provided:
The identity function, with .
or
The absolute value, with .
or
The absolute difference, with .
If or then the resulting vector of averages is scaled by as described in .
pfloat
, the power used in the transformation function.
sinitNone or float, array-like, shape , optional
Note: the required length for this argument is determined as follows: if : ; otherwise: .
If , the values used to start the iterative process, with
,
,
, for .
,
, for .
i.e., initial values based on the original data as opposed to the transformed data .
If , is not referenced.
pnint, optional
, the number of observations processed so far. On the first call to inhom_ma, or when starting to summarise a new dataset, must be set to . On subsequent calls it must be the same value as returned by the last call to inhom_ma.
commNone or dict, communication object, optional, modified in place
Communication structure.
On initial entry: need not be set.
Returns
mafloat, ndarray, shape
The moving average:
if or
,
otherwise
.
pfloat
If , then , the actual power used in the transformation function is returned, otherwise is unchanged.
pnint
, the updated number of observations processed so far.
wmafloat, ndarray, shape
Either the moving average or exponential moving average, depending on the value of .
if or
otherwise
.
Raises
NagValueError
(errno )
On entry, .
Constraint: .
(errno )
On entry, , and .
Constraint: if linear interpolation is being used.
(errno )
On entry, .
Constraint: .
(errno )
On entry, .
On entry at previous call, .
Constraint: if then must be unchanged since previous call.
(errno )
On entry, .
Constraint: .
(errno )
On entry, .
On entry at previous call, .
Constraint: if then must be unchanged since previous call.
(errno )
On entry, and .
Constraint: .
(errno )
On entry, .
On entry at previous call, .
Constraint: if then must be unchanged since previous call.
(errno )
On entry, , and .
Constraint: if , , for .
(errno )
On entry, .
Constraint: , or .
(errno )
On entry, .
Constraint: , or .
(errno )
On entry, and .
On entry at previous call, , .
Constraint: if , must be unchanged since the last call.
(errno )
On entry, .
Constraint: , , , or .
(errno )
On entry, , On entry at previous call, .
Constraint: if , must be unchanged since the previous call.
(errno )
On entry, .
Constraint: absolute value of must be representable as an integer.
(errno )
On entry, .
Constraint: if , . If , the nearest integer to must not be .
(errno )
On entry, , and .
Constraint: if , or and for any then .
(errno )
On entry, , , and .
Constraint: if , , for any .
(errno )
On entry, .
On exit from previous call, .
Constraint: if then must be unchanged since previous call.
(errno )
On entry, .
Constraint: .
(errno )
On entry, .
On exit from previous call, .
Constraint: if then must be unchanged since previous call.
(errno )
[‘rcomm’] has been corrupted between calls.
(errno )
On entry, , and .
Constraint: if , or .
(errno )
On entry, , and .
Constraint: if , .
Warns
NagAlgorithmicWarning
(errno )
On entry, , and .
Constraint: should be strictly increasing.
(errno )
Truncation occurred to avoid overflow, check for extreme values in , or for .
Notes
inhom_ma provides a number of operators for an inhomogeneous time series. The time series is represented by two vectors of length ; a vector of times, ; and a vector of values, . Each element of the time series is, therefore, composed of the pair of scalar values , for . Time can be measured in any arbitrary units, as long as all elements of use the same units.
The main operator available, the moving average (MA), with parameter is defined as
where , and are user-supplied integers controlling the amount of lag and smoothing respectively, with and is the iterated exponential moving average operator.
The iterated exponential moving average, , is defined using the recursive formula:
with
and
where
The value of depends on the method of interpolation chosen and the relationship between and the input series depends on the transformation function chosen. inhom_ma gives the option of three interpolation methods:
- Previous point: ν = 1.
- Linear: ν = (1 − μ)/α.
- Next point: ν = μ.
and three transformation functions:
- Identity: $y_i = z_i^{[p]}$.
- Absolute value: $y_i = |z_i|^p$.
- Absolute difference: $y_i = |z_i - \mathrm{MA}[\tau, m_1, m_2; z](t_i)|^p$.
where the notation is used to denote the integer nearest to . In addition, if either the absolute value or absolute difference transformation are used then the resulting moving average can be scaled by .
The various parameter options allow a number of different operators to be applied by inhom_ma, a few of which are:
1. Moving Average (MA), as defined in [equation] (obtained by setting and ).
2. Moving Norm (MNorm), defined as
(obtained by setting , and ).
3. Moving Variance (MVar), defined as
(obtained by setting , and ).
4. Moving Standard Deviation (MSD), defined as
(obtained by setting , and ).
For large datasets or where all the data is not available at the same time, and can be split into arbitrary sized blocks and inhom_ma called multiple times.
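As a hedged sketch of the iterated EMA recursion (following the operator form in Zumbach and Müller (2001); this is an illustration with the linear-interpolation weight, not the NAG Library implementation):

```python
import math

# One update step of an inhomogeneous EMA in the Zumbach-Mueller form.
# dt is the lagged time difference t_n - t_{n-1}; tau controls decay.
def ema_step(ema_prev, z_prev, z_curr, dt, tau):
    alpha = dt / tau
    mu = math.exp(-alpha)
    nu = (1.0 - mu) / alpha          # "linear" interpolation weight
    # The three weights mu, (nu - mu), (1 - nu) sum to 1, so a constant
    # series is left unchanged.
    return mu * ema_prev + (nu - mu) * z_prev + (1.0 - nu) * z_curr
```

Iterating this operator m1 through m2 times and averaging the iterates gives the moving-average operator described above.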
References
Dacorogna, M M, Gencay, R, Müller, U, Olsen, R B and Pictet, O V, 2001, An Introduction to High-frequency Finance, Academic Press
Zumbach, G O and Müller, U A, 2001, Operators on inhomogeneous time series, International Journal of Theoretical and Applied Finance (4(1)), 147–178 | 2022-12-03 02:18:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9252578020095825, "perplexity": 4102.367554075096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710918.58/warc/CC-MAIN-20221203011523-20221203041523-00246.warc.gz"} |
http://aliceinfo.cern.ch/ArtSubmission/node/140 | # Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
Submission Date:
18/06/2013
Article Information
Submission Form
System:
Pb-Pb
Energy:
2.76 TeV
Abstract Plain Text:
The directed flow of charged particles at mid-rapidity was measured in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV relative to the collision plane defined by the spectator nucleons. The observed negative slope of the rapidity-odd directed flow component, with a magnitude about three times smaller than at the highest RHIC energy, suggests a smaller longitudinal tilt of the initial system and disfavors a picture of strong fireball rotation predicted at LHC energies. Measured for the first time with spectators, the rapidity-even directed flow component is found to be independent of pseudorapidity and to change sign at transverse momenta around 1.2-1.7 GeV/c. Combined with the observation of a vanishing rapidity-even transverse momentum shift along the spectator deflection, this provides strong evidence for dipole-like initial density fluctuations in the overlap zone of the nuclei. Similar trends, and a factor of forty smaller magnitude, of the rapidity-even directed flow measured relative to the spectator plane compared with that previously estimated from correlations of two particles emitted at mid-rapidity indicate a weak correlation between the fluctuating participant and spectator collision symmetry planes. These observations open a new direction for experimental probes of the initial conditions in heavy-ion collisions with spectator nucleons.
https://koasas.kaist.ac.kr/handle/10203/241892 | #### Engineering of expression vector for efficient protein production in Leuconostoc citreum
Lactic acid bacteria (LAB) are a group of Gram-positive bacteria that produce lactic acid as the final end product of fermentation. Traditionally, LAB have long been used in the production of fermented dairy, meat and vegetable products as well as in wine and sourdough production. In addition, some species of LAB can produce health-related molecules such as antimicrobial peptides or bacteriocins that are used as biopreservative agents in foods. Recently, LAB have attracted attention as a promising microbial cell factory and as mucosal delivery vehicles. Leuconostoc, a genus belonging to the LAB, is a non-sporulating, low-G+C-content, heterofermentative bacterium. Leuconostoc also plays an important role in the fermented food industry and has a broad spectrum of products such as lactic acid, alcohol, and aromatic compounds. However, despite its importance in the food and biotechnology industries, there has been little effort to develop genetic tools for engineering of the bacteria. In this study, I tried to engineer the expression vector system for enhancement of gene expression in L. citreum. For this purpose, I introduced a bicistronic design (BCD) expression system into L. citreum. After expression of the target gene was observed in this expression system, the Shine-Dalgarno (SD) sequence in the plasmid was engineered. An SD2 library was constructed using super-folder green fluorescent protein (sfGFP) as a reporter, and highly fluorescent clones were isolated by FACS screening. The improvement of gene expression by the isolated strong SD2 was demonstrated with three recombinant proteins. Next, a synthetic promoter library derived from the $P_{710}$ promoter was constructed in the engineered plasmid with the strong SD2. The synthetic promoter library driving sfGFP was screened by FACS. As a result, I could successfully isolate a strong promoter and verify it with two recombinant proteins.
The expression system engineered with the promoter and SD2 showed about 1.6 times higher $\alpha$-amylase productivity than the previous expression system. Therefore, the expression vector system developed in this study will be useful for engineering of L. citreum in the future.
Jeong, Ki Jun (정기준), researcher
Description
Korea Advanced Institute of Science and Technology (KAIST): Department of Chemical and Biomolecular Engineering
Publisher
Korea Advanced Institute of Science and Technology (KAIST)
Issue Date
2017
Identifier
325007
Language
eng
Description
Doctoral thesis - Korea Advanced Institute of Science and Technology (KAIST), Department of Chemical and Biomolecular Engineering, 2017.8, [ix, 96 p.]
Keywords
Lactic acid bacteria; Leuconostoc citreum; engineering of Shine-Dalgarno sequence; synthetic promoter library; bicistronic expression system
URI
http://hdl.handle.net/10203/241892
http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=718841&flag=dissertation
Appears in Collection
CBE-Theses_Ph.D.(박사논문)
Files in This Item
There are no files associated with this item. | 2020-05-28 23:02:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37463751435279846, "perplexity": 12594.223442867347}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347400101.39/warc/CC-MAIN-20200528201823-20200528231823-00286.warc.gz"} |
https://forum.dynare.org/t/rbc-model-with-monopolistic-competition/14191 | # RBC model with monopolistic competition
Hello everyone,
I have a question about the RBC model with monopolistic competition. All the textbook treatments of the RBC model I have seen assume perfectly competitive markets, where the general price level is normalized to 1.
With monopolistic competition, however, you have one more variable, marginal cost (MC), so what additional first-order condition should I add to the model?
I guessed that the condition should be p = markup × MC, but it seems incorrect. If the price p is normalized to 1, then MC would be constant over time. But this is weird, since a positive TFP shock should drive down the MC.
Thanks!
Hi ttc,
The condition you probably have in mind is actually the pricing rule of the form
P_{it}=\frac{\epsilon}{\epsilon-1}MC_{t}P_{t}
Dividing through by P_{t} yields
\frac{P_{it}}{P_t}=\frac{\epsilon}{\epsilon-1}MC_{t}
In the absence of nominal rigidities, all firms adjust their price to the same optimal price, so price dispersion is always zero, i.e. \frac{P_{it}}{P_t}=1 in all periods. It does not prevent the marginal cost from falling in response to a positive TFP shock.
1 Like | 2019-08-23 07:15:11 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8117936253547668, "perplexity": 1816.0812948028433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318011.89/warc/CC-MAIN-20190823062005-20190823084005-00496.warc.gz"} |
http://tasks.illustrativemathematics.org/blueprints/A1/2/7 | # Summative assessment
Assess students’ ability to
• create and solve linear equations and inequalities in one variable (A-REI.B.3);
• solve systems of linear equations exactly by algebraic methods (A-REI.C.6);
• model relationships between quantities and compare different relationships (A-CED.A.2$^\star$, A-REI.C.6);
• graph the solution set to a linear inequality in two variables as a half-plane, and to a system of linear inequalities in two variables as the intersection of the corresponding half-planes (A-REI.D.12). | 2020-08-14 06:24:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3997497856616974, "perplexity": 1026.5862441407305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739177.25/warc/CC-MAIN-20200814040920-20200814070920-00478.warc.gz"} |
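For instance, the exact algebraic solution of linear systems targeted by A-REI.C.6 can be sketched programmatically (an illustrative aside, not part of the blueprint; `solve_2x2` is a hypothetical helper):

```python
from fractions import Fraction

# Solve ax + by = e, cx + dy = f exactly via Cramer's rule, using
# Fraction so the answer stays an exact rational, not a float.
def solve_2x2(a, b, e, c, d, f):
    a, b, c, d, e, f = map(Fraction, (a, b, c, d, e, f))
    det = a * d - b * c
    if det == 0:
        raise ValueError("no unique solution")
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# x + y = 5, x - y = 1  →  x = 3, y = 2
print(solve_2x2(1, 1, 5, 1, -1, 1))  # → (Fraction(3, 1), Fraction(2, 1))
```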
http://mimptattoo.com/restaurants-open-rhd/q-test-table-ad9808 | The following table provides critical values for Q(α, n), where α is the probability of incorrectly rejecting the suspected outlier and n is the number of samples in the data set. All columns or specific columns can be selected. IQ scores are predominantly used for educational placement, assessment of intellectual disability, and for evaluating job applicants; the Intelligence Quotient (IQ) chart is used to determine the IQ percentile range of a score. Example: perform a Q-test on the data set from the table on the previous page and determine whether you can statistically designate data point #5 as an outlier within a 95% CL. This section will calculate the .05 and .01 critical values for the Studentized range statistic Q. If the Q statistic is greater than the Q critical value, the suspect point may be rejected. The Q-Sweat, provided by WR Medical Electronics, brings sudomotor testing to your clinic. What is an IQ test?
Calculate Q: with 10 observations and at 90% confidence, Q = 0.455 > 0.412 = Qtable, so we conclude 0.167 is indeed an outlier. It is a unitary test that can be used for an array of different applications. A difference between sample means as large as or larger than the HSD you calculate using the table value of Q is significant at the selected level of significance. The outcome of each task is a dichotomous value, success or failure. Anal. Chem., 63 (2), 139–146. Over 2 million people have taken this test since Jan 2014. Yule's Q is based on the odds ratio and is a symmetric measure taking on values between -1 and +1. The t distribution is the distribution of any random variable 't'. The values given here are for Q10, where $Q_\ce{exp} = Q_{10} = \mathrm{\dfrac{|\textrm{outlier's value} - nearest\: value|}{largest\: value - smallest\: value}}$. The critical value of Q for the HSD test is found at the intersection of the row and column you have identified. This is not a normalized IQ test, but it will give you a good idea of how you may score on an official IQ test (typically only 10-45 minutes needed, but take your time). Numbers in the second column are significance levels (alpha); blue numbers in the top row refer to the number of groups. The LibreTexts libraries are Powered by MindTouch® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. At the 90% confidence interval, Q for N replicate measurements is tabulated below. A key component in determining the severity and pattern of autonomic disorders is the study of a patient's sudomotor response.
This assumes a normal distribution and, per Robert Dean, Wilfrid Dixon, and others, this test should be used sparingly and never more than once in a data set. Rorabacher (1991), "Statistical Treatment for Rejection of Deviant Values: Critical Values of Dixon Q Parameter and Related Subrange Ratios at the 95 percent Confidence Level". Cochran's Q test is used to determine if there are differences on a dichotomous dependent variable between three or more related groups; it is an extension to the McNemar test for related samples that provides a method for testing for differences between three or more matched sets of frequencies or proportions. Halpern, Arthur M., "Experimental physical chemistry: a laboratory textbook." To apply a Q test for bad data, arrange the data in order of increasing values and calculate Q as defined earlier, where gap is the absolute difference between the outlier in question and the closest number to it, R is the range of all data points, xa is the suspected outlier, and xb is the data point closest to xa. A binary response (0 or 1) is recorded from each category within each subject. There are several versions of Dixon's Q-Test, each of which calculates a value for Qij, where i is the number of suspected outliers on one end of the data set and j is the number of suspected outliers on the opposite end of the data set. New York : W. H.
Freeman, c2006. Article (PDF) and Software (Fortran-90, Zipfile); https://en.wikipedia.org/w/index.php?title=Dixon%27s_Q_test&oldid=920005331; Creative Commons Attribution-ShareAlike License. In statistics, Dixon's Q test, or simply the Q test, is used for identification and rejection of outliers. Appendix H: Q Distribution Table, (p) = 0.05, by k. A truth table is used to perform logical operations in maths. In the latter event, we should reject the data. This might be the best IQ test ever! Table of Q critical values (90% confidence): N = 3: Qc = 0.94; N = 4: 0.76; N = 5: 0.64; N = 6: 0.56; N = 7: 0.51; N = 8: 0.47; N = 9: 0.44; N = 10: 0.41. Our original IQ test is the most scientifically valid free IQ test available online today. Column comparisons test columns against one another; see also: Interpreting Column Comparisons and How to Specify Columns to be Compared. Cochran's Q test using SPSS Statistics: Introduction. 4 (Dec., 1950), pp. ANOVA, Tukey's HSD Test Application: one-way ANOVA, pair-wise comparison of means. SPSS Cochran Q-Test, by Ruben Geert van den Berg, under Nonparametric Tests & Statistics A-Z. Given below is the t table for you to refer to the one- and two-tailed t distribution with ease. This is the short version of the dimension test and it lasts only 8 minutes. If you create a new table using an existing table, the new table will be filled with the existing values from the old table. At the 90% confidence interval, Q for N replicate measurements is given in the table above. The Annals of Mathematical Statistics. You can enter logical operators in several different formats.
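Cochran's Q statistic mentioned above is simple enough to compute by hand. A minimal sketch follows (the formula is the standard one in terms of row and column totals; the 4-subject, 3-treatment 0/1 data set is invented for illustration):

```python
def cochran_q(rows):
    """Cochran's Q for a subjects-by-treatments table of 0/1 outcomes.

    Q = (k - 1) * (k * sum(Cj^2) - N^2) / (k * N - sum(Ri^2)),
    where Cj are the column (treatment) totals, Ri the row (subject)
    totals, N the grand total, and k the number of treatments.
    """
    k = len(rows[0])
    col_totals = [sum(r[j] for r in rows) for j in range(k)]
    row_totals = [sum(r) for r in rows]
    n = sum(row_totals)
    num = (k - 1) * (k * sum(c * c for c in col_totals) - n * n)
    den = k * n - sum(r * r for r in row_totals)
    return num / den

# Invented example: 4 subjects each attempt 3 tasks (1 = success, 0 = failure).
data = [(1, 1, 0), (1, 0, 0), (1, 1, 1), (0, 0, 0)]
print(cochran_q(data))  # 3.0
```

Under the null hypothesis of equal success proportions, Q is approximately chi-squared distributed with k - 1 degrees of freedom.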
The Studentized range upper quantiles q(k, df; 0.10) are tabulated by df (rows) and the number of groups k (columns, k = 2..20); the row for df = 1 begins 8.929, 13.437, 16.358, 18.488, 20.150, 21.504, 22.642. The TABE test is a test used to evaluate an individual's skill levels and aptitudes. Only one data point in a set may be rejected using the Q-test. It is a systematic assessment system that sets out to provide very flexible psychometric assessments. qTest is a test management tool used for project management, bug tracking, and test management; it follows the centralized test management concept that helps to communicate easily and assists in rapid development of tasks across the QA team and other stakeholders. We hypothesize that 0.167 is an outlier. For a sample size of 7 and an alpha level of 5%, the critical value is 0.568. The following table provides critical values for $$Q(\alpha, n)$$, where $$\alpha$$ is the probability of incorrectly rejecting the suspected outlier and $$n$$ is the number of samples in the data set. Talent Q is a leading psychometric test developer that helps employers to find the best candidates, from graduates to senior leaders. Based on the details provided by the following Hive execution, give the HiveQL statements to get the students whose scores for the test "CA675" are less than 40, in descending order. Table B.5, the studentized range statistic (q): the critical values for q corresponding to … A QTest test suite is a perl script whose name ends with the extension .test; a template for a QTest test suite may be found in the file template in the misc directory of the QTest source or installation directory. Arthur M. Halpern, George C. McBane. Robert B.
SPSS Cochran Q test is a procedure for testing if the proportions of 3 or more dichotomous variables are equal in some population. A copy of an existing table can also be created using CREATE TABLE. Rorabacher, D. B., Dixon's Q-test: Detection of a Single Outlier. Now, we look up the critical value for n = 5 at a confidence level of 95% in the Q-table. The F distribution is a right-skewed distribution used most commonly in Analysis of Variance. The current scoring method for all IQ tests is the "deviation IQ". The following table provides critical values for Q(α, n), where α is the probability of incorrectly rejecting the suspected outlier and n is the number of samples in the data set. Appendix 06: Critical Values for Dixon's Q-Test; Appendix 05: Critical Values for the F-Test; Appendix 07: Critical Values for Grubb's Test. For more information contact us at info@libretexts.org, or see the status page at https://status.libretexts.org. Note that the value of k must be between 3 and 10, inclusive. The WR TestWorks Q-Sweat System provides sensitive, reproducible, and non-invasive measurements of sweat rate and volume. Truth Table Generator: this tool generates truth tables for propositional logic formulas. The chi-squared table is indexed by P (0.995, 0.975, ..., 0.002, 0.001) and DF; for DF = 1 the first two entries are 0.0000393 and 0.000982. This calculator performs Grubbs' test, also called the ESD method (extreme studentized deviate), to determine whether one of the values in the list you enter is a significant outlier from the rest. For example, the propositional formula p ∧ q → ¬r could be written as p /\ q -> ~r, as p and q => not r, or as p && q -> !r. For additional information consult Rorabacher, D. B.
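The operator notations just listed are easy to check mechanically. As a small illustration (a hypothetical sketch, not the generator tool itself), a few lines of Python that tabulate the example formula p ∧ q → ¬r:

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

print("p q r | (p and q) -> not r")
for p, q, r in product([False, True], repeat=3):
    value = implies(p and q, not r)
    print(int(p), int(q), int(r), "|", int(value))
# Only the row p=1, q=1, r=1 makes the formula false.
```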
Note that only one point may be rejected from a data set using a Q test. Table of Q-function values: to find Q(1.365), look under column x to find 1.35, then proceed along that row to the column under 0.015 and read off the value as 8.613 x 10^-2. I would like this to be the ultimate discussion on how to check if a table exists in SQL Server 2000/2005 using SQL statements. Use to identify statistical outliers in data. Anal. Chem., 1951, 23 (4), 636–638. Critical values of the studentized range distribution (Q) are commonly used in Tukey's range test. Positive antibody test (according to diagnostic laboratory criteria). We want to test the null hypothesis of zero autocorrelation in the residuals against the alternative that the residuals are positively autocorrelated at the 1% level of significance. In two-by-two tables, Yule's Q is equal to Goodman and Kruskal's Gamma. The outcome of each task is a dichotomous value, success or failure. IQ classification is the practice by IQ test publishers of labeling IQ score ranges with category names such as "superior" or "average". Qcalc = gap / range = |suspect − nearest| / (largest − smallest); Qtab is looked up in a table and compared with Qcalc. If the point is rejected, recalculate the mean, standard deviation and the 95% CL. A difference between sample means as large as or larger than the HSD you calculate using the table value of Q is significant at the selected level of significance. 3rd ed. Critical Values of Q Calculator. Statistical tables: values of the Chi-squared distribution. It can be considered to be similar to the one-way repeated measures ANOVA, but for a dichotomous rather than a continuous dependent variable, or as an extension of McNemar's test.
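For the two-by-two case just mentioned, Yule's Q comes straight from the odds ratio. A short sketch (the 2x2 cell counts a, b, c, d below are invented):

```python
def yule_q(a, b, c, d):
    """Yule's Q for a 2x2 table [[a, b], [c, d]]: (ad - bc) / (ad + bc).

    Equivalent to (OR - 1) / (OR + 1), where OR = (a*d) / (b*c) is the
    odds ratio, so Q lies between -1 and +1 and equals 0 when OR = 1.
    """
    return (a * d - b * c) / (a * d + b * c)

# Invented counts: a fairly strong positive association.
print(yule_q(30, 5, 15, 10))  # 0.6
```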
The Q-test is a simple statistical test to determine if a data point that appears to be very different from the rest of the data points in a set may be discarded. R is the range of all data points, xa is the suspected outlier, and xb is the data point closest to xa. Example: 12 subjects are asked to perform 3 tasks. For more information contact us at info@libretexts.org or check out our status page at https://status.libretexts.org. The Q-Sweat, provided by WR Medical Electronics, brings sudomotor testing to your clinic, and the WR TestWorks Q-Sweat System provides sensitive, reproducible, and non-invasive measurements of sweat rate and volume. Appendix 07: Critical Values for Grubb's Test. The following table provides critical values for G(α, n), where α is the probability of incorrectly rejecting the suspected outlier and n is the number of samples in the data set.
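Pulling the scattered Q-test pieces on this page together (sort the data, take gap over range, compare against Qc), the procedure can be sketched in Python; the 90% confidence critical values are the Qc values quoted on this page, and the replicate data set is invented:

```python
# Q critical values at 90% confidence for n = 3..10, as quoted on this page.
Q_CRIT_90 = {3: 0.94, 4: 0.76, 5: 0.64, 6: 0.56, 7: 0.51, 8: 0.47, 9: 0.44, 10: 0.41}

def dixon_q(data):
    """Return (Q, suspect) for the more extreme end: Q = gap / range."""
    xs = sorted(data)
    rng = xs[-1] - xs[0]
    q_low = (xs[1] - xs[0]) / rng      # gap if the smallest value is the suspect
    q_high = (xs[-1] - xs[-2]) / rng   # gap if the largest value is the suspect
    return (q_low, xs[0]) if q_low >= q_high else (q_high, xs[-1])

def is_outlier(data, crit=Q_CRIT_90):
    q, suspect = dixon_q(data)
    return q > crit[len(data)], q, suspect

# Invented replicate measurements; 0.167 sits well below the cluster.
data = [0.167, 0.188, 0.190, 0.189, 0.191, 0.188, 0.193, 0.190, 0.189, 0.192]
rejected, q, suspect = is_outlier(data)
print(suspect, round(q, 3), rejected)
```

Remember the caveat repeated throughout this page: apply the test at most once per data set, rejecting at most one point.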
Dean, R. B.; Dixon, W. J. (1951), "Simplified Statistics for Small Numbers of Observations". Cochran's Q test is used where a binary response (e.g. 0 or 1) is recorded from each category within each subject; the repeated measurements are called replicates. Talent Q was established in the year 2006 by Roger Holdsworth and his team. The IQ score table provides the category type for each score range.
Unless otherwise noted, LibreTexts content is licensed under CC BY-NC-SA 3.0. | 2021-03-04 18:19:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5136798620223999, "perplexity": 1699.2896861656093}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": 
"s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369512.68/warc/CC-MAIN-20210304174506-20210304204506-00122.warc.gz"} |
https://www.bergert.com/categories/networking/ | # Testing egress network access
One of the ways one can test outbound or egress firewall rules is a TCP connection test.
I recently discovered portquiz.net. Port Quiz is an outgoing port tester and gives connection examples for various clients: telnet, PowerShell, nc (netcat), curl, wget, etc. I'll use PowerShell in my example below.
Why would you need to use this? A recent example: we had some SIP phone users with connection issues, and the first step was to see if tcp/5060 (SIP) was accessible from their soft-phones. Was it blocked by their ISP, or was it some other issue?
Portquiz to the rescue:
In this example, SIP turned out to be blocked from the user's machine, so they needed to check with their ISP for further troubleshooting.
Portquiz is a handy internet facing website that listens on all ports and allows for egress network testing.
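The post's own check used PowerShell; as a rough, hypothetical equivalent, the same outbound test against portquiz.net can be sketched in Python (the port list is illustrative; tcp/5060 is the SIP case discussed above):

```python
import socket

def check_port(host, port, timeout=3.0):
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # portquiz.net listens on (nearly) every TCP port, so a failure here
    # points at an egress filter somewhere between you and the internet.
    for port in (80, 443, 5060):
        state = "open" if check_port("portquiz.net", port, timeout=2.0) else "blocked"
        print(f"tcp/{port}: {state}")
```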
# localtunnel - quickly expose a local webserver to the world
localtunnel is a pretty neat tool. It solves the problem of quickly exposing a local web server to the internet without deploying to a "test server" or messing with a router/firewall and NAT/port forwarding. It works like this, assuming you have a webserver listening on port 8080:

    dbergert$ localtunnel -k ~/.ssh/id_rsa.pub 8080
    This localtunnel service is brought to you by Twilio.
    Port 8080 is now publicly accessible from http://52rq.localtunnel.com ...

Now browse to the URL and there is your publicly exposed webserver for that quick show or test. More on localtunnel on its github page: https://github.com/progrium/localtunnel
# SSL / VPN / Direct Connection connecivity options
I came across this exchange discussing connectivity when reviewing some specifications for an interface that we are writing:
“Since both companies will utilize web services for the exchange of information, it is proposed that we use SSL instead of a VPN or Direct connection. SSL (https over port 443) provides security by encrypting the communications channel. This arrangement provides all the security of a VPN or Direct connection. Plus it requires less network configuration, less maintenance, greater flexibility (in case platforms move on either end) and eliminates a VPN or direct connection as a potential point of failure.”
I have a lot of problems with this.
1) Encryption isn’t security.
2) I find it hard to dispute that: Direct Connection > VPN > SSL over internet from a general security perspective.
3) SSL used in this manner lacks authentication, compared to an IPsec point-to-point VPN (AH/ESP).
4) Exposing a web server to the internet introduces the risk of web server vulnerabilities, application-layer vulnerabilities, and, among others, ever more recent SSL vulnerabilities[1]. (Note that source-based ACLs are not recommended here either, nor are client-side certificates for authentication.)
5) The concept of "least privilege" from a networking perspective is not followed - only two parties need to talk to each other, so why open it up to the world to attempt to connect? Another interface stated: "We restrict all traffic by third party connections to the least access needed to support business." <– I like this much better.
6) SSL over the internet will require our customer to expose a secure internal system to the internet, when it was designed to have very controlled network access, as compared to a VPN and general firewall rules for network control.
7) I haven’t discussed direct connections or leased lines, mostly due to the nature and volume of this application. Normally this is our first choice for high volume, sensitive transaction data to third parties with multiple data centers. Where we use 2 leased lines on different carriers to different data-centers.
My vote for this? SSL over a VPN (defense in depth). Could SSL alone be used? Sure, but we would need to add a list of controls around its implementation, and quite possibly add a layer of applications (to proxy the requests) to design around this, which is more work and has a higher chance of configuration failure than a standard site-to-site VPN connection. | 2021-09-21 01:45:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2878502309322357, "perplexity": 3979.2465687848817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057131.88/warc/CC-MAIN-20210921011047-20210921041047-00284.warc.gz"} |
https://www.wikiplanet.click/enciclopedia/en/Displacement_current | # Displacement current
In electromagnetism, displacement current density is the quantity ∂D/∂t appearing in Maxwell's equations that is defined in terms of the rate of change of D, the electric displacement field. Displacement current density has the same units as electric current density, and it is a source of the magnetic field just as actual current is. However it is not an electric current of moving charges, but a time-varying electric field. In physical materials (as opposed to vacuum), there is also a contribution from the slight motion of charges bound in atoms, called dielectric polarization.
The idea was conceived by James Clerk Maxwell in his paper On Physical Lines of Force, Part III, in connection with the displacement of electric particles in a dielectric medium. Maxwell added displacement current to the electric current term in Ampère's Circuital Law. In his 1865 paper A Dynamical Theory of the Electromagnetic Field Maxwell used this amended version of Ampère's Circuital Law to derive the electromagnetic wave equation. This derivation is now generally accepted as a historical landmark in physics by virtue of uniting electricity, magnetism and optics into one single unified theory. The displacement current term is now seen as a crucial addition that completed Maxwell's equations and is necessary to explain many phenomena, most particularly the existence of electromagnetic waves.
## Explanation
The electric displacement field is defined as:
${\displaystyle {\boldsymbol {D}}=\varepsilon _{0}{\boldsymbol {E}}+{\boldsymbol {P}}\ .}$
where:
ε0 is the permittivity of free space
E is the electric field intensity
P is the polarization of the medium
Differentiating this equation with respect to time defines the displacement current density, which therefore has two components in a dielectric:[1](see also the "displacement current" section of the article "current density")
${\displaystyle {\boldsymbol {J}}_{\boldsymbol {D}}=\varepsilon _{0}{\frac {\partial {\boldsymbol {E}}}{\partial t}}+{\frac {\partial {\boldsymbol {P}}}{\partial t}}\ .}$
The first term on the right hand side is present in material media and in free space. It doesn't necessarily come from any actual movement of charge, but it does have an associated magnetic field, just as a current does due to charge motion. Some authors apply the name displacement current to the first term by itself.[2]
The second term on the right hand side, called polarization current density, comes from the change in polarization of the individual molecules of the dielectric material. Polarization results when, under the influence of an applied electric field, the charges in molecules have moved from a position of exact cancellation. The positive and negative charges in molecules separate, causing an increase in the state of polarization P. A changing state of polarization corresponds to charge movement and so is equivalent to a current, hence the term "polarization current".
Thus, ${\displaystyle {\boldsymbol {I}}_{\boldsymbol {D}}=\iint _{\mathcal {S}}{\boldsymbol {J}}_{\boldsymbol {D}}\cdot \operatorname {d} \!{\boldsymbol {S}}=\iint _{\mathcal {S}}{\frac {\partial {\boldsymbol {D}}}{\partial t}}\cdot \operatorname {d} \!{\boldsymbol {S}}={\frac {\partial }{\partial t}}\iint _{\mathcal {S}}{\boldsymbol {D}}\cdot \operatorname {d} \!{\boldsymbol {S}}={\frac {\partial \Phi _{D}}{\partial t}}\!}$
This polarization is the displacement current as it was originally conceived by Maxwell. Maxwell made no special treatment of the vacuum, treating it as a material medium. For Maxwell, the effect of P was simply to change the relative permittivity εr in the relation D = εrε0 E.
The modern justification of displacement current is explained below.
### Isotropic dielectric case
In the case of a very simple dielectric material the constitutive relation holds:
${\displaystyle {\boldsymbol {D}}=\varepsilon {\boldsymbol {E}}\ ,}$
where the permittivity ε = ε0 εr and εr is the relative permittivity of the dielectric.
In this equation the use of ε accounts for the polarization of the dielectric.
The scalar value of displacement current may also be expressed in terms of electric flux:
${\displaystyle I_{\mathrm {D} }=\varepsilon {\frac {\partial \Phi _{E}}{\partial t}}.}$
The forms in terms of ε are correct only for linear isotropic materials. More generally ε may be replaced by a tensor, may depend upon the electric field itself, and may exhibit frequency dependence (dispersion).
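As a quick numerical illustration of the scalar relation above (a sketch with invented numbers, assuming an air/vacuum gap so that ε ≈ ε0 and a uniform field between the plates): for a charging parallel-plate capacitor, the displacement current between the plates, I_D = ε0 A dE/dt, equals the conduction current C dV/dt flowing in the wires.

```python
# Invented parallel-plate capacitor with an air/vacuum gap (so eps ~ eps0).
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

area = 1.0e-2             # plate area A, m^2
gap = 1.0e-3              # plate separation d, m
dV_dt = 1.0e3             # rate of change of the plate voltage, V/s

# Conduction current charging the plates: I = C dV/dt with C = eps0 * A / d.
capacitance = EPS0 * area / gap
i_conduction = capacitance * dV_dt

# Displacement current between the plates: I_D = eps0 * A * dE/dt,
# with a uniform field E = V/d, so dE/dt = (dV/dt) / d.
dE_dt = dV_dt / gap
i_displacement = EPS0 * area * dE_dt

print(i_conduction, i_displacement)  # equal (up to rounding)
```

This equality is exactly why Maxwell's extra term keeps the total (conduction plus displacement) current continuous around the circuit.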
For a linear isotropic dielectric, the polarization P is given by:
${\displaystyle {\boldsymbol {P}}=\varepsilon _{0}\chi _{e}{\boldsymbol {E}}=\varepsilon _{0}(\varepsilon _{r}-1){\boldsymbol {E}}}$
where χe is known as the electric susceptibility of the dielectric. Note that:
${\displaystyle \varepsilon =\varepsilon _{r}\varepsilon _{0}=(1+\chi _{e})\varepsilon _{0}.}$
| 2018-12-16 13:47:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7232609391212463, "perplexity": 405.64860895920873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827727.65/warc/CC-MAIN-20181216121406-20181216143406-00510.warc.gz"} |
http://bioconda.github.io/recipes/r-psych/README.html | # r-psych
A general purpose toolbox for personality, psychometrics and experimental psychology. Functions are primarily for multivariate analysis and scale construction using factor analysis, principal component analysis, cluster analysis and reliability analysis, although others provide basic descriptive statistics. Item Response Theory is done using factor analysis of tetrachoric and polychoric correlations. Functions for analyzing data at multi-levels include within and between group statistics, including correlations and factor analysis. Functions for simulating particular item and test structures are included. Several functions serve as a useful front end for structural equation modeling. Graphical displays of path diagrams, factor analysis and structural equation models are created using basic graphics. Some of the functions are written to support a book on psychometrics as well as publications in personality research. For more information, see the personality-project.org/r webpage.
## Installation
With an activated Bioconda channel (see 2. Set up channels), install with:
conda install r-psych
and update with:
conda update r-psych
A Docker container is available at https://quay.io/repository/biocontainers/r-psych. | 2017-03-28 04:13:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35028567910194397, "perplexity": 5701.957461895123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189667.42/warc/CC-MAIN-20170322212949-00605-ip-10-233-31-227.ec2.internal.warc.gz"} |
src/Doc/Isar_Ref/Proof_Script.thy
http://isabelle.in.tum.de/repos/isabelle/file/c56cff1c0e73/src/Doc/Isar_Ref/Proof_Script.thy
author wenzelm Sun Feb 07 19:32:35 2016 +0100 (2016-02-07) changeset 62269 c56cff1c0e73 parent 61866 6fa60a4f7e48 child 62271 4cfe65cfd369 permissions -rw-r--r--
tuned;
1 (*:maxLineLen=78:*)
2
3 theory Proof_Script
4 imports Base Main
5 begin
6
7 chapter \<open>Proof scripts\<close>
8
9 text \<open>
10 Interactive theorem proving is traditionally associated with ``proof
11 scripts'', but Isabelle/Isar is centered around structured \<^emph>\<open>proof
12 documents\<close> instead (see also \chref{ch:proofs}).
13
14 Nonetheless, it is possible to emulate proof scripts by sequential
15 refinements of a proof state in backwards mode, notably with the @{command
16 apply} command (see \secref{sec:tactic-commands}).
17
18 There are also various proof methods that allow to refer to implicit goal
19 state information that is not accessible to structured Isar proofs (see
20 \secref{sec:tactics}). Note that the @{command subgoal}
21 (\secref{sec:subgoal}) command usually eliminates the need for implicit goal
22 state references.
23 \<close>
24
25
26 section \<open>Commands for step-wise refinement \label{sec:tactic-commands}\<close>
27
28 text \<open>
29 \begin{matharray}{rcl}
30 @{command_def "supply"}\<open>\<^sup>*\<close> & : & \<open>proof(prove) \<rightarrow> proof(prove)\<close> \\
31 @{command_def "apply"}\<open>\<^sup>*\<close> & : & \<open>proof(prove) \<rightarrow> proof(prove)\<close> \\
32 @{command_def "apply_end"}\<open>\<^sup>*\<close> & : & \<open>proof(state) \<rightarrow> proof(state)\<close> \\
33 @{command_def "done"}\<open>\<^sup>*\<close> & : & \<open>proof(prove) \<rightarrow> proof(state) | local_theory | theory\<close> \\
34 @{command_def "defer"}\<open>\<^sup>*\<close> & : & \<open>proof \<rightarrow> proof\<close> \\
35 @{command_def "prefer"}\<open>\<^sup>*\<close> & : & \<open>proof \<rightarrow> proof\<close> \\
36 @{command_def "back"}\<open>\<^sup>*\<close> & : & \<open>proof \<rightarrow> proof\<close> \\
37 \end{matharray}
38
39 @{rail \<open>
40 @@{command supply} (@{syntax thmdef}? @{syntax thmrefs} + @'and')
41 ;
42 ( @@{command apply} | @@{command apply_end} ) @{syntax method}
43 ;
44 @@{command defer} @{syntax nat}?
45 ;
46 @@{command prefer} @{syntax nat}
47 \<close>}
48
49 \<^descr> @{command "supply"} supports fact definitions during goal refinement: it
50 is similar to @{command "note"}, but it operates in backwards mode and does
51 not have any impact on chained facts.
52
53 \<^descr> @{command "apply"}~\<open>m\<close> applies proof method \<open>m\<close> in initial position, but
54 unlike @{command "proof"} it retains ``\<open>proof(prove)\<close>'' mode. Thus
55 consecutive method applications may be given just as in tactic scripts.
56
57 Facts are passed to \<open>m\<close> as indicated by the goal's forward-chain mode, and
58 are \<^emph>\<open>consumed\<close> afterwards. Thus any further @{command "apply"} command
59 would always work in a purely backward manner.
60
61 \<^descr> @{command "apply_end"}~\<open>m\<close> applies proof method \<open>m\<close> as if in terminal
62 position. Basically, this simulates a multi-step tactic script for @{command
63 "qed"}, but may be given anywhere within the proof body.
64
65 No facts are passed to \<open>m\<close> here. Furthermore, the static context is that of
66 the enclosing goal (as for actual @{command "qed"}). Thus the proof method
67 may not refer to any assumptions introduced in the current body, for
68 example.
69
70 \<^descr> @{command "done"} completes a proof script, provided that the current goal
71 state is solved completely. Note that actual structured proof commands
72 (e.g.\ ``@{command "."}'' or @{command "sorry"}) may be used to conclude
73 proof scripts as well.
74
75 \<^descr> @{command "defer"}~\<open>n\<close> and @{command "prefer"}~\<open>n\<close> shuffle the list of
76 pending goals: @{command "defer"} puts off sub-goal \<open>n\<close> to the end of the
77 list (\<open>n = 1\<close> by default), while @{command "prefer"} brings sub-goal \<open>n\<close> to
78 the front.
79
80 \<^descr> @{command "back"} does back-tracking over the result sequence of the
81 latest proof command. Any proof command may return multiple results, and
82 this command explores the possibilities step-by-step. It is mainly useful
83 for experimentation and interactive exploration, and should be avoided in
84 finished proofs.
85 \<close>
86
87
88 section \<open>Explicit subgoal structure \label{sec:subgoal}\<close>
89
90 text \<open>
91 \begin{matharray}{rcl}
92 @{command_def "subgoal"}\<open>\<^sup>*\<close> & : & \<open>proof \<rightarrow> proof\<close> \\
93 \end{matharray}
94
95 @{rail \<open>
96 @@{command subgoal} @{syntax thmbind}? prems? params?
97 ;
98 prems: @'premises' @{syntax thmbind}?
99 ;
100 params: @'for' '\<dots>'? (('_' | @{syntax name})+)
101 \<close>}
102
103 \<^descr> @{command "subgoal"} allows one to impose some structure on backward
104 refinements, to avoid proof scripts degenerating into long @{command
105 apply} sequences.
106
107 The current goal state, which is essentially a hidden part of the Isar/VM
108 configuration, is turned into a proof context and remaining conclusion.
109 This corresponds to @{command fix}~/ @{command assume}~/ @{command show} in
110 structured proofs, but the text of the parameters, premises and conclusion
111 is not given explicitly.
112
113 Goal parameters may be specified separately, in order to allow referring to
114 them in the proof body: ``@{command subgoal}~@{keyword "for"}~\<open>x y z\<close>''
115 names a \<^emph>\<open>prefix\<close>, and ``@{command subgoal}~@{keyword "for"}~\<open>\<dots> x y z\<close>''
116 names a \<^emph>\<open>suffix\<close> of goal parameters. The latter uses a literal \<^verbatim>\<open>\<dots>\<close> symbol
117 as notation. Parameter positions may be skipped via dummies (underscore).
118 Unspecified names remain internal, and thus inaccessible in the proof text.
119
120 ``@{command subgoal}~@{keyword "premises"}~\<open>prems\<close>'' indicates that goal
121 premises should be turned into assumptions of the context (otherwise the
122 remaining conclusion is a Pure implication). The fact name and attributes
123 are optional; the particular name ``\<open>prems\<close>'' is a common convention for the
124 premises of an arbitrary goal context in proof scripts.
125
126 ``@{command subgoal}~\<open>result\<close>'' indicates a fact name for the result of a
127 proven subgoal. Thus it may be re-used in further reasoning, similar to the
128 result of @{command show} in structured Isar proofs.
129
130
131 Here are some abstract examples:
132 \<close>
133
134 lemma "\<And>x y z. A x \<Longrightarrow> B y \<Longrightarrow> C z"
135 and "\<And>u v. X u \<Longrightarrow> Y v"
136 subgoal sorry
137 subgoal sorry
138 done
139
140 lemma "\<And>x y z. A x \<Longrightarrow> B y \<Longrightarrow> C z"
141 and "\<And>u v. X u \<Longrightarrow> Y v"
142 subgoal for x y z sorry
143 subgoal for u v sorry
144 done
145
146 lemma "\<And>x y z. A x \<Longrightarrow> B y \<Longrightarrow> C z"
147 and "\<And>u v. X u \<Longrightarrow> Y v"
148 subgoal premises for x y z
149 using \<open>A x\<close> \<open>B y\<close>
150 sorry
151 subgoal premises for u v
152 using \<open>X u\<close>
153 sorry
154 done
155
156 lemma "\<And>x y z. A x \<Longrightarrow> B y \<Longrightarrow> C z"
157 and "\<And>u v. X u \<Longrightarrow> Y v"
158 subgoal r premises prems for x y z
159 proof -
160 have "A x" by (fact prems)
161 moreover have "B y" by (fact prems)
162 ultimately show ?thesis sorry
163 qed
164 subgoal premises prems for u v
165 proof -
166 have "\<And>x y z. A x \<Longrightarrow> B y \<Longrightarrow> C z" by (fact r)
167 moreover
168 have "X u" by (fact prems)
169 ultimately show ?thesis sorry
170 qed
171 done
172
173 lemma "\<And>x y z. A x \<Longrightarrow> B y \<Longrightarrow> C z"
174 subgoal premises prems for \<dots> z
175 proof -
176 from prems show "C z" sorry
177 qed
178 done
179
180
181 section \<open>Tactics: improper proof methods \label{sec:tactics}\<close>
182
183 text \<open>
184 The following improper proof methods emulate traditional tactics. These
185 admit direct access to the goal state, which is normally considered harmful!
186 In particular, this may involve both numbered goal addressing (default 1),
187 and dynamic instantiation within the scope of some subgoal.
188
189 \begin{warn}
190 Dynamic instantiations refer to universally quantified parameters of a
191 subgoal (the dynamic context) rather than fixed variables and term
192 abbreviations of a (static) Isar context.
193 \end{warn}
194
195 Tactic emulation methods, unlike their ML counterparts, admit simultaneous
196 instantiation from both dynamic and static contexts. If names occur in both
197 contexts, goal parameters hide locally fixed variables. Likewise, schematic
198 variables refer to term abbreviations, if present in the static context.
199 Otherwise the schematic variable is left as such, to be solved by
200 unification with certain parts of the subgoal.
201
202 Note that the tactic emulation proof methods in Isabelle/Isar are
203 consistently named \<open>foo_tac\<close>. Note also that variable names occurring on
204 left hand sides of instantiations must be preceded by a question mark if
205 they coincide with a keyword or contain dots. This is consistent with the
206 attribute @{attribute "where"} (see \secref{sec:pure-meth-att}).
207
208 \begin{matharray}{rcl}
209 @{method_def rule_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
210 @{method_def erule_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
211 @{method_def drule_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
212 @{method_def frule_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
213 @{method_def cut_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
214 @{method_def thin_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
215 @{method_def subgoal_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
216 @{method_def rename_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
217 @{method_def rotate_tac}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
218 @{method_def tactic}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
219 @{method_def raw_tactic}\<open>\<^sup>*\<close> & : & \<open>method\<close> \\
220 \end{matharray}
221
222 @{rail \<open>
223 (@@{method rule_tac} | @@{method erule_tac} | @@{method drule_tac} |
224 @@{method frule_tac} | @@{method cut_tac}) @{syntax goal_spec}? \<newline>
225 (@{syntax named_insts} @{syntax for_fixes} @'in' @{syntax thmref} | @{syntax thmrefs} )
226 ;
227 @@{method thin_tac} @{syntax goal_spec}? @{syntax prop} @{syntax for_fixes}
228 ;
229 @@{method subgoal_tac} @{syntax goal_spec}? (@{syntax prop} +) @{syntax for_fixes}
230 ;
231 @@{method rename_tac} @{syntax goal_spec}? (@{syntax name} +)
232 ;
233 @@{method rotate_tac} @{syntax goal_spec}? @{syntax int}?
234 ;
235 (@@{method tactic} | @@{method raw_tactic}) @{syntax text}
236 \<close>}
237
238 \<^descr> @{method rule_tac} etc. do resolution of rules with explicit
239 instantiation. This works the same way as the ML tactics @{ML
240 Rule_Insts.res_inst_tac} etc.\ (see @{cite "isabelle-implementation"}).
241
242 Multiple rules may only be given if there is no instantiation; then @{method
243 rule_tac} is the same as @{ML resolve_tac} in ML (see @{cite
244 "isabelle-implementation"}).
245
246 \<^descr> @{method cut_tac} inserts facts into the proof state as assumption of a
247 subgoal; instantiations may be given as well. Note that the scope of
248 schematic variables is spread over the main goal statement and rule premises
249 are turned into new subgoals. This is in contrast to the regular method
250 @{method insert} which inserts closed rule statements.
251
252 \<^descr> @{method thin_tac}~\<open>\<phi>\<close> deletes the specified premise from a subgoal. Note
253 that \<open>\<phi>\<close> may contain schematic variables, to abbreviate the intended
254 proposition; the first matching subgoal premise will be deleted. Removing
255 useless premises from a subgoal increases its readability and can make
256 search tactics run faster.
257
258 \<^descr> @{method subgoal_tac}~\<open>\<phi>\<^sub>1 \<dots> \<phi>\<^sub>n\<close> adds the propositions \<open>\<phi>\<^sub>1 \<dots> \<phi>\<^sub>n\<close> as
259 local premises to a subgoal, and poses the same as new subgoals (in the
260 original context).
261
262 \<^descr> @{method rename_tac}~\<open>x\<^sub>1 \<dots> x\<^sub>n\<close> renames parameters of a goal according to
263 the list \<open>x\<^sub>1, \<dots>, x\<^sub>n\<close>, which refers to the \<^emph>\<open>suffix\<close> of variables.
264
265 \<^descr> @{method rotate_tac}~\<open>n\<close> rotates the premises of a subgoal by \<open>n\<close>
266 positions: from right to left if \<open>n\<close> is positive, and from left to right if
267 \<open>n\<close> is negative; the default value is 1.
268
269 \<^descr> @{method tactic}~\<open>text\<close> produces a proof method from any ML text of type
270 @{ML_type tactic}. Apart from the usual ML environment and the current proof
271 context, the ML code may refer to the locally bound values @{ML_text facts},
272 which indicates any current facts used for forward-chaining.
273
274 \<^descr> @{method raw_tactic} is similar to @{method tactic}, but presents the goal
275 state in its raw internal form, where simultaneous subgoals appear as
276 conjunction of the logical framework instead of the usual split into several
277 subgoals. While this is useful for debugging complex method
278 definitions, it should never appear in production theories.
279 \<close>
280
281 end
# Analytic polyhedron
http://en.wikipedia.org/wiki/Analytic_polyhedron
In mathematics, especially several complex variables, an analytic polyhedron is a subset of the complex space $\mathbf{C}^n$ of the form
$\{ z \in D : |f_j(z)| < 1, 1 \le j \le N \}\,$
where $D$ is a bounded connected open subset of $\mathbf{C}^n$ and the $f_j$ are holomorphic on $D$.[1] If the $f_j$ above are polynomials, then the set is called a polynomial polyhedron. Every analytic polyhedron is a domain of holomorphy (and thus pseudoconvex).
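For concreteness, the defining condition $|f_j(z)| < 1$ is easy to test numerically. The sketch below (Python; the polynomials and test points are my own illustrative choices, and the bounded domain $D$ is ignored for simplicity) checks membership in a polynomial polyhedron in $\mathbf{C}^2$:

```python
# Membership test for a polynomial polyhedron {z : |f_j(z)| < 1} in C^2.
# The defining polynomials below are illustrative, not from the article.

def in_polyhedron(z, fs):
    """Return True if |f(z)| < 1 for every defining function f."""
    return all(abs(f(z)) < 1 for f in fs)

# Two polynomial defining functions on C^2; z is a tuple (z1, z2).
fs = [
    lambda z: z[0] * z[1],         # requires |z1 * z2| < 1
    lambda z: 0.5 * (z[0] + z[1])  # requires |z1 + z2| < 2
]

print(in_polyhedron((0.5 + 0.1j, 0.3j), fs))  # a point inside
print(in_polyhedron((2.0, 2.0), fs))          # a point outside
```

The same predicate works verbatim for general holomorphic $f_j$, since only pointwise evaluation is needed.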
The boundary of an analytic polyhedron is the union of the set of hypersurfaces
$\sigma_j = \{ z \in D : |f_j(z)| = 1 \}, 1 \le j \le N.$
An analytic polyhedron is a Weil polyhedron, or Weil domain, if the intersection of any $k$ of the hypersurfaces has dimension no greater than $2n-k$.[2]
https://physicstravelguide.com/theorems/goldstones_theorem?do=diff&rev2%5B0%5D=1526360334&rev2%5B1%5D=1526360404&difftype=sidebyside
theorems:goldstones_theorem
Differences
This shows you the differences between two versions of the page.
theorems:goldstones_theorem [2018/05/15 06:58] jakobadmin ↷ Page moved from advanced_notions:symmetry_breaking:goldstones_theorem to theorems:goldstones_theorem
theorems:goldstones_theorem [2018/05/15 07:00] jakobadmin
Line 1: Line 1: ====== Goldstone's theorem ====== ====== Goldstone's theorem ====== - - <blockquote> - Goldstone's theorem states that whenever a
continuous global symmetry is spontaneously broken, there exists a massless excitation about the spontaneously broken vacuum. Decomposing $\Phi(x)=|\Phi(x)|e^{i\rho(x)}$, $\rho$ transforms as $\rho(x) \to \rho(x) + \theta$. Hence the Lagrangian can depend on $\rho$ only via the derivative $\partial_\mu \rho$; there cannot be any mass term for $\rho$, and it is a massless field. $\rho$ --- identified as the field which transforms inhomogeneously under the broken symmetry --- is referred to as the Goldstone boson. - - <cite>https://arxiv.org/pdf/1703.05448.pdf - - <tabbox Layman> + <tabbox Intuitive> * For an intuitive explanation of Goldstone's theorem, see [[http://jakobschwichtenberg.com/understanding-goldstones-theorem-intuitively/|Understanding Goldstone’s theorem intuitively]] by J. Schwichtenberg * For an intuitive explanation of Goldstone's theorem, see [[http://jakobschwichtenberg.com/understanding-goldstones-theorem-intuitively/|Understanding Goldstone’s theorem intuitively]] by J.
Schwichtenberg - <tabbox Student> + <tabbox Concrete> <blockquote> <blockquote> Line 117: Line 111: <cite>http://www.jstor.org/stable/pdf/10.1086/518324.pdf <cite>http://www.jstor.org/stable/pdf/10.1086/518324.pdf + + + ---- + + **Examples** + + --> Landau phonons in Bose-Einstein condensates# + + "The Bose-Einstein condensation is characterized by the + breaking of a global U(1) gauge group (acting on the Bose particle field + as the U(1) group of Example 1), as very clearly displayed by the free + Bose gas.5 The U(1) breaking leads to the existence of Goldstone + modes, the so-called Landau phonons, and the existence of such excitations + may in turn indicate the presence of a broken U(1) symmetry" [[https://arxiv.org/pdf/1502.06540.pdf |Source]] + + <-- ---- ---- Line 123: Line 133: - <tabbox Researcher> + <tabbox Abstract> <blockquote> <blockquote> Line 157: Line 167: <cite>https://arxiv.org/pdf/1612.00003.pdf <cite>https://arxiv.org/pdf/1612.00003.pdf - --> Common Question 1# + - +
- <-- + Goldstone's theorem states that whenever a continuous global symmetry is spontaneously broken, there exists a massless excitation about the spontaneously broken vacuum. Decomposing $\Phi(x)=|\Phi(x)|e^{i\rho(x)}$, $\rho$ transforms as $\rho(x) \to \rho(x) + \theta$. Hence the Lagrangian can depend on $\rho$ only via the derivative $\partial_\mu \rho$; there cannot be any mass term for $\rho$, and it is a massless field. $\rho$ --- identified as the field which transforms inhomogeneously under the broken symmetry --- is referred to as the Goldstone boson. - --> Common Question 2# + <cite>https://arxiv.org/pdf/1703.05448.pdf + - - <-- - - - --> Landau phonons in Bose-Einstein condensates# - "The Bose-Einstein condensation is characterized by the - breaking of a global U(1) gauge group (acting on the Bose particle field - as the U(1) group of Example 1), as very clearly displayed by the free - Bose gas.5 The U(1) breaking leads to the existence of Goldstone - modes, the so-called Landau phonons, and the existence of such excitations - may in turn indicate the presence of a broken U(1) symmetry" [[https://arxiv.org/pdf/1502.06540.pdf |Source]] - <-- - -
theorems/goldstones_theorem.txt · Last modified: 2020/04/12 15:05 by jakobadmin
# Algorithm for Finding all Empty Ellipses Locked by a Set of Points
https://mathoverflow.net/questions/251942/algorithm-for-finding-all-empty-ellipses-locked-by-a-set-of-points
Is there an algorithm for reporting all empty ellipses, that are locked by a finite set $\mathcal{P}$ of points in the euclidean plane?
• An ellipse is considered empty, if no inner point is an element of $\mathcal{P}$.
• An ellipse is locked by $\mathcal{P}$ if every rotation or translation by an arbitrarily small amount renders the ellipse non-empty; the elements of a continuum of ellipses, all of which are in contact with the same points, are not considered to be locked.
Any information about the problem of determining the set of all locked empty ellipses for a given set of points $\mathcal{P}$ (e.g. complexity of algorithms, bounds on the number of ellipses, etc.) would be appreciated.
• If P are the vertices of a square, what do you want to do, list the continuum many ellipses that are locked? Gerhard "A Really Bad Edge Case" Paseman, 2016.10.11. – Gerhard Paseman Oct 12 '16 at 5:43
• @GerhardPaseman in my question I clarified that an ellipse is locked (among other criteria) if neither of the major axis can be elongated while remaining empty, so in the case of the vertices of a square, the set of locked ellipses is empty; the emphasis is on empty and locked. – Manfred Weis Oct 12 '16 at 5:56
• @GerhardPaseman but I see, that I have to edit the question to address your critique. – Manfred Weis Oct 12 '16 at 6:02
This is not a direct answer, but the techniques in this paper seem applicable to your problem:
Dwyer, Rex A., and William F. Eddy. "Maximal empty ellipsoids." International Journal of Computational Geometry & Applications 6.02 (1996): 169-185.
They enumerate maximal empty ellipsoids, which are defined differently from your notion of "locked": a maximal empty ellipse is one such that every infinitesimal perturbation of its center, axis lengths, or orientation yields an ellipse that is either smaller or not empty.
So this leads to $\Theta(n^2)$ maximal empty ellipses in $d=2$.
The main technique is to map the problem in $\mathbb{R}^2$ to enumerating the facets of a convex hull in $\mathbb{R}^5$, and analogously for arbitrary $d$. The complexity then follows from the upper-bound theorem for convex polytopes.
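The appearance of $\mathbb{R}^5$ reflects the five degrees of freedom of a conic (six coefficients up to scale), so five generic points determine a unique conic. A quick numerical illustration (numpy; the sample points are mine, not from the paper):

```python
import math
import numpy as np

# A conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 has 6 coefficients,
# hence 5 degrees of freedom up to scale: 5 generic points pin it down.

def conic_through(points):
    """Coefficients (a, b, c, d, e, f) of the conic through 5 points."""
    rows = [[x * x, x * y, y * y, x, y, 1.0] for x, y in points]
    # Null space of the 5x6 system via SVD: last right-singular vector.
    _, _, vt = np.linalg.svd(np.array(rows))
    return vt[-1]

# Five points on the ellipse x^2/4 + y^2 = 1.
ts = [0.3, 1.1, 2.0, 3.5, 5.0]
pts = [(2 * math.cos(t), math.sin(t)) for t in ts]
coef = conic_through(pts)

# The recovered conic should vanish (numerically) on a sixth ellipse point.
x6, y6 = 2 * math.cos(0.7), math.sin(0.7)
val = coef @ np.array([x6 * x6, x6 * y6, y6 * y6, x6, y6, 1.0])
print(abs(val) < 1e-9)
```

This is only the degrees-of-freedom count, not the maximal-empty-ellipse enumeration itself, which additionally requires the emptiness tests described in the paper.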
• Thanks Joseph! That definition of maximal empty ellipses matches exactly what I meant by "locked"; as you can see in my question, I had ruled out "continuums" of empty ellipses (in response to Gerhard's critque). – Manfred Weis Oct 13 '16 at 0:33
• Incidentally, the reason that $\mathbb{R}^5$ plays a role is essentially @GerhardPaseman's $5$ parameters – Joseph O'Rourke Oct 13 '16 at 11:29
The location of an ellipse center, and its size and shape are given by 4 real parameters. Throw in rotation with respect to a fixed coordinate system, and you have five-ish real parameters to play with. This suggests to me that you will need to look at subsets of five points of P to determine what is locked.
Motivated by my square example in the comments, I would consider developing an algorithm that, given four points, determines the envelope of the continuum of ellipses going through those four points. Once you have such an envelope, determine which points of P lie inside the envelope, and use that to determine which ellipses are locked. Note that you avoid processing points outside the envelope.
You might be clever on how to develop an envelope using just three points, giving you smarter choices for the fourth and fifth points, but I don't know if smarter means faster.
Gerhard "Maybe Grobner Bases Would Help?" Paseman, 2016.10.12.
• A good preprocessing step is to list (or generate on the fly) all triples of points of P which have no point of P on the triangle formed by such a triple. This should be a simple computation that allows you to pick good sets of four points and avoid bad sets of four and five points. This alone should turn your algorithm into a hopefully cubic-time algorithm in the number of points. Gerhard "Likes Nice And Easy Approximations" Paseman, 2016.10.12. – Gerhard Paseman Oct 12 '16 at 16:32
February 20, 2015
https://chris-wood.github.io/2015/02/20/OSG-Tutorial.html
# 1. Introduction
I am writing this tutorial because I could not find anything similar online. It is my goal to give you a very simple and hands-on introduction to using the grid so that you can get up and running with large-scale computations quickly. Everything in this document is largely based on my experience using the Open Science Grid (OSG) and working with Mats Rynge, one of the members of the OSG and Pegasus teams. If you find this information helpful, please let me know by sending me a message at woodc1@uci.edu. If it was not helpful, or if you find any mistakes or can offer suggestions for improvement, please also let me know by sending me a message at the same address.
The remainder of this tutorial is organized in the following manner. Section 2 starts with an introduction to OSG, describing why you might want to use it and some basic requirements for actually needing to use it. Section 3 then goes into the process of connecting to the grid, uploading and downloading files, and maintaining sessions for extended periods of time. Section 4 continues with an overview of how to set up a workflow for a job, submit and monitor a workflow, and analyze the results. Section 5 then finishes with some closing remarks. It is best to read this tutorial in a linear fashion so as to not miss any important details.
# 2. Purpose
The OSG is a massively distributed computation platform that gives researchers an easy and painless way to execute very large compute-bound applications in a significantly reduced amount of time. Running programs on the OSG is a form of result parallelism in which the work (e.g. the set of inputs) for a particular program is evenly distributed among many, many computational nodes. For example, given a program that has a regular running time denoted by T, partitioning the input of this program among K nodes will theoretically yield a reduced running time of T/K. The wonderful thing about the OSG is that there is generally no limiting upper bound on K, so we are free to speed up our programs to the largest extent possible.
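The T/K argument amounts to the usual chunking scheme: each of the K nodes receives roughly 1/K of the inputs, and the wall-clock time is governed by the largest chunk. A generic sketch (Python; not OSG-specific code):

```python
def partition(inputs, k):
    """Split inputs into k near-equal chunks, one per compute node."""
    base, extra = divmod(len(inputs), k)
    chunks, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        chunks.append(inputs[start:start + size])
        start += size
    return chunks

work = list(range(100))        # 100 work items
chunks = partition(work, 8)    # K = 8 nodes
# Wall-clock time is driven by the largest chunk: ceil(100/8) = 13 items,
# versus all 100 items on a single node -- roughly the T/K speedup.
print(max(len(c) for c in chunks))
```

In practice the chunks become the per-job input files of a workflow, but the load-balancing idea is exactly this.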
By dividing the computation among K nodes we are usually left with K distinct output files that need to be processed further. Should these results need to be combined, or reduced, you simply need to walk all of the output files, applying an appropriate reduction operation (e.g. summing the contents of each output file), to obtain a final, cumulative result. As another example, if one OSG workflow (a workflow is conceptually equivalent to a set of jobs run on the grid) outputs a single integer value from each computational node and you want to find the minimum of all such outputs, you may walk each output file, keeping track of the minimum value observed thus far, and return the final minimum value upon completion. Clearly, the combination or reduction process for these output files is specific to your particular application, and can be done online or offline depending on what kind of analysis is actually required.
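The minimum-reduction example can be sketched as an offline walk over the K per-node output files. The file layout below (one integer per file, with hypothetical names) is only an illustration:

```python
import os
import tempfile

def reduce_min(paths):
    """Walk each per-node output file and keep the smallest value seen."""
    best = None
    for p in paths:
        with open(p) as fh:
            value = int(fh.read().strip())  # each node wrote one integer
        if best is None or value < best:
            best = value
    return best

# Simulate K = 4 per-node output files.
tmp = tempfile.mkdtemp()
paths = []
for i, v in enumerate([42, 7, 19, 23]):
    p = os.path.join(tmp, f"job.out.{i}")
    with open(p, "w") as fh:
        fh.write(str(v))
    paths.append(p)

print(reduce_min(paths))
```

Swapping the comparison for a sum, a concatenation, or any other associative operation gives the corresponding reduction.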
With a general description of how the grid is used to speed up jobs with a lot of computation, you may now wonder how easy it is to actually use the grid. Fortunately, leveraging the resources of the OSG is easier than one may think. It only requires the following:
1. A working program that may or may not require input,
2. An existing data set, if said data set is expensive or difficult to obtain online, or Python code that is capable of generating the data set online, and
3. Patience when configuring the workflow!
In what follows we will walk through the complete process of signing into the main OSG node, setting up and initializing a workflow, checking on the status of the workflow, and finally, collecting the output files when complete. Again, I stress that this tutorial is only meant to be an introduction. For more help, please see the more technical OSG documentation available online or contact Mats Rynge on the OSG/Pegasus team.
# 3. First Steps: Interacting with the Grid (SSH, SFTP, and so on)
The first step to using the OSG is to actually connect to the main server node: osg-xsede.grid.iu.edu. User authentication is done using a traditional RSA-based public/private key pair. That is, you must provide your public key to the server, which will then authenticate you against your private key. I will not go into the details of generating public/private key pairs, as there are many tutorials available online. Alternatively, you may consult the ssh-keygen man page for some basic information.
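For reference, a key pair suitable for SSH authentication can be generated locally with ssh-keygen; the directory and file name below are illustrative only:

```shell
# Generate a 4096-bit RSA key pair non-interactively (empty passphrase
# here just for demonstration; use a real passphrase in practice).
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 4096 -N "" -f "$keydir/demo_key"
ls "$keydir"   # demo_key (private) and demo_key.pub (public)
```

The contents of the resulting `.pub` file are what you would hand over when your OSG account is set up.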
With an appropriate public/private key pair and OSG username, which should be configured in cooperation with Mats Rynge, one may use SSH to open a remote shell on the osg-xsede.grid.iu.edu node using the following command:
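A typical invocation looks like the following; the key path and username are placeholders you must replace with your own values:

```shell
# Open a remote shell on the main OSG node.
ssh -i ~/.ssh/osg_key username@osg-xsede.grid.iu.edu
```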
Other programs, such as sftp for transferring files, can be used in a similar fashion:
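For example, sftp accepts the same identity-file option (again, the key path and username are placeholders):

```shell
# Start an interactive file-transfer session with the main OSG node.
sftp -i ~/.ssh/osg_key username@osg-xsede.grid.iu.edu
```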
In cases where large data sets need to be reliably uploaded from a local machine to the grid for use in a workflow, one may wish to use the rsync program. When rerun after a dropped connection, rsync skips data that has already been transferred, so repeated invocations will eventually ensure that all files from a source directory ~/src/dir/ on the local machine are transferred to a remote directory osg-xsede.grid.iu.edu:/local-scratch/username/some/remote/dir, as follows:

rsync -a -v -e "ssh -i /path/to/private-key" ~/src/dir/. osg-xsede.grid.iu.edu:/local-scratch/username/some/remote/dir/.
Another useful program that may be of value is screen, a program for creating persistent sessions on a remote machine that keep running even after the user logs out. This is useful if a program must run on the remote machine for an extended period of time and you cannot reliably maintain a connection throughout the entire duration of its running time. There are many extensive tutorials for screen online, but for simplicity I summarize some of the most useful features below.
First, check to see that you have screen installed and in your shell’s path. This can be done by typing ‘screen’, as follows:
$ screen

If you have screen installed, you will meet the screen shown in Figure 1 (or something similar). Note that all OSG servers have screen installed, as far as I know.

Every program (including the shell) runs in a window, and each window is given a unique identifier. This identifier can be used later to restore a session that was started before you logged out of the system. Once all programs in a window are exited, the window will close, and when all windows close, screen closes as well. The following commands show you how to create windows and move between them:

C-a c (create window)
C-a N (go to window identified by N, where 0 <= N <= 9)

Note that C-a is synonymous with the key combination "control-a." If you forget which window is which, simply type "C-a w" to see a list of available windows appear in the bottom of your terminal window, as shown in Figure 2.

Now assume that you have a couple of windows open and are running programs that might take some time to finish. You don't want to close the programs when you end your connection. Rather, you want to detach from the session, which effectively decouples the session from the terminal while leaving everything running. To do this, simply type "C-a d." To reattach to the session later, simply run screen as follows:

$ screen -r
and you will be brought right back to where you left off. Figures 3 and 4 below illustrate the effectiveness of this technique.
There are many other useful features that come with screen, but these basic features should be more than sufficient for all tasks you will need to accomplish on the main OSG node. If you need further assistance, search for a more comprehensive screen tutorial online.
# 4. Workflow setup
With the ability to connect to the main OSG node, configure a session, upload and download files, and more, we are now ready to create a workflow. Before going any further, I need to stress that you should not start a workflow without permission from Mats or another member of the OSG/Pegasus team. Novice users who spawn one too many jobs will use precious cycles that might be better spent on other jobs.
To get started, I will describe the basic set of files that are included in the majority of workflows. The hierarchy of these files, relative to the "base" directory of the workflow, is shown below.
• dax-generator.py
• dax.xml
• job-wrapper
• pegasusrc
• sites.xml
• submit
• inputs
As a user you should mainly be concerned with the dax-generator.py script, the job-wrapper script, and the inputs directory. The purpose of these files is outlined below.
### dax-generator.py
This script creates and configures the OSG workflow, which involves partitioning the job workload appropriately among however many computational nodes is needed, wiring input files to jobs (including program binaries and data files), and specifying all output files that will be created on each computational node (including error files).
### job-wrapper
This script contains the code that will execute your program for one particular slice of the input. How you determine an input slice is up to you; it may be a range of indices that your program will loop over, a set of files that the program will read from, and so on. This is specific to your application and workflow.
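To make this concrete, here is a hypothetical sketch of what such a wrapper might look like for the Ramsey example developed later in this tutorial. The program name (ArrowDecider), argument order, and default values are assumptions for illustration, not the actual workflow files; the sketch only assembles and prints the command rather than running it:

```shell
#!/bin/sh
# Hypothetical job-wrapper sketch: the workflow passes each job its input
# slice as command-line arguments; the wrapper forwards them to the program.
LOWER=${1:-0}               # lower bound of the coloring-index slice
UPPER=${2:-131072}          # upper bound of the coloring-index slice
OUT=${3:-out_${LOWER}_${UPPER}}
CMD="java ArrowDecider 8 $LOWER $UPPER"
# A real wrapper would execute: $CMD > "$OUT"
echo "would run: $CMD > $OUT"
```

The key point is simply that every job runs the same wrapper; only the arguments differ.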
### inputs
This directory contains the input files used in your workflow. It is common to place the program binaries (e.g. the class files for a Java program) in this folder.
Before looking at actual code for these files, let's first devise a simple application that one might need to run on the grid. Computations in the field of extremal graph theory commonly require many, many CPU hours. For example, recent work by Lange et al. [1] used approximately 200,000 CPU hours on the grid. Imagine how long that would take on a single machine!
Assume we want to tackle a related Ramsey-like computation on the grid. Namely, we want to run a program that decides whether or not, for every red/blue (binary) edge coloring of a complete graph G on n vertices, it holds that G contains a blue or red triangle. Furthermore, assume we want to find the smallest n for which this is true. These types of problems are often the subject of Ramsey theory. Fortunately, this problem (i.e. finding the smallest n such that this condition holds) is already solved - it is known to be 6. However, for the sake of illustration, assume we didn't know that the answer was 6, and we instead wanted to check if it was true for n = 8. A naive algorithm for answering the previous question is the following:
For each of the 2^28 edge colorings:
    Color the edges of G.
    If there does not exist a red or blue triangle, output no.
    Otherwise, continue.
Output yes.
Note that for a complete graph on 8 vertices there are exactly 8(8 - 1)/2 = (8 × 7)/2 = 28 edges, and so there are 2^28 possible edge colorings. A Java program that implements this algorithm given an integer n is shown in Listing 1. You may run this program with n = 5 and n = 6 to see that in fact the desired value of n is 6, as shown below:
$ java ArrowDecider 5
false
$ java ArrowDecider 6
true
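The edge and coloring counts from the previous paragraph are easy to sanity-check with shell arithmetic:

```shell
# A complete graph on n vertices has n*(n-1)/2 edges, and each edge can be
# colored red or blue independently, giving 2^(edges) colorings.
n=8
edges=$(( n * (n - 1) / 2 ))
colorings=$(( 1 << edges ))
echo "$edges edges, $colorings colorings"   # 28 edges, 268435456 colorings
```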
You do not need to understand all the details (I merely included it because this type of work is fascinating), but you do need to understand a few key elements:
1. The program requires a single input integer n that is used to construct the complete graph.
2. The program outputs a single Boolean value, true or false, depending on whether or not the edge coloring condition was upheld.
3. The program has no external library dependencies (if it did, you would need to work with the OSG/Pegasus team to make sure these libraries would be available on the grid's computational nodes).
Since we don’t want a single job to perform all 2^28 computations by itself, as this would defeat the purpose of using the grid, we must modify the program so that it only checks a specific range of edge colorings. We do this by specifying two more input parameters for the lower and upper bound of coloring indices. That is, two integers i and j such that 0 <= i < j < 2^28, where i is the lower bound and j is the upper bound. The modified program that uses these bounds to check edge colorings is shown in Listing 2.
This program is now ready to be run on the grid. As previously mentioned, we need to modify the dax-generator.py and job-wrapper files to run our workflow. A heavily annotated version of the dax-generator.py script is shown in Listing 3. You need only be concerned with the highlighted lines for now, as they are responsible for tying in input and output files and configuring each job to run with the appropriate arguments. Be sure to carefully read this file so you understand how input files are registered with the grid, output files are specified, and the parameters for each particular job are configured. As previously mentioned, our Java class files are assumed to be located in the ./inputs directory where this workflow is based. You should follow this pattern for your workflows too.
Now that we’ve configured the workflow creation script (dax-generator.py), we need to set up the job-wrapper script that will actually run each job. Recall that in the dax-generator.py script we added a set of jobs to the DAX catalogue with a different set of parameters. Those parameters are passed into the job-wrapper script as command line arguments, which the script then parses and uses to run the Java program, as shown in Listing 4.
At this point, we are finally ready to submit our workflow. If you’re in the workflow base directory, you may do so as follows:
$ ./submit

Once the job is successfully submitted, you will be prompted with the output shown in Figure 6. In addition, you will also receive a friendly email letting you know the job is at the start phase, as shown below.

*** Pegasus Workflow Event **
Time: 2013-08-28T21:21:49+0000
Workflow: /local-scratch/caw/workflows/caw/pegasus/ramsey-arrowing-test/20130828T212141+0000
Job id: 49766786-e17d-4e01-9389-d7306bc14bfa
Event: start
pegasus-status:
UNREADY READY PRE QUEUED POST SUCCESS FAILURE %DONE
  2,057     0   0      3    0       0       0   0.0
Summary: 1 DAG total (Running:1)

Once a job has been submitted, you can monitor and change its status using the pegasus CLI. You can view the possible commands by typing "pegasus-" in the terminal and hitting the tab key. The most important of these commands are shown below for emphasis.

### pegasus-status

Check on the status of an existing workflow (similar to Figure 7 below). You can view how many jobs are queued, running, complete, and how many have failed. This is very useful for making sure your jobs are running smoothly and not halting indefinitely, which is a huge problem in this shared grid.

### pegasus-remove

Remove an existing workflow from the grid. This effectively cancels and abandons all work in progress, so only use this if you are absolutely sure you do not need the output files.

### pegasus-statistics

Gather various statistics about a workflow. This is useful for informational or debugging purposes.

Once a job is finished, you will receive yet another friendly email, similar to the following:

*** Pegasus Workflow Event **
Time: 2013-08-28T22:15:22+0000
Workflow: /local-scratch/caw/workflows/caw/pegasus/ramsey-arrowing-test/20130828T212141+0000
Job id: 49766786-e17d-4e01-9389-d7306bc14bfa
Event: at_end
Status: 0
pegasus-status:
UNREADY READY PRE QUEUED POST SUCCESS FAILURE %DONE
      0     0   0      0    0   2,060       0 100.0
Summary: 1 DAG total (Success:1)

It's finally time to examine the results.
In more realistic computations, we would probably need to download the files from the server and do some sort of offline processing. However, since the output of our program is a simple Boolean value, we will check the correctness of the output using grep. Recall that we have already shown the optimal answer for n to be 6. Therefore, the output from running this program for n = 8 should be "yes," or true, as well.

To check, we first need to navigate to the output directory, which is contained in /local-scratch/username/. For me, this directory has the following subdirectories: data, outputs, and workflows. We are interested in the output, so we change to that directory and drill down to our specific workflow, which is identified by the workflow name specified in the dax-generator.py script and the time when the job was submitted. Once inside this output directory, you may notice that the output is divided into a set of subdirectories. This is done to limit the number of files in any single directory, since too many files in one directory can make processing them very difficult. The output for my "base" directory, as well as the contents of one of these subdirectories, is shown in Figure 8.

Now, from the base directory, we simply use the grep program to check whether any of the outputs returned "no," or false, which would mean that we've either disproved a result that's been known for decades or, more likely, that we have a bug in our program! The output of running grep for the 'false' string through all subdirectories in a recursive fashion is below.

$ grep 'false' -r *
$

Nothing! Fantastic! This is what we hoped for. Now use grep again to search for outputs containing "true".
You will see something similar to the following.

$ grep 'true' -r *
000/out_50855936_50987008:true
000/out_48627712_48758784:true
000/out_195166208_195297280:true
000/out_91488256_91619328:true
000/out_50331648_50462720:true
…
016/out_57147392_57278464:true
016/out_54394880_54525952:true
016/out_57540608_57671680:true
016/out_55443456_55574528:true
016/out_57409536_57540608:true
$
Therefore, our program computed the desired result, and in much less time than it would have taken on our own machine.
And that's it, folks. You should now possess the basic understanding of OSG workflows necessary to get your large-scale computations up and running much more quickly.
# 5. Wrapping Up
In this tutorial we've gone over the basics of using the OSG resources for large-scale computations. You should be able to find your way around and configure other similar workflows with this background knowledge and a little bit of scripting expertise, but please contact me (woodc1@uci.edu), Mats Rynge, or another member of the OSG/Pegasus team should you have any specific questions about setting up a workflow. Also, I want to stress that I would not have been able to write this tutorial without Mats' help throughout my studies. He truly is the ultimate guru!
# 6. References
[1] Ivan Livinsky, Alexander Lange, and Stanislaw Radziszowski. Computation of the Ramsey Numbers R(C_4, K_9) and R(C_4, K_10).
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/appendix-d-trigonometry-d-exercises-page-a-32/12

## Calculus: Early Transcendentals 8th Edition
$\frac{900}{\pi}°$
To convert from radians to degrees, multiply by $\frac{180}{\pi}$: $5 \times \frac{180}{\pi} = \frac{900}{\pi}°$.
http://kaon13.physics.lsa.umich.edu/home/program/scientific-program/direct-test-of-time-reversal-symmetry-and-n-the-entangled-neutral-kaon-system | Home > Program > Scientific Program >
### Direct test of time-reversal symmetry in the entangled neutral kaon system
Speaker: Antonio Di Domenico

We present a novel method to perform a direct T (time reversal) symmetry test in the neutral kaon system, independent of any CP and/or CPT symmetry tests. This is based on the comparison of suitable transition probabilities, where the required interchange of *in* $\leftrightarrow$ *out* states for a given process is obtained by exploiting the Einstein-Podolsky-Rosen correlations of neutral kaon pairs produced at a $\phi$-factory. In the time distribution between the two decays, we compare a reference transition like the one defined by the time-ordered decays ($\ell^-, \pi\pi$) with the T-conjugated one defined by ($3\pi^0, \ell^+$). With the use of this and other T-conjugated comparisons, the KLOE-2 experiment at DA$\Phi$NE could make a statistically significant test.
Kaon13 Conference,
May 1, 2013, 7:21 AM
http://web2.0calc.com/questions/algebraic-long-division | +0
# Algebraic long division
## $$\frac{27x^3-8y^3}{3x-2y}$$
Can this be divided using algebraic long division?
(27x^3 - 8y^3)/(3x - 2y)
Guest Feb 27, 2017
#1
(27x^3 - 8y^3)/(3x - 2y)
You do not need to use algebraic long division. Use this :))
$$\boxed{a^3-b^3=(a-b)(a^2+ab+b^2)}$$
Melody Feb 27, 2017
#2
Right, so:
27x^3 - 8y^3 = (3x)^3 - (2y)^3
$$\frac{\left(3x-2y\right)\left(9x^2+6xy+4y^2\right)}{\left(3x-2y\right)}$$
= 9x^2 + 6xy + 4y^2
Out of curiosity however, is it possible to use algebraic long division on something like this (with more than 1 variable?)
Thanks.
Guest Feb 27, 2017
#3
Hi Guest,
You are lucky that I saw this continued question. It would be a good idea for you to become a member because then you would be able to send me a private message with a link to this thread and ask me to come back to it again.
PLUS I could be much more sure that you see any late answer that may be added. :)
Yes, you can do algebraic division with more than one variable. I have seen other mathematicians use it and I can use it on this one, but I am not sure how often it is useful....
Doing algebraic division with LaTeX is difficult.
I have done it before with great success but it was extremely time consuming. ://
I'll give it a go.
This is different from how I did it before, This is quicker but not quite as nice.
$$\begin{array}{llllll}\\ &&9x^2&\color{Maroon}{+4y^2}&\color{NavyBlue}{+6xy}&&\\\hline 3x-2y|&&27x^3&-8y^3\\ &&27x^3&&-18yx^2\\\hline &&&-8y^3&\color{NavyBlue}{+18yx^2}\\ &&&&+18yx^2&-12xy^2\\\hline &&&-8y^3&&\color{Maroon}{+12xy^2}\\ &&&-8y^3&&+12xy^2\\\hline &&&+0&&+0\\\hline \end{array}$$
Heureka or maybe Max
Could you please show me how to make the horizontal lines the correct length ??
Melody Feb 27, 2017
#4
hi Melody,
typesetting polynomial division can easily be done with the polynom package:
Example: $$(27x^3 - 8y^3)/(3x-2y)$$
\usepackage{polynom}
\begin{document}
\textbf{Style A:}\par % this is the default
\polylongdiv[style=A]{27x^3 - 8y^3}{3x-2y}
\textbf{Style B:}\par
\polylongdiv[style=B]{27x^3 - 8y^3}{3x-2y}
\textbf{Style C:}\par
\polylongdiv[style=C]{27x^3 - 8y^3}{3x-2y}
\end{document}
heureka Feb 28, 2017
#5
Thanks very much Heureka,
But
This does not work in this forum though, does it? I cannot make it work :/ I assume the polynom package is not included. :(
Do you do it all directly on your hard drive or do you go somewhere else on the net to do LaTeX like this?
Melody Feb 28, 2017
#6
Hi Melody,
I have installed LaTeX on my computer. The setup includes TexStudio.
See picture:
all free software.
Then I snapped it as an image into web2.0calc.
I do it all directly on my hard drive.
Here I think we have only a minimal LaTeX.
heureka Feb 28, 2017
https://ask.sagemath.org/answers/54078/revisions/
This looks like a missing feature and/or a bug. Note that the documentation for S.homology() says, regarding generators:
Since trac ticket #6100, the result may not be what you expect when not using CHomP since its return is in terms of the chain complex.
The bug is that there is no result at all, but you can recover it by doing:
sage: S.chain_complex().homology(generators=True)
{0: [(Z, Chain(0:(0, 0, 0, 1, 0, 0)))],
1: [(Z, Chain(1:(0, 1, -1, 0, 0, 0, -1, 1, 0))),
(Z, Chain(1:(0, 0, 0, 0, 1, -1, -1, 1, 1)))],
2: []}
To interpret this, you need to know how the simplicial complex is turned into a chain complex. The simplices in each dimension are sorted in order to construct the matrices representing the boundary maps, so to get the first generator in dimension 1, for example, you would look at
sage: S._n_cells_sorted(1)
[(0, 3), (0, 4), (0, 5), (1, 2), (1, 3), (1, 5), (2, 4), (2, 5), (3, 4)]
The first generator is given by (0, 1, -1, 0, 0, 0, -1, 1, 0), which means the sum of 1 × (the simplex at index 1 in the list, counting from 0), -1 × (the simplex at index 2), and so on:
(0, 4) - (0, 5) - (2, 4) + (2, 5)
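This read-off is purely mechanical; as a plain-Python illustration (no Sage needed), pairing the coefficient vector with the sorted 1-cells recovers the same cycle:

```python
# Pair each coefficient in the generator vector with the corresponding
# sorted 1-simplex and keep only the nonzero terms.
cells = [(0, 3), (0, 4), (0, 5), (1, 2), (1, 3), (1, 5), (2, 4), (2, 5), (3, 4)]
coeffs = [0, 1, -1, 0, 0, 0, -1, 1, 0]
terms = [(c, s) for c, s in zip(coeffs, cells) if c != 0]
print(terms)  # [(1, (0, 4)), (-1, (0, 5)), (-1, (2, 4)), (1, (2, 5))]
```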
https://www.nature.com/articles/s12276-018-0080-7?error=cookies_not_supported&code=0c91c19b-dfcc-4290-a58c-88a1b0323f24 | Article | Open | Published:
# Using tyrosinase as a tri-modality reporter gene to monitor transplanted stem cells in acute myocardial infarction
## Abstract
The study aimed to investigate the feasibility of noninvasive monitoring of bone marrow mesenchymal stem cells (MSCs) transduced with the tyrosinase reporter gene for acute myocardial infarction (AMI) with photoacoustic imaging (PAI), magnetic resonance imaging (MRI), and positron emission tomography (PET) in vitro and in vivo. MSCs were transduced with a lentivirus carrying a tyrosinase reporter gene. After transduction, the rate of 18F-5-fluoro-N-(2-[diethylamino]ethyl)picolinamide (18F-5-FPN) uptake was measured. PAI and MRI of stable cell lines expressing tyrosinase (TYR-MSCs) were performed in vitro. An AMI model was induced and verified. TYR-MSCs and MSCs were injected into the margins of the infarcted areas, and PAI, MRI, and PET images were acquired 1, 7, 14, 21, and 28 days after cell injection. Sham-operated models without injection were used as the control group. TYR-MSCs showed noticeably higher uptake of 18F-5-FPN and stronger signals in T1-weighted MRI and PAI than non-transduced MSCs. In vivo studies revealed prominent signals in the injected area of the infarcted myocardium on PAI/MRI/PET images, whereas no signal could be seen in rats injected with non-transduced MSCs or sham-operated rats. The uptake values of 18F-5-FPN in vivo showed a slight decrease over 28 days, whereas MRI and PAI signal intensity decreased dramatically. MSCs stably transduced with the tyrosinase reporter gene could be monitored in vivo in myocardial infarction models by PET, MRI, and PAI, providing a feasible and reliable method for checking the viability, location, and dwell time of transplanted stem cells.
## Introduction
Stem cell transplantation is a new method aimed at reversing myocardial injury and improving cardiac function1. Unfortunately, only a minute number of these laboratory studies can be transferred to clinical practice due to limited methods to monitor the fate of bone marrow mesenchymal stem cells (MSCs) after their transplantation into injured hearts. Recent developments in imaging systems and new probes have allowed investigators to perform multimodality imaging in animal models that have employed MSC transplantation. For example, Pei et al.2 constructed a triple-fused reporter gene that combined herpes simplex virus type 1 thymidine kinase, enhanced green fluorescent protein, and firefly luciferase to monitor stem cells using positron emission tomography (PET), fluorescence, and bioluminescence imaging. However, integrating different active groups into one entity is complicated and time-consuming. Most importantly, the deactivation of one functional group will lead to failure of the whole construction. In addition, the employed adenovirus transduction leads to transient gene expression, and gene expression levels decline quickly with time. It is necessary to obtain a simple multimodality molecular probe that is easily constructed and stably expressed.
Human tyrosinase, the key enzyme in melanin production, was first evaluated by Qin et al.3 as a stand-alone reporter gene for in vitro and in vivo multimodality imaging, including photoacoustic imaging (PAI), magnetic resonance imaging (MRI), and PET. We first established a tyrosinase reporter gene system under the control of the Tet-on gene expression system in vitro in tumor cell lines, which were proof-of-concept studies4. In this study, tyrosinase was used as a multifunctional reporter gene to track stem cells transplanted in an area of myocardial infarction. After transducing the tyrosinase gene into MSCs in Sprague-Dawley (SD) rats, melanin was synthesized and subsequently absorbed light energy to achieve PAI5, bound with 18F-5-fluoro-N-(2-[diethylamino]ethyl)picolinamide(18F-5-FPN) specifically to enable PET imaging6, and combined with iron for visibility on MRI7. The aim of this study was to establish a simple multimodality reporter gene system using tyrosinase and combining PAI, PET, and MRI to monitor the fate of MSCs after transplantation into rat models of AMI.
## Materials and Methods
### PET imaging tracers
18F-5-FPN is a benzamide analog that specifically binds melanin. The synthesis process was performed according to our previously published procedures6. The reaction was completed in a synthesis module (GE TraceLab FX-XN, GE Healthcare, Milwaukee, WI, USA).
### Construction of a recombinant lentiviral vector carrying the tyrosinase reporter gene
Tyrosinase complementary DNA was kindly provided by Dr. Zhen Cheng of Stanford University. The tyrosinase sequence was amplified and digested with AgeI/NheI, then cloned into the Ubi-MCS-3FLAG-SV40-puromycin vector, which carried puromycin-resistance genes (Shanghai Genechem Co. Ltd., Shanghai, China).
### Isolation, cultivation, and identification of TYR-MSCs
All experiments were performed in accordance with protocols approved by the Animal Care and Use Committee of Huazhong University of Science and Technology, China. Rat bone marrow MSCs were isolated and purified from 4-week-old SD rats (male, 80–100 g) by combing gradient density centrifugation and adhesion separation. The surface makers CD44, CD90, CD34, and CD33 (antibodies from Abcam, Cambridge, MA, USA) were detected by flow cytometry. Transduction of tyrosinase lentiviral vectors into MSCs was performed at a multiplicity of infection of 38. Antibiotic selection with 1.75 µg/mL puromycin began 72 h after transfection and lasted for 5 days to remove the MSCs that were not transduced successfully9. Masson-Fontana, western blot, and tyrosinase activity assays were performed to demonstrate enhanced melanin synthesis in stably transduced cell clones10.
### In vitro cellular uptake assays
Cellular uptake assays were performed on TYR-MSCs, MSCs, and TYR-MSCs + blocking. Cells were seeded in 24-well plates at a density of 2 × 10⁵ cells per well. After overnight incubation, cells were washed twice with PBS. Then, 200 µL of complete DMEM-F12 medium containing 37 kBq of 18F-5-FPN was added to each well. After incubation with 18F-5-FPN at 37 °C for increasing intervals (30, 60, and 120 min), media were removed, and cells were washed twice with PBS and lysed with 1 N NaOH for 5 min. For the blocking study, TYR-MSCs were incubated for 1 h at 37 °C with 18F-5-FPN (37 kBq) in the presence of 100 µL of 10⁻⁵ M standard 19F-5-FPN. Radioactivity was measured using a gamma counter (2470 WIZARD; PerkinElmer, Waltham, MA, USA). The uptake rate was obtained using the following equation2, and all the above experiments were performed three times with triplicate wells.
$$\mathrm{Uptake\ rate}\,(\%) = \left[\mathrm{counts_{intracellular}}/\left(\mathrm{counts_{extracellular}}+\mathrm{counts_{intracellular}}\right)\right] \times 100\%$$
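As a minimal sketch, the uptake-rate calculation above can be expressed in a few lines of Python; the count values here are hypothetical placeholders, not study data:

```python
def uptake_rate(counts_intracellular: float, counts_extracellular: float) -> float:
    """Cellular uptake rate (%): intracellular counts divided by total
    (intracellular + extracellular) counts, times 100."""
    total = counts_intracellular + counts_extracellular
    return counts_intracellular / total * 100.0

# Hypothetical gamma-counter readings (counts per minute):
print(round(uptake_rate(786, 9214), 2))  # 7.86
```

A value of 7.86% would correspond to the 60-min peak reported for TYR-MSCs below.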
### In vitro MRI
MRI of cells was performed using a 7.0 T MRI scanner (Varian, Palo Alto, CA, USA). Increasing numbers of cells (5 × 10⁵, 2.5 × 10⁶, and 5 × 10⁶) were embedded in 1% agarose11, and T1-weighted imaging (T1WI) was performed12. To increase the production of melanin, the TYR-MSCs + tyrosine line was pretreated with 2 mM tyrosine for 24 h. T1WI parameters included a field of view of 7.0 × 3.5 cm, matrix size of 256 × 256, section thickness of 1 mm, repetition time of 500 ms, and echo time of 11 ms. Image analysis was performed using ImageJ (NIH, Bethesda, MD, USA). The experiments were performed three times with triplicate wells.
### In vitro PAI
PAI (Endra Nexus 128 Photoacoustic Imaging System, Endra Life Sciences, Ann Arbor, MI, USA) was performed on increasing concentrations of cells, ranging from 5 × 10⁴ to 1 × 10⁷ cells per mL, embedded in 1% agarose. Photoacoustic signals generated by a given laser pulse in a target at a 5-mm depth and 680-nm wavelength in a phantom were detected by all transducers13. Raw data were exported in DICOM format and then reconstructed and displayed in OsiriX (Pixmeo, Geneva, Switzerland). The experiments were performed three times with triplicate wells.
### Acute myocardial infarction animal model preparation
Adult SD rats (male, 160–200 g) were anesthetized by an intraperitoneal (ip) injection of 3% sodium pentobarbital (35 mg/kg, Merck, Darmstadt, Germany) and artificially ventilated with an animal ventilator (DW-3000, Jinyang Wanda, Beijing, China). Left thoracotomy was performed, and the left anterior descending coronary artery was permanently ligated at its origin to establish acute myocardial infarction (AMI) models14. The sham-operated controls underwent the same surgery except that the coronary artery was not ligated. A total of 21 AMI models and 13 control models were successfully established and used for in vivo experiments. Thirty minutes after ligation of the left anterior descending coronary artery, TYR-MSCs or MSCs (2 × 10⁶ cells in 50 μL PBS) were delivered in eight injections into the margin of the infarcted area. Seventy-two hours after ligation of the coronary artery, five other AMI models without transplanted TYR-MSCs or MSCs were humanely killed for 2,3,5-triphenyltetrazolium chloride (TTC) staining (n = 3) or hematoxylin and eosin staining (n = 2). Five control models were also killed for TTC (n = 3) or hematoxylin and eosin staining (n = 2). Left ventricular (LV) infarct size was measured in six to eight transverse sections of 1–2 mm cut from apex to base; the infarct zone was identified by its white color on TTC staining. Infarct size was expressed as a percentage of the total left ventricular area, and the mean of all slices from each heart was calculated using the following formula2:
$$\mathrm{Infarct\ size}\,(\%) = \left(\mathrm{infarcted\ area}/\mathrm{total\ left\ ventricle\ area}\right) \times 100\%$$
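Because infarct size is computed per slice and then averaged over the heart, the calculation can be sketched as follows; the planimetry values are hypothetical, not study data:

```python
def infarct_size_percent(infarcted_areas, lv_areas):
    """Per-slice infarct size (%) from TTC-stained transverse sections,
    then the per-heart mean, as described above. Inputs are parallel
    lists of planimetric areas (arbitrary units) for each slice."""
    per_slice = [inf / total * 100.0 for inf, total in zip(infarcted_areas, lv_areas)]
    return sum(per_slice) / len(per_slice)

# Hypothetical planimetry values for six apex-to-base slices:
infarcted = [12.0, 15.0, 18.0, 14.0, 10.0, 8.0]
left_ventricle = [40.0, 42.0, 45.0, 41.0, 38.0, 36.0]
print(round(infarct_size_percent(infarcted, left_ventricle), 1))
```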
To perform hematoxylin and eosin staining, the hearts of the models were quickly removed and cut into six transverse slices from apex to base. Subsequently, partial 8-μm transverse slices from each section were prepared for hematoxylin and eosin staining.
18F-FDG myocardial metabolic imaging was performed to confirm successful coronary occlusion in AMI models transplanted with TYR-MSCs (n = 8) or MSCs (n = 8), 24 h before 18F-5-FPN imaging15. MRI, PET, and PAI were performed at five time points post operation: 1, 7, 14, 21, and 28 days. Control models (n = 8) underwent 18F-5-FPN PET, MRI, and PAI at the same time points.
### In vivo 18F-FDG myocardial metabolic imaging
After fasting for 4 h, 16 AMI rats, 8 control models, and 2 normal rats were anesthetized with 2% isoflurane in oxygen and injected via the tail vein with a mean dose of 3.7 MBq of 18F-FDG. A PET scan (Trans-PET BioCaliburn 700, Raycan Technology Co., Ltd., Suzhou, China) focused on the chest was acquired for 15 min starting 60 min after the injection.
### In vivo 18F-5-FPN animal PET imaging
The TYR-MSCs group (n = 8), MSCs group (n = 8), and control group (n = 8) were injected intravenously with 3.7 MBq of 18F-5-FPN. After 1 h, rats were anesthetized with 2% isoflurane in oxygen and placed in the prone position. The PET scan acquisition conditions were identical to those used for 18F-FDG. Images were reconstructed into a 280 × 280 × 104 matrix using 3D OSEM with a pixel size of 0.5 × 0.5 mm² and a slice thickness of 0.5 mm; this algorithm produces images consisting of 0.5 × 0.5 × 0.5 mm³ voxels. Regions of interest (ROIs) were drawn manually and analyzed with AMIDE (UCLA, Los Angeles, CA, USA). Signals were expressed as percent of injected dose per gram of tissue (%ID per g).
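The %ID per g metric follows the standard definition (decay-corrected tissue activity per gram, as a percentage of the injected dose). A minimal sketch, with hypothetical ROI values rather than study data, and assuming the inputs are already decay-corrected:

```python
def percent_id_per_gram(roi_activity_mbq: float, roi_mass_g: float,
                        injected_dose_mbq: float) -> float:
    """Standard %ID/g: activity in the ROI per gram of tissue,
    expressed as a percentage of the injected dose."""
    return (roi_activity_mbq / roi_mass_g) / injected_dose_mbq * 100.0

# Hypothetical ROI over the transplant site: 0.0659 MBq in 1 g of tissue,
# after a 3.7 MBq injection.
print(round(percent_id_per_gram(0.0659, 1.0, 3.7), 2))  # 1.78
```

A value of 1.78 %ID per g would match the day-1 TYR-MSCs signal reported below.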
### In vivo MRI
The TYR-MSCs group (n = 8), MSCs group (n = 8), and control group (n = 8) were continuously anesthetized with 2% isoflurane in oxygen and placed in the prone position. The scanning parameters were as follows: repetition time of 48 ms; echo time of 3 ms; slice thickness of 1 mm; field of view of 4.5 × 4.5 cm; and 192 × 192 matrix. Transplanted cells were detected on T2*-weighted images as hypointensities caused by Fe³⁺-bound melanin16. Image analysis was performed using ImageJ.
### In vivo PAI
The TYR-MSCs group (n = 8), MSCs group (n = 8), and control group (n = 8) were anesthetized via ip injection of 3% sodium pentobarbital (35 mg/kg) and imaged with the same PAI system and wavelength employed for the in vitro studies17. After each rat was scanned, the signals detected by the ultrasonic transducers were directed to a computer system to reconstruct two-dimensional and three-dimensional images using imaging software (OsiriX Foundation, Geneva, Switzerland)18.
### Statistical analysis
Quantitative data are expressed as the means ± SD. Statistical analysis was performed using one-way analysis of variance and the Student’s t test (SPSS 16.0 software package, SPSS Inc., Chicago, IL, USA). A 95% confidence interval was chosen to determine the significance of differences between groups. A probability value of P < 0.05 was considered statistically significant.
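The analysis described above (one-way ANOVA across groups, Student's t test for pairwise comparison, significance at P < 0.05) can be sketched with SciPy instead of SPSS; the group values here are hypothetical placeholders, not study data:

```python
# One-way ANOVA across three groups plus a two-sample t test,
# mirroring the analysis described above (SciPy in place of SPSS).
from scipy import stats

# Hypothetical uptake values (%) for three groups:
tyr_mscs = [7.5, 7.9, 8.2]
mscs = [1.1, 1.2, 1.3]
blocking = [2.0, 2.2, 2.1]

f_stat, p_anova = stats.f_oneway(tyr_mscs, mscs, blocking)
t_stat, p_ttest = stats.ttest_ind(tyr_mscs, mscs)

alpha = 0.05  # corresponds to the 95% confidence level used in the study
print(f"ANOVA p = {p_anova:.3g}, t test p = {p_ttest:.3g}")
print("significant" if p_ttest < alpha else "not significant")
```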
## Results
### In vitro tyrosinase introduction and melanin expression in MSCs
MSCs were successfully transduced with lentivirus carrying the tyrosinase reporter gene and selected with puromycin. After centrifugation, TYR-MSCs were black in color, whereas non-transduced MSCs were white or light yellow (Fig. 1a). TYR-MSCs showed a specific band of ~96 kDa corresponding to tyrosinase (Fig. 1b). Masson-Fontana staining demonstrated melanin deposition within TYR-MSCs (Fig. 1c). Cellular tyrosinase activity analysis showed that the absorbance at 490 nm was substantially higher in TYR-MSCs than in MSCs (0.65 ± 0.12 vs 0.11 ± 0.02; n = 3, P < 0.05).
### Cellular uptake of 18F-5-FPN
Uptake levels of 18F-5-FPN in TYR-MSCs and MSCs at 30, 60, and 120 min are shown in Fig. 2a. The uptake value of TYR-MSCs increased from 6.32 ± 0.27% at 30 min to 7.86 ± 0.85% at 60 min, and then slightly decreased to 6.83 ± 0.24% at 120 min. The uptake values of MSCs remained at a low level (1.22 ± 0.19%, 1.15 ± 0.15%, and 1.12 ± 0.2% at 30, 60, and 120 min, respectively) and did not increase with longer incubation times. The uptake value of TYR-MSCs was significantly higher than that of MSCs at all time points (P < 0.01). Administration of cold 19F-5-FPN inhibited the binding of 18F-5-FPN in TYR-MSCs in a concentration-dependent pattern, as shown in Fig. 2b, illustrating the specificity of binding of 18F-5-FPN to melanin in vitro.
### In vitro MRI and PAI
T1WI MRI acquired at different concentrations of cells (Fig. 3a) revealed hyperintensity of the TYR-MSCs. The MR signals increased with increasing cell concentrations. TYR-MSCs treated with both FeCl3 and tyrosine showed a higher signal than cells treated with FeCl3 or tyrosine alone. MSCs produced a low signal on T1WI at all cell concentrations. The signals of MSCs treated with FeCl3 slightly increased with an increasing number of MSCs. Fig. 3b shows quantitative analysis of MR signals in vitro in all eight groups.
As shown in Fig. 3c, the PAI signals of TYR-MSCs and TYR-MSCs treated with tyrosine increased with increasing cell concentrations. No photoacoustic signals were detected in the MSCs and MSCs treated with tyrosine, even at 5 × 10⁵ cells. Fig. 3d shows quantitative analysis of the photoacoustic signals in vitro in TYR-MSCs with and without tyrosine, which shows that tyrosine enhanced the PAI signal.
### Histological verification of myocardial infarction
The percent infarction of the left ventricular mass determined by TTC staining was 38.1% ± 7.3% (n = 3) in AMI animals, whereas normal rats and control models showed no area of infarction (Fig. 4a). Clearly absent uptake of 18F-FDG in the anterior wall was noted on myocardial metabolic imaging of the AMI animals (Fig. 4b), whereas uniform uptake in the myocardium was seen in normal rats and control models. Fig. 4c shows hematoxylin and eosin staining of infarcted myocardium, normal myocardium, and the border between them. Normal myocardial fibers are arranged in neat rows with abundant capillaries. In the AMI models, the cardiomyocytes are necrotic with surrounding edema and denaturation, distinct lymphocyte infiltration, myocardial fiber rupture, and vacuolar denaturation (Fig. 4c).
### In vivo PET, MRI, and PAI
PET images of rats injected with TYR-MSCs and MSCs are shown in Fig. 5; the images were obtained 1 h after injection of 18F-5-FPN. Obvious uptake of 18F-5-FPN in the transplanted area was seen in the rats with transplanted TYR-MSCs. The signals showed a decreasing trend from 1.78 ± 0.22 %ID per g to 1.62 ± 0.13 %ID per g (n = 8) from day 1 to 28 post injection, but the difference was not significant. No 18F-5-FPN uptake was seen in the MSCs group or control group.
On the first day after TYR-MSC transplantation, low-intensity MRI signals were visible in the transplanted area on T2* sequences (Fig. 6a). The area of low-intensity signals decreased over time. Signal contrast (%) of TYR-MSCs showed a tendency to increase gradually from 59.18 ± 4.9 to 80.5 ± 8.8 (n = 8), indicating that the intensity of the MRI signal decreased with time. At 28 days after transplantation, the low-intensity signals became unclear. For the MSCs group and control group, no low-intensity signals were observed.
As shown in Fig. 6b, rats transplanted with TYR-MSCs produced high photoacoustic signal intensities, whereas the MSCs group and control group did not produce photoacoustic signals. The PAI signals of TYR-MSCs were 472.6 ± 48.1, 434.8 ± 51.2, 387.3 ± 62.0, 345.9 ± 79.8, and 283.4 ± 49.2 at 1, 7, 14, 21, and 28 days post injection, respectively (n = 8).
### Quantitative analysis of multimodality in vivo
Fig. 7 shows the quantitative analysis of multimodality imaging in vivo. The data indicated that TYR-MSCs demonstrated clear signals on PET (Fig. 7a), MRI (Fig. 7b), and PAI (Fig. 7c). The signal intensity of MRI and PAI decreased significantly with time (P < 0.05); however, there was no significant difference between the PET signals on day 1 and day 28, which implied that PET could be used to track TYR-MSCs in vivo for a longer time than MRI or PAI.
## Discussion
The rapid growth of regenerative medicine has made cell replacement therapy possible for myocardial infarction19,20,21. However, numerous challenges remain in using this therapy, including tracking the location, survival, distribution, and differentiation process of transplanted cells in vivo. Multimodality molecular imaging of reporter genes allows noninvasive assessment of stem cell therapy and has the potential to diminish the shortcomings of any single imaging modality and provide a more complete picture22.
In this study, stable MSCs expressing melanin were successfully established after lentiviral transduction with the tyrosinase reporter gene. In vitro experiments showed that TYR-MSCs exhibited clear MRI and PAI signals and could specifically take up 18F-5-FPN. After injection of TYR-MSCs into the infarcted myocardium, clear signals could be seen on PET, MRI, and PAI at 28 days, suggesting the stability of melanin in viable TYR-MSCs. To the best of our knowledge, this is the first study in which tyrosinase has been used as a stand-alone reporter gene for multimodality monitoring of the fate of stem cells in vivo after transplantation into infarcted myocardium. Tyrosinase, when used as a multimodality reporter gene, does not require co-administration of an enzymatic substrate, and in theory, it should produce enough melanin for successful PET, MRI, and PAI once a minimum threshold cell number is reached. Previous studies have indicated that tyrosinase expression exhibits low-level toxicity in mammalian cells23, and tyrosinase elicits less of an immune response than other exogenous reporter genes.
The mechanism of tri-modality imaging of the tyrosinase reporter gene has been previously discussed4. Although the use of tyrosinase as a tri- or bi-modality reporter gene has been validated previously by us3, 4, 6 and other investigators13, the previous approaches had some limitations, such as the inability to obtain stable transduced cell lines, lack of in vivo imaging, or not using PET. Thanks to the newly synthesized probe 18F-5-FPN, which can bind to melanin quickly with high binding capacity and high specificity, tyrosinase gene expression can be detected in vitro and in vivo through 18F-5-FPN by specifically combining with melanin. Based on the technology of lentivirus transfection, stable cell lines were established, and the signals may last for at least 28 days.
PAI is an emerging hybrid molecular imaging tool owing to its unique property of combining optical and acoustic imaging with higher spatial resolution24. Melanin has strong optical absorption over a broad spectrum25, which allows for good tissue penetration. Meanwhile, high resolution can be maintained in PAI because the photoacoustic wave has low scattering in tissue26. Recently, some studies have shown that tyrosinase can be used as a reporter gene for PAI given that tyrosinase modulates melanin synthesis27. Märk et al.28 transfected rat MSCs with reporter genes co-expressing tyrosinase and a fluorescent protein (mCherry) and performed photoacoustic imaging in small animal models of tissue regeneration. However, their study lacked quantitative analysis of photoacoustic signals and continuous observation in vivo. In our study, 2 × 10⁶ TYR-MSCs implanted into the infarcted myocardium produced a strong signal on PAI, suggesting its high sensitivity. In vitro, 2.5 × 10³ cells (5 × 10⁴ cells per mL) were sufficient for imaging. Although the intensity of the PAI signal decreased over time, the signal was still visible on day 28, which means PAI is a feasible method for long-term tracking of stem cells in vivo. However, the tissue penetration depth is limited to 5 cm, increasing the difficulty of finding a lesion in deep tissue29. Blood flow can also produce strong photoacoustic signals, which may distort the signals produced by TYR-MSCs. PET and MRI could be used to rectify the limitations of PAI.
Among the most important properties of melanin is its ability to strongly chelate metal ions, such as Cu²⁺, Mn²⁺, and Fe³⁺, leading to shortened T1 and T2 relaxation times in vitro and in vivo on MRI30. The typical contrast pattern observable in melanoma shows hyperintensity on T1WI and hypointensity on T2*-weighted images. Some studies have reported that microscopic particles of iron oxide bound to MSCs can be tracked in vivo using MRI31, 32. However, the reliability of iron particle tracking of transplanted stem cells has been challenged by several studies because iron particles may be engulfed by macrophages after stem cell death33. Microscopic particles of iron oxide also cannot be reproduced by transplanted MSCs, which decreases the signal duration. Amsalem et al.34 labeled rat MSCs with superparamagnetic iron oxide nanoparticles to track MSCs in vivo using MRI. After 4-week follow-up, co-staining for iron and ED1 (a resident macrophage marker) showed that the iron-positive cells were cardiac macrophages. When nanoparticles are used to track stem cells, the cells have to be labeled in vitro, and the signal is diluted if cells continue to divide, leading to decreased signal intensity per cell. A stable cell line expressing tyrosinase as a reporter gene has better biocompatibility and a longer imaging time than cells labeled with microscopic particles or nanoparticles.
Recent studies have demonstrated the feasibility of PET for analyzing the fate of stem cells transplanted into the myocardium in vivo. Kang et al.35 labeled stem cells with 18F-FDG and injected them into patients with myocardial infarction via an intracoronary catheter after stenting of infarct-related arteries. PET/computed tomography was performed to trace the injected stem cells. However, the process of labeling was completed in vitro, and the longest tracking time lasted only 20 h after transplantation. In our previous study, 18F-5-FPN, a benzamide analog specifically targeting melanin in vitro and in vivo with high affinity and retention, was developed. In this study, 18F-5-FPN showed an excellent ability to track TYR-MSCs in vivo. As normal myocardial cells did not take up 18F-5-FPN, myocardial background radiation was very low, which led to the benefit of excellent PET imaging quality. The intensity of PET signals did not show an obvious decrease with time.
Comparing the three imaging modalities, PET imaging showed clear and obvious signals until day 28, which illustrates that the stem cells were alive and produced melanin constantly. MRI signals could also be seen, but they were not as clear as PET signals, and they decreased significantly with time. PAI combines strong optical contrast and high ultrasonic resolution in a single modality, but it provides less anatomical information than MRI. Any single imaging method has some limitations. New multimodality imaging techniques have achieved great progress in the field of diagnostic imaging.
MSCs stably transduced with the tyrosinase gene produce melanin, which is the basis for multimodality imaging with PET, MRI, and PAI for assessing the viability, location, and dwell time of transplanted stem cells. Functional imaging using these cells is a future goal. Thus far, we have performed only transplanted stem cell tracking in the infarcted myocardium. Future work may incorporate quantitative analysis of tyrosinase expression in vivo and the recovery of cardiac function after stem cell transplantation. Most studies on stem cell tracking have focused on the location and duration of stem cell viability. Truly functional multimodality molecular imaging based on reporter gene technology has a promising future.
## Limitations
Some limitations should be mentioned. First, although an AMI model with a homogeneous infarct area was established, the results might not be representative of human MI. The location, area, and complications of human MI are highly variable. Multimodality imaging using a tyrosinase reporter gene for assessing transplanted stem cells in human MI may still have a long way to go. Second, permanent coronary artery ligation was performed for the MI model in this study, whereas ischemia/reperfusion models are more representative and more closely resemble the physiology of MI in humans than permanent ligation. Last but not least, it would be much better to have a known marker (e.g., green fluorescent protein) to validate the viability of tyrosinase, which would also help verify the specific uptake of 18F-5-FPN in vitro and in vivo. Despite these limitations, we confirmed that MSCs stably transduced with the tyrosinase gene produce melanin, which is the basis for multimodality imaging with PET, MRI, and PAI for assessing the viability, location, and dwell time of transplanted stem cells.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References
1. Kim, M. H. et al. Evaluation of safety and efficacy of adipose-derived stem cells in rat myocardial infarction model using hexadecyl-4-[124I] iodobenzoate for cell tracking. Appl. Radiat. Isot. 108, 116–123 (2016).
2. Pei, Z. et al. A multimodality reporter gene for monitoring transplanted stem cells. Nucl. Med. Biol. 39, 813–820 (2012).
3. Qin, C. et al. Tyrosinase as a multifunctional reporter gene for photoacoustic/MRI/PET triple modality molecular imaging. Sci. Rep. 3, 1490 (2013).
4. Feng, H. et al. TYR as a multifunctional reporter gene regulated by the Tet-on system for multimodality imaging: an in vitro study. Sci. Rep. 5, 15502 (2015).
5. Wang, L. V. Multiscale photoacoustic microscopy and computed tomography. Nat. Photonics 3, 503–509 (2009).
6. Feng, H. et al. Imaging malignant melanoma with 18F-5-FPN. Eur. J. Nucl. Med. Mol. Imaging 43, 113–122 (2016).
7. Fan, Q. et al. Transferring biomarker into molecular probe: melanin nanoparticle as a naturally active platform for multimodality imaging. J. Am. Chem. Soc. 136, 15185–15194 (2014).
8. Zhang, X. Y. et al. Lentiviral vectors for sustained transgene expression in human bone marrow-derived stromal cells. Mol. Ther. 5, 555–565 (2002).
9. Annis, D. S. et al. Absence of vitamin K-dependent γ-carboxylation in human periostin extracted from fibrotic lung or secreted from a cell line engineered to optimize γ-carboxylation. PLoS ONE 10, e0135374 (2015).
10. DiVito, K. A., Trabosh, V. A., Chen, Y. S., Simbulan-Rosenthal, C. M. & Rosenthal, D. S. Inhibitor of differentiation-4 (Id4) stimulates pigmentation in melanoma leading to histiocyte infiltration. Exp. Dermatol. 24, 101–107 (2015).
11. Liao, N. et al. Poly (dopamine) coated superparamagnetic iron oxide nanocluster for noninvasive labeling, tracking, and targeted delivery of adipose tissue-derived stem cells. Sci. Rep. 6, 18746 (2016).
12. Enochs, W. S., Petherick, P., Bogdanova, A., Mohr, U. & Weissleder, R. Paramagnetic metal scavenging by melanin: MR imaging. Radiology 204, 417–423 (1997).
13. Paproski, R. J., Forbrich, A. E., Wachowicz, K., Hitt, M. M. & Zemp, R. J. Tyrosinase as a dual reporter gene for both photoacoustic and magnetic resonance imaging. Biomed. Opt. Express 2, 771–780 (2011).
14. Takimoto, Y. et al. Augmented expression of neuronal nitric oxide synthase in the atria parasympathetically decreases heart rate during acute myocardial infarction in rats. Circulation 105, 490–496 (2002).
15. Doyle, B. et al. Dynamic tracking during intracoronary injection of 18F-FDG-labeled progenitor cell therapy for acute myocardial infarction. J. Nucl. Med. 48, 1708–1714 (2007).
16. Long, Q. et al. MRI tracking of bone marrow mesenchymal stem cells labeled with ultra-small superparamagnetic iron oxide nanoparticles in a rat model of temporal lobe epilepsy. Neurosci. Lett. 606, 30–35 (2015).
17. Krauss, J. M. & Puliafito, C. A. Lasers in ophthalmology. Lasers Surg. Med. 17, 102–159 (1995).
18. Wang, C. et al. RGD-conjugated silica-coated gold nanorods on the surface of carbon nanotubes for targeted photoacoustic imaging of gastric cancer. Nanoscale Res. Lett. 9, 1–10 (2014).
19. Segers, V. F. & Lee, R. T. Stem-cell therapy for cardiac disease. Nature 451, 937–942 (2008).
20. Stamm, C. et al. Autologous bone-marrow stem-cell transplantation for myocardial regeneration. Lancet 361, 45–46 (2003).
21. Strauer, B. E. & Kornowski, R. Stem cell therapy in perspective. Circulation 107, 929–934 (2003).
22. Ray, P. Multimodality molecular imaging of disease progression in living subjects. J. Biosci. 36, 499–504 (2011).
23. Weissleder, R. et al. MR imaging and scintigraphy of gene expression through melanin induction. Radiology 204, 425–429 (1997).
24. Mallidi, S., Luke, G. P. & Emelianov, S. Photoacoustic imaging in cancer detection, diagnosis, and treatment guidance. Trends Biotechnol. 29, 213–221 (2011).
25. Viator, J. A. et al. A comparative study of photoacoustic and reflectance methods for determination of epidermal melanin content. J. Invest. Dermatol. 122, 1432–1439 (2004).
26. Xu, M. & Wang, L. V. Photoacoustic imaging in biomedicine. Rev. Sci. Instrum. 77, 041101 (2006).
27. Krumholz, A. et al. Photoacoustic microscopy of tyrosinase reporter gene in vivo. J. Biomed. Opt. 16, 080503 (2011).
28. Märk, J. et al. Development of tyrosinase-based reporter genes for preclinical photoacoustic imaging of mesenchymal stem cells. In SPIE BiOS 8943, 89433Z (2014).
29. Kothapalli, S. R. et al. Deep tissue photoacoustic imaging using a miniaturized 2-D capacitive micromachined ultrasonic transducer array. IEEE Trans. Biomed. Eng. 59, 1199–1204 (2012).
30. Ju, K. Y. et al. Bio-inspired, melanin-like nanoparticles as a highly efficient contrast agent for T1-weighted magnetic resonance imaging. Biomacromolecules 14, 3491–3497 (2013).
31. Drey, F. et al. Noninvasive in vivo tracking of mesenchymal stem cells and evaluation of cell therapeutic effects in a murine model using a clinical 3.0 T MRI. Cell Transplant. 22, 1971–1980 (2013).
32. Boulland, J. L. et al. Evaluation of intracellular labeling with micron-sized particles of iron oxide (MPIOs) as a general tool for in vitro and in vivo tracking of human stem and progenitor cells. Cell Transplant. 21, 1743–1759 (2012).
33. Chen, X. et al. Dynamic tracking of injected mesenchymal stem cells after myocardial infarction in rats: a serial 7T MRI study. Stem Cells Int. 2016, 4656539 (2016).
34. Amsalem, Y. et al. Iron-oxide labeling and outcome of transplanted mesenchymal stem cells in the infarcted myocardium. Circulation 116, I38–I45 (2007).
35. Kang, W. J. et al. Tissue distribution of 18F-FDG-labeled peripheral hematopoietic stem cells after intracoronary administration in patients with myocardial infarction. J. Nucl. Med. 47, 1295–1301 (2006).
## Acknowledgements
This work was supported by the National Natural Science Foundation of China (nos. 81371626 and 81630049) and the Clinical Research Physician Program of Tongji Medical College, Huazhong University of Science and Technology (no. 5001530008).
## Author information
### Conflict of interest
The authors declare that they have no conflict of interest.
Correspondence to Xiaoli Lan.
## Rights and permissions
Reprints and Permissions
## A. THE PREDICTOR NETWORK P
Assume that the alphabet contains possible characters . The (local) representation of is a binary -dimensional vector with exactly one non-zero component (at the -th position). has input units and output units. is called the "time-window" size. We insert default characters at the beginning of each file. The representation of the default character, , is the -dimensional zero-vector. The -th character of file (starting from the first default character) is called .
For all and all possible , receives as an input
where is the concatenation operator for vectors. produces as an output , a -dimensional output vector. Using back-propagation [10][4], is trained to minimize
(1)
(1) is minimal if always equals
(2)
the conditional expectation of , given . Due to the local character representation, this is equivalent to being equal to the conditional probability
(3)
for all and for all appropriate , where denotes the -th component of the vector .
In practical applications, the will not always sum up to 1. To obtain outputs satisfying the properties of a proper probability distribution, we normalize by defining
(4)
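The mathematical symbols of this section were lost in extraction, but the scheme is described in the prose: characters are coded locally (one-hot), the network input is the concatenation of the codes of the preceding time-window characters (with all-zero default characters for padding), and the raw outputs are rescaled so that they form a proper probability distribution. The sketch below reconstructs that scheme under the assumption that the normalization in (4) divides each output by the sum of all outputs; all names are ours, not the paper's:

```python
import numpy as np

def one_hot(index: int, k: int) -> np.ndarray:
    """Local representation: a k-dimensional vector with a single 1.
    The default character maps to the all-zero vector (index < 0)."""
    v = np.zeros(k)
    if index >= 0:
        v[index] = 1.0
    return v

def window_input(char_indices, t, m, k):
    """Concatenate the one-hot codes of the m characters preceding
    position t, after prepending m default characters to the file."""
    padded = [-1] * m + list(char_indices)
    return np.concatenate([one_hot(c, k) for c in padded[t:t + m]])

def normalize(outputs: np.ndarray) -> np.ndarray:
    """Assumed form of (4): rescale raw network outputs into a
    probability distribution over the k possible next characters."""
    return outputs / outputs.sum()

k, m = 4, 3                                    # alphabet size, window size
x = window_input([0, 2, 1, 3], t=2, m=m, k=k)  # m*k-dimensional network input
p = normalize(np.array([0.2, 0.5, 0.1, 0.4]))  # raw outputs -> probabilities
print(x.shape, float(p.sum()))
```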
Juergen Schmidhuber 2003-02-13
https://www.qb365.in/materials/stateboard/11th-maths-important-one-mark-question-paper-2-3937.html | " /> -->
#### Important 1mark -2
11th Standard
Reg.No. :
Maths
Use blue pen Only
Time : 00:15:00 Hrs
Total Marks : 25
Part A
25 x 1 = 25
1. The sum of the digits at the ten's place of all numbers formed with the help of 2, 4, 5, 7 taken all at a time is
(a) 432 (b) 108 (c) 36 (d) 18
2. In a plane there are 10 points, out of which 4 points are collinear; the number of triangles formed is
(a) 110 (b) 10C3 (c) 120 (d) 116
3. If 2nC3 : nC3 = 11 : 1, then n is
(a) 5 (b) 6 (c) 11 (d) 7
4. The product of r consecutive positive integers is divisible by
(a) r! (b) r!+1 (c) (r+1) (d) none of these
5. There are 10 points in a plane and 4 of them are collinear. The number of straight lines joining any two of them is
(a) 45 (b) 40 (c) 39 (d) 38
6. is:
(a) $\lfloor{n}(n+2)$ (b) (c) (d) none of these
7. If 100Cr = 100C3r, then r is:
(a) 24 (b) 25 (c) 20 (d) 50
8. How many words can be formed using all the letters of the word ANAND?
(a) 30 (b) 35 (c) 40 (d) 45
9. There are 10 lamps in a hall. Each one of them can be switched on independently. The number of ways in which the hall can be illuminated is
(a) 10² (b) 1023 (c) 2¹⁰ (d) 10!
10. The number of positive integral solutions of $x\times y\times z=30$ is
(a) 3 (b) 1 (c) 9 (d) 27
11. There are 15 points in a plane of which exactly 8 are collinear. The number of straight lines obtained by joining these points is
(a) 105 (b) 28 (c) 77 (d) 78
12. If nC10 = nC6, then nC2 is
(a) 16 (b) 4 (c) 120 (d) 240
13. The unit vector parallel to the resultant of the vectors $\hat{i}+\hat{j}-\hat{k}$ and $\hat{i}-2\hat{j}+\hat{k}$ is
(a) ${\hat{i}-\hat{j}+\hat{k}\over\sqrt{5}}$ (b) ${2\hat{i}+\hat{j}\over\sqrt{5}}$ (c) ${2\hat{i}-\hat{j}+\hat{k}\over\sqrt{5}}$ (d) ${2\hat{i}-\hat{j}\over\sqrt{5}}$
14. If ABCD is a parallelogram, then $\overrightarrow{AB}+\overrightarrow{AD}+\overrightarrow{CB}+\overrightarrow{CD}$ is equal to
(a) $2(\overrightarrow{AB}+\overrightarrow{AD})$ (b) $4\overrightarrow{AC}$ (c) $4\overrightarrow{BD}$ (d) $\overrightarrow{0}$
15. Two vertices of a triangle have position vectors $3\hat{i}+4\hat{j}-4\hat{k}$ and $2\hat{i}+3\hat{j}+4\hat{k}$. If the position vector of the centroid is $\hat{i}+2\hat{j}+3\hat{k}$, then the position vector of the third vertex is
(a) $-2\hat{i}-\hat{j}+9\hat{k}$ (b) $-2\hat{i}-\hat{j}-6\hat{k}$ (c) $2\hat{i}-\hat{j}+6\hat{k}$ (d) $-2\hat{i}+\hat{j}+6\hat{k}$
16. If $\overrightarrow{a}$ and $\overrightarrow{b}$ having same magnitude and angle between them is 60° and their scalar product is ${1\over2}$ then $|\overrightarrow{a}|$ is
(a)
2
(b)
3
(c)
7
(d)
1
17. Vectors $\overrightarrow{a}$ and $\overrightarrow{b}$ are inclined at an angle $\theta =120^o$ .If $|\overrightarrow{a}|=1,|\overrightarrow{b}|=2,$ then $[(\overrightarrow{a}+3\overrightarrow{b})\times (3\overrightarrow{a}-\overrightarrow{b})]^2$ is equal to
(a)
225
(b)
275
(c)
325
(d)
300
18. If $\overrightarrow{a}$ and $\overrightarrow{b}$ are two vectors of magnitude 2 and inclined at an angle 60° , then the angle between $\overrightarrow{a}$ and $\overrightarrow{a}+\overrightarrow{b}$ is
(a)
30°
(b)
60°
(c)
45°
(d)
90°
19. If the points whose position vectors $10\hat{i}+3\hat{j},12\hat{i}-5\hat{j}$ and $a\hat{i}+11\hat{j}$ are collinear then a is equal to
(a)
6
(b)
3
(c)
5
(d)
8
20. If $y={1\over a-z}$ ,then ${dz\over dy}$ is
(a)
$(a-z)^2$
(b)
-(z-a)2
(c)
(z+a)2
(d)
-(z+a)2
21. If f(x) = mx + c and f(0) = f'(0) = 1, then f(2) is
(a)
1
(b)
2
(c)
3
(d)
-3
22. If x=a sin $\theta$ and y= b cos $\theta$,then ${d^2y\over dx^2}$is
(a)
${a \over b^2}sec^2 \theta$
(b)
$-{b \over a}sec^2 \theta$
(c)
$-{b \over a^2}sec^3 \theta$
(d)
$-{b^2\over a^2}sec^3 \theta$
23. If f(x) = x + 2, then f '(f(x)) at x = 4 is
(a)
8
(b)
1
(c)
4
(d)
5
24. If g(x)=(x2+2x+3) f(x) and f(0)=5 and $lim_{x \rightarrow 0}{f(x)-5\over x}=4$,then g'(0) is
(a)
20
(b)
14
(c)
18
(d)
12
25. The number of points in R in which the function $f(x)=|x-1|+|x-3|+sin \ x$ is not differentiable, is
(a)
3
(b)
2
(c)
1
(d)
4
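The paper gives no worked solutions, but individual items can be sanity-checked. As one illustration (my own sketch, not part of the paper), item 12 follows from nC10 = nC6 forcing n = 10 + 6 = 16:

```python
from math import comb

# Item 12: C(n, 10) = C(n, 6) forces n = 10 + 6 = 16, since C(n, r) = C(n, n - r).
n = 16
assert comb(n, 10) == comb(n, 6)   # the defining condition holds

answer = comb(n, 2)                # the quantity asked for, nC2
print(answer)                      # 120, i.e. option (c)
```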
https://uprivateta.com/shuxuedaixiemat-150c-spring-2021-homework-1/
Abstract algebra is not an easy subject. Many students who find linear algebra or analysis (advanced calculus) straightforward still struggle with abstract algebra, usually because they have not found the right way to study it. UpriviateTA has a number of tutors who are very good at abstract algebra and can help you achieve a satisfying grade.
MAT 150C, Spring 2021, Homework 1
Due before 12:10 on Monday, April 5
Problem 1. Consider the cyclic group $G_{n}=\left\langle x \mid x^{n}=1\right\rangle$.
a) Describe all one-dimensional complex representations of $G_{n}$.
b) Prove that every complex representation of $G_{n}$ has a one-dimensional invariant subspace.
Problem 2. a) Prove that there is a two-dimensional representation of $G_{4}$ such that
$$x \mapsto\left(\begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right)$$
Problem 3. Consider the standard two-dimensional representation of the dihedral group $D_{n}$. For which $n$ is this an irreducible complex representation?
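For Problem 2, a quick numerical check (my own sketch, not part of the assignment) confirms that the proposed matrix has order 4, so sending $x$ to it respects the relation $x^4 = 1$:

```python
import numpy as np

# Candidate image of the generator x of G_4 = <x | x^4 = 1>.
M = np.array([[0, 1],
              [-1, 0]])

# For x -> M to define a representation of the cyclic group of order 4,
# M^4 must be the identity matrix.
assert np.array_equal(np.linalg.matrix_power(M, 2), -np.eye(2))  # M^2 = -I
assert np.array_equal(np.linalg.matrix_power(M, 4), np.eye(2))   # M^4 = I
print("x -> M defines a representation of G_4")
```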
http://www.newton.ac.uk/programmes/BSM/seminars/2012053114459.html
BSM
Seminar
Flat Space Holography as a limit of AdS/CFT
Bagchi, A (University of Edinburgh)
Thursday 31 May 2012, 14:45-15:30
Satellite
Abstract
We construct flat-space holography as a limit of usual AdS/CFT concentrating on the 3d bulk/2d boundary example. The asymptotic group of symmetries at null infinity of flat spacetimes is the infinite dimensional Bondi-Metzner-Sachs (BMS) group. We show how this emerges in the flat-space limit of AdS symmetries. The flat limit also induces a contraction on the CFT which in 2d reduces to the Galilean Conformal Algebra (GCA), studied previously in the context of non-relativistic systems. Quantum gravity in flat space would be described by the representations of the GCA. We comment on the relevant representations of the GCA and correlation functions. We also mention some intriguing results in ongoing work on the "flat" BTZ.
https://gis.stackexchange.com/questions/206575/delete-table-rows-based-on-two-field-values
# Delete Table Rows Based On Two Field Values
How can I delete a row in a table if value A from Field 1 does not match value B from Field 4 ?
I attempted to use code from a previous post as I figured it would be a start but didn't work.
import arcpy
shp = r"C:\path\to\your\shapefile.shp"
rows = arcpy.UpdateCursor(shp, "", "", "some_field")
for row in rows:
    if row.some_field == 2:
        rows.deleteRow(row)
del row
del rows
• Im a newbie with ArcPy so would the does not equal be written as so? import arcpy fc = r'C:\temp\test.gdb\tmp' with arcpy.da.UpdateCursor(fc, "Value_1","Value_4") as cursor: for row in cursor: if row[0] <> row[1]: cursor.deleteRow() – huskersila Aug 10 '16 at 23:37
• If that doesn't work use row[0] != row[1] – Bjorn Aug 11 '16 at 0:03
• Your second parameter needs to be a list of field names. Right now you have a field name referenced in both the second and third parameter. (fc, ["Value_1", "Value_2"]) – artwork21 Aug 11 '16 at 0:28
I would begin by creating a copy of the original feature class. Then it is simply a matter of checking if row[0] (i.e. Value_1 field) is equivalent to row[1] (Value_4 field). Try the following approach:
import arcpy
in_shp = r"C:\path\to\your\shapefile.shp"
shp_copy = r"C:\path\to\your\shapefile_v2.shp"
# Make a copy!
arcpy.CopyFeatures_management(in_shp, shp_copy)
# Check if "Value_1" is equivalent to "Value_4", if not delete row
with arcpy.da.UpdateCursor(shp_copy, ["Value_1", "Value_4"]) as cursor:
    for row in cursor:
        if row[0] != row[1]:
            cursor.deleteRow()
• I attempted script and added image of error. Does it matter that its a table and not a shapefile? – huskersila Aug 11 '16 at 2:07
• @huskersila You need to make sure to name the copy different than the original. – Aaron Aug 11 '16 at 2:15
• ty for the help .... I ended up getting one of the above scripts to work... ty again for your time! much appreciated!! import arcpy fc = r'C:\Users\FILEPATH' with arcpy.da.UpdateCursor(fc,["VALUE_1","VALUE_4"]) as cursor: for row in cursor: if row[0] != row[1]: cursor.deleteRow() – huskersila Aug 11 '16 at 2:24
• @huskersila if this has worked for you please mark as the answer. See What should I do when someone answers my question? – Midavalo Aug 11 '16 at 2:47
import arcpy
fc = r'C:\Users\FILEPATH'
with arcpy.da.UpdateCursor(fc, ["VALUE_1", "VALUE_4"]) as cursor:
    for row in cursor:
        if row[0] != row[1]:
            cursor.deleteRow()
http://math.stackexchange.com/questions/71031/a-question-about-the-additive-group-of-a-finitely-generated-integral-domain
# A question about the additive group of a finitely generated integral domain
Let $R=\mathbb{Z}[a_1,\ldots,a_n]$ be an integral domain finitely generated over $\mathbb{Z}$. Can the quotient group $(R,+)/(\mathbb{Z},+)$ contain a divisible element? By a "divisible element" I mean an element $e\ne 0$ such that for every positive integer $n$ there is an element f such that $e=nf$.
Do you mean $R$ modulo the image of $\mathbb{Z}$, or do you want to assume that $\mathbb{Z}$ injects into $R$, i.e., that $R$ has characteristic zero? – Keenan Kidwell Apr 7 '12 at 4:18
$R$ is residually finite, and so has no divisible elements since no finite ring or group $A$ has a divisible element: $nA = 0$ where $n = |A|$.
I assume you want to write $\mathbf{Z}[x_1,\ldots,x_n]/I=R$ for your ring $R$, where $I$ is an ideal in $\mathbf{Z}[x_1,\ldots,x_n]$.
Anyway, the additive group of $R$ is not finitely generated unless $\mathrm{dim} R=0$. So you don't have any structure theorem.
It's clear that $\mathbf{Z}$ does not have any divisible elements. ($e$ is not divisible by $e+1$.) But we don't need this.
Anyway, if $R=\mathbf{Z}/m\mathbf{Z}$ ($m\geq 1$), it is clear that your quotient is the zero ring. So you can suppose that $n>1$. Now, just look at the polynomial ring $\mathbf{Z}[x_1,\ldots,x_n]$. Clearly, this doesn't have any divisible elements. If you divide out by $\mathbf{Z}$ it still doesn't have any divisible elements.
So I suspect the answer to be no.
"the additive group of $R$ is not finitely generated unless $\dim(R)=0$." This is not true. The ring of integers in a number field is a finite $\mathbb{Z}$-module and has Krull dimension $1$. – Keenan Kidwell Apr 7 '12 at 4:17
http://math.stackexchange.com/questions/111124/how-do-i-determine-if-a-matrix-is-contained-in-another-matrix/114402
# How do I determine if a matrix is contained in another matrix?
Is there a clever way of determining if one matrix is contained within another larger matrix? Iterating over the larger matrix to check each item until potential matches show up is straightforward but gets slow for large matrices.
Example, a smaller matrix:
$\begin{pmatrix}4&3&2\\2&3&4\end{pmatrix}$
which is "within" (I'm probably not using the right terminology) this larger matrix:
$\begin{pmatrix}1&2&3&4&5\\5&\color{red}4&\color{red}3&\color{red}2&1\\1&\color{red}2&\color{red}3&\color{red}4&5\end{pmatrix}$
It feels like a problem that could have a smart mathematics trick for determining if this is the case - is there one?
What does "contained" mean? Are you looking for any submatrix, or only those that form a single contiguous block? – Yun William Yu Feb 20 '12 at 1:03
I updated the question with an example, please let me know what the proper terminology is. – Nick Feb 20 '12 at 1:18
I think I've heard the term "contiguous submatrix" used in that context. – Yun William Yu Feb 20 '12 at 1:25
That does make the problem easier; I think the arbitrary submatrix problem is a generalisation of the induced subgraph isomorphism problem, which is NP-complete. Luckily, you don't have to worry about that. – Yun William Yu Feb 20 '12 at 1:26
This might be helpful for you: en.wikipedia.org/wiki/String_searching_algorithm – Hauke Strasdat Feb 20 '12 at 1:34
From a signal processing viewpoint, this problem can be tackled using 2D Fourier transform:
You take the FFT of the "small" matrix (the template), the FFT of the large one, multiply the latter by the complex conjugate of the former, and perform an IFFT on the result. The point with maximum intensity is then a candidate location.
The advantage of this approach is that you need to compute FFT of the large matrix only once and re-use it for different templates.
I am not sure about the actual implementation, but you can find it using keywords "correlation convolution FFT".
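Following those keywords, here is one concrete sketch (my own, using NumPy, which the answer does not prescribe): minimize the sum of squared differences (SSD), computing the sliding-window cross-correlation term with 2-D FFTs. Plain correlation alone can tie at non-matching offsets, so the SSD form is the safer peak to look for. On the example matrices from the question it finds the match at row 1, column 1:

```python
import numpy as np

def find_submatrix(A, T):
    """Locate template T inside A by minimizing the sum of squared
    differences (SSD); the cross-correlation term is computed via FFT."""
    A = np.asarray(A, dtype=float)
    T = np.asarray(T, dtype=float)
    M, N = A.shape
    m, n = T.shape
    # Pad T (and a window of ones) to A's shape so that products of FFTs
    # give circular cross-correlations:
    #   corr[i, j] = sum_{u, v} A[i + u, j + v] * T[u, v]
    Tpad = np.zeros((M, N)); Tpad[:m, :n] = T
    ones = np.zeros((M, N)); ones[:m, :n] = 1.0
    FA = np.fft.fft2(A)
    corr = np.real(np.fft.ifft2(FA * np.conj(np.fft.fft2(Tpad))))
    win = np.real(np.fft.ifft2(np.fft.fft2(A ** 2) * np.conj(np.fft.fft2(ones))))
    # SSD(i, j) = window sum of (A - T)^2, expanded into three terms.
    ssd = win - 2.0 * corr + (T ** 2).sum()
    valid = ssd[:M - m + 1, :N - n + 1]   # offsets without wrap-around
    return tuple(int(v) for v in np.unravel_index(np.argmin(valid), valid.shape))

small = np.array([[4, 3, 2],
                  [2, 3, 4]])
large = np.array([[1, 2, 3, 4, 5],
                  [5, 4, 3, 2, 1],
                  [1, 2, 3, 4, 5]])
print(find_submatrix(large, small))  # (1, 1)
```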
I don't think mathematics can help much to reduce the time needed for a simple comparison starting from the top-left element, because mathematical procedures mostly operate on the whole matrix, and comparing a single element (in a simple search algorithm) is surely the fastest of all operations.
The only idea I had when I programmed string search some time ago applies when the search string is long but uses only a small set of different characters. Then it may save time to build the set of characters occurring in the search string and to scan the base string only in steps of the search string's length. Say your search string has length 80 and only 10 different characters (i.e. decimal digits). Then the set is [0,1,2,3,4,5,6,7,8,9], and you need to test only every 80th character of the base string (the one you search in) for membership in the set. If it is absent, you can step on; only if it is present must you decrease the step size for the next characters and "go more into detail".
But such a modification, adapted for use with numerical matrices, seems to me meaningful only in a few exotic cases...
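That stepping heuristic can be made concrete for strings (a sketch of my own; the function name and example are illustrative): any occurrence of a length-k pattern must cover one of the sampled indices k-1, 2k-1, ..., so it suffices to test those sampled characters for membership in the pattern's character set, falling back to detailed comparison only around hits:

```python
def sampled_search(base, pattern):
    """Yield start indices of `pattern` in `base`, probing only every
    k-th character of `base` (k = len(pattern)) against the pattern's
    character set before doing any detailed comparison."""
    k = len(pattern)
    alphabet = set(pattern)
    for i in range(k - 1, len(base), k):
        if base[i] in alphabet:
            # Any occurrence covering index i starts in [i - k + 1, i].
            lo = max(0, i - k + 1)
            hi = min(i, len(base) - k)
            for j in range(lo, hi + 1):
                if base[j:j + k] == pattern:
                    yield j

print(list(sampled_search("aaXYZbbXYZ", "XYZ")))  # [2, 7]
```

Each occurrence covers exactly one sampled index, so no match is reported twice.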
https://emacstil.com/til/2021/09/10/fold-current-heading/
# Org mode: How to fold current heading
Published on Sep 10, 2021, by Junji Zhi
If you are editing the current heading, and you are done with it, you want to fold it, and move on.
The simple solution is just:
(org-previous-visible-heading 1)
(org-cycle)
Or in shortcuts:
C-c C-p
TAB
https://mathoverflow.net/questions/193678/condition-for-a-certain-subset-being-a-subgroup
# Condition for a certain subset being a subgroup
For any finite group $G$ and $n$ a divisor of $|G|$, consider the following subset of elements of "co-order" dividing $n$:
$$G(n) = \{ g \in G \mid g^{|G|/n} = 1 \}$$
• By a classical theorem of Frobenius, $\frac{|G|}{n} \mid |G(n)|$.
• A conjecture of Frobenius (which was proved via the classification of finite simple groups) states that if there's equality in the above divisibility, $G(n)$ is a normal subgroup.
• $G(n)$ contains the identity and is closed to taking inverse and conjugation.
I'm interested in the following question:
For a prime $p$, when is $G(p)$ a subgroup of $G$?
• By Lagrange's Theorem, if $G(p)$ is a subgroup of $G$, then $|G(p)| \in \{ |G|, \frac{|G|}{p} \}$.
• When $G$ is abelian, $G(p)$ is a evidently a subgroup.
• When $G$ is a non-cyclic $p$-group, $G(p)=G$.
• In fact, as long as the $p$-Sylow subgroup of $G$ is not cyclic, $G(p)=G$.
The case $p=2$ is very elegant: composing the natural maps $G \to S_G \to \{\pm 1\}$, we obtain $G(2)$ as the kernel of this composition, hence it's a normal subgroup.
The case $p=3$ is false in general (as seen from the example $G=S_3$), and so I guess an additional condition should be imposed - perhaps $p$ should be the smallest prime dividing $|G|$.
When $G(p)$ is a subgroup, does is have an alternative characterization?
For example, for $p=2$, if the 2-Sylow subgroup of $G$ is cyclic, $G(2)=\langle g^2 \mid g\in G\rangle$.
• There is no "classification of finite groups". Certainly you just forgot the word "simple" there. – Johannes Hahn Jan 11 '15 at 20:02
You have noted (accurately) that $G(p) =G$ unless $G$ has a cyclic Sylow $p$-subgroup. However, it is also clear that when $G$ has a cyclic Sylow $p$-subgroup and $G(p) \neq G,$ the group $G$ has a normal $p$-complement. For otherwise, by Burnside's transfer theorem, there is a $p$-regular element $x \in N_{G}(P) \backslash C_{G}(P)$. Now we have $P = [P,x] \times C_{P}(x).$ Since $P$ is cyclic and $P \neq C_{P}(x),$ we must have $P = [P,x]$ and $C_{P}(x) = 1.$ But then $P \leq G^{\prime},$ a contradiction, since $[G:G(p)] =p$ and $G(p)$ is clearly a normal subgroup when it is a subgroup.
On the other hand, if $G$ has a normal $p$-complement and cyclic non-trivial Sylow $p$-subgroup, then clearly $G$ has a normal subgroup of index $p$, and that that subgroup is indeed $G(p)$.
The conclusion (when $|G|$ is divisible by $p$) is that $G(p)$ is a subgroup of $G$ if and only if one of the following cases occurs:
$G(p) = G$, or, $G$ has a normal $p$-complement and a cyclic Sylow $p$-subgroup.
Notice that when $p =2$, every finite group with a cyclic Sylow $2$-subgroup is known to have a normal $2$-complement, so the condition given for $p$ odd to guarantee that $[G:G(p)] = p$ is somewhat analogous to what you observed when $p =2$. It is indeed sufficient that $G$ should have a cyclic Sylow $p$-subgroup when $p$ is the smallest prime divisor of $|G|$ to ensure that $[G:G(p)] = p$. It is not, however, necessary.
• Thanks! A good answer, and using the normal p-complement I was able to solve my 2nd question (when $G(p)$ is a non-trivial subgroup, it is the subgroup generated by $p$'th powers). – Ofir Gorodetsky Jan 14 '15 at 20:58
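The failure for $p=3$ and $G=S_3$ mentioned in the question can be checked by brute force (a sketch of my own; SymPy is my choice of tooling, not part of the thread): $G(3)=\{g : g^{6/3}=1\}$ has 4 elements, and since $4 \nmid 6$, it cannot be a subgroup, while Frobenius's divisibility $2 \mid 4$ still holds:

```python
from sympy.combinatorics.named_groups import SymmetricGroup

G = SymmetricGroup(3)
n = 3                           # the prime p = 3 divides |G| = 6
exponent = G.order() // n       # |G|/n = 2

# G(3) = { g in G : g^(|G|/3) = 1 }
Gp = [g for g in G.elements if (g ** exponent).is_Identity]

print(len(Gp))               # 4: the identity plus the three transpositions
print(len(Gp) % exponent)    # 0: |G|/n = 2 divides |G(3)| = 4 (Frobenius)
print(G.order() % len(Gp))   # 2: 4 does not divide |G| = 6, so not a subgroup
```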
http://www.gradesaver.com/textbooks/science/physics/conceptual-physics-12th-edition/chapter-18-think-and-explain-page-352-353/59
# Chapter 18 - Think and Explain: 59
As discussed on page 345, Carnot's equation states that the ideal efficiency of a heat engine is $\frac{T_{hot} – T_{cold}}{T_{hot}}$. Mathematically speaking, the only way to achieve an efficiency of 100% is when the temperature of the cold reservoir $T_{cold}$ is at absolute zero, 0 K. This is easier to see when Carnot's equation is re-written as $efficiency_{ideal} = 1 - \frac{T_{cold}}{T_{hot}}$. The third law of thermodynamics prohibits the cold reservoir from having a temperature of absolute zero, so no heat engine can be 100% efficient.
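The rewritten form of Carnot's equation is easy to tabulate (a small sketch of mine; the temperatures are arbitrary examples in kelvin):

```python
def ideal_efficiency(t_hot, t_cold):
    """Carnot (ideal) efficiency, 1 - T_cold / T_hot, temperatures in kelvin."""
    return 1.0 - t_cold / t_hot

print(ideal_efficiency(400.0, 300.0))   # 0.25
print(ideal_efficiency(400.0, 100.0))   # 0.75
print(ideal_efficiency(400.0, 0.0))     # 1.0 -- reached only at absolute zero
```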
https://undergroundmathematics.org/thinking-about-geometry/what-type-of-triangle
Fluency exercise
## Problem
For each set of points determine what sort of triangle the three coordinates form (equilateral, isosceles, right-angled and scalene). How many of each type are there in the sets of coordinates given below?
1. $(9,-2)$, $(4,6)$, $(20,16)$.
2. $(3, 0)$, $(-1, 0)$, $(1,21)$.
3. $(1,1)$, $(3,2)$, $(2,4)$.
4. $(0, 3)$, $(0, 15)$, $(6\sqrt{3}, 9)$.
5. $(-2,-7)$, $(1, -1)$, $(5,7)$.
6. $(2, -3)$, $(-1, 1)$, $(-4, 8)$.
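One way to work through the sets (my own sketch, not part of the exercise) is to compare squared side lengths; a tolerance is needed because of the $6\sqrt{3}$ in set 4, and the sketch does not guard against degenerate (collinear) triples:

```python
import math

def classify(p, q, r):
    """Return a list of labels for the triangle with vertices p, q, r."""
    d2 = sorted(
        (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        for a, b in ((p, q), (q, r), (p, r))
    )
    eq = lambda x, y: math.isclose(x, y, rel_tol=1e-9, abs_tol=1e-9)
    labels = []
    if eq(d2[0], d2[1]) and eq(d2[1], d2[2]):
        labels.append("equilateral")
    elif eq(d2[0], d2[1]) or eq(d2[1], d2[2]):
        labels.append("isosceles")
    else:
        labels.append("scalene")
    # Pythagoras on squared lengths detects a right angle.
    if eq(d2[0] + d2[1], d2[2]):
        labels.append("right-angled")
    return labels

print(classify((0, 3), (0, 15), (6 * math.sqrt(3), 9)))  # ['equilateral']
print(classify((3, 0), (-1, 0), (1, 21)))                # ['isosceles']
print(classify((1, 1), (3, 2), (2, 4)))                  # ['isosceles', 'right-angled']
```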
http://openstudy.com/updates/51588ee0e4b07077e0c094c2
## anonymous 3 years ago Solve the IVP (Dirac's delta function)
1. anonymous
$y''+y=\delta(t-\pi)-\delta(t-2\pi)$ $y(0)=0$ $y'(0)=1$
2. anonymous
I got: $y=\frac{ e^{-\pi*s}-e^{-2*\pi*s}+1 }{ s^2+1 } = \sin(t) u(t-\pi)-\sin(t) u(t-2*\pi)+\sin(t)$ The book's answer is: $\sin(t) (0<t<\pi),$$0 (\pi<t<2\pi)$$-\sin(t) (t>2\pi)$ Is my answer correct, and how would I get it in the above form?
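One way to check the book's piecewise answer without redoing the transform (a numerical sketch of mine): the solution must be continuous, satisfy y(0)=0 and y'(0)=1, and its derivative must jump by +1 at t=π and by -1 at t=2π, matching the strengths of the two delta impulses:

```python
import math

def y(t):
    """The book's piecewise solution."""
    if t < math.pi:
        return math.sin(t)
    if t < 2 * math.pi:
        return 0.0
    return -math.sin(t)

h = 1e-6
d = lambda t: (y(t + h) - y(t)) / h        # one-sided derivative estimate

assert abs(y(0.0)) < 1e-9 and abs(d(0.0) - 1.0) < 1e-4       # y(0)=0, y'(0)=1
assert abs(y(math.pi - h) - y(math.pi + h)) < 1e-5           # continuous at pi
assert abs(y(2 * math.pi - h) - y(2 * math.pi + h)) < 1e-5   # continuous at 2*pi

jump_pi = d(math.pi + h) - (y(math.pi - h) - y(math.pi - 2 * h)) / h
jump_2pi = d(2 * math.pi + h) - (y(2 * math.pi - h) - y(2 * math.pi - 2 * h)) / h
print(round(jump_pi), round(jump_2pi))     # 1 -1, the two delta strengths
```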
https://blocks.readthedocs.io/en/latest/api/bricks.html
# Bricks
blocks.bricks.application(*args, **kwargs)[source]
Decorator for methods that apply a brick to inputs.
Parameters:
- *args, optional – The application method to wrap.
- **kwargs, optional – Attributes to attach to this application.
Notes
This decorator replaces application methods with Application instances. It also sets the attributes given as keyword arguments to the decorator.
Note that this decorator purposely does not wrap the original method using e.g. wraps() or update_wrapper(), since that would make the class impossible to pickle (see notes at Application).
Examples
>>> class Foo(Brick):
... @application(inputs=['x'], outputs=['y'])
... def apply(self, x):
... return x + 1
... @application
... def other_apply(self, x):
... return x - 1
>>> foo = Foo()
>>> Foo.apply.inputs
['x']
>>> foo.apply.outputs
['y']
>>> Foo.other_apply
<blocks.bricks.base.Application object at ...>
class blocks.bricks.Brick(name=None, children=None)[source]
Bases: blocks.graph.annotations.Annotation
A brick encapsulates Theano operations with parameters.
A brick goes through the following stages:
1. Construction: The call to __init__() constructs a Brick instance with a name and creates any child bricks as well.
2. Allocation of parameters:
1. Allocation configuration of children: The push_allocation_config() method configures any children of this block.
2. Allocation: The allocate() method allocates the shared Theano variables required for the parameters. Also allocates parameters for all children.
3. The following can be done in either order:
1. Application: By applying the brick to a set of Theano variables a part of the computational graph of the final model is constructed.
2. The initialization of parameters:
1. Initialization configuration of children: The push_initialization_config() method configures any children of this block.
2. Initialization: This sets the initial values of the parameters by a call to initialize(), which is needed to call the final compiled Theano function. Also initializes all children.
Not all stages need to be called explicitly. Step 3(a) will automatically allocate the parameters if needed. Similarly, step 3(b.2) and 2(b) will automatically perform steps 3(b.1) and 2(a) if needed. They only need to be called separately if greater control is required. The only two methods which always need to be called are an application method to construct the computational graph, and the initialize() method in order to initialize the parameters.
At each different stage, a brick might need a certain set of configuration settings. All of these settings can be passed to the __init__() constructor. However, by default many bricks support lazy initialization. This means that the configuration settings can be set later.
Note
Some arguments to __init__() are always required, even when lazy initialization is enabled. Other arguments must be given before calling allocate(), while others yet only need to be given in order to call initialize(). Always read the documentation of each brick carefully.
Lazy initialization can be turned off by setting Brick.lazy = False. In this case, there is no need to call initialize() manually anymore, but all the configuration must be passed to the __init__() method.
Parameters: name (str, optional) – The name of this brick. This can be used to filter the application of certain modifications by brick names. By default, the brick receives the name of its class (lowercased).
name
str – The name of this brick.
print_shapes
bool – False by default. If True, it logs the shapes of all the input and output variables, which can be useful for debugging.
parameters
list of TensorSharedVariable and None – After calling the allocate() method this attribute will be populated with the shared variables storing this brick’s parameters. Allows for None so that parameters can always be accessed at the same index, even if some parameters are only defined given a particular configuration.
children
list of bricks – The children of this brick.
allocated
bool – False if allocate() has not been called yet. True otherwise.
initialized
bool – False if initialize() has not been called yet. True otherwise.
allocation_config_pushed
bool – False if allocate() or push_allocation_config() hasn’t been called yet. True otherwise.
initialization_config_pushed
bool – False if initialize() or push_initialization_config() hasn’t been called yet. True otherwise.
Notes
To provide support for lazy initialization, apply the lazy() decorator to the __init__() method.
Brick implementations must call the __init__() constructor of their parent using super(BlockImplementation, self).__init__(**kwargs) at the beginning of the overriding __init__.
The methods _allocate() and _initialize() need to be overridden if the brick needs to allocate shared variables and initialize their values in order to function.
A brick can have any number of methods which apply the brick on Theano variables. These methods should be decorated with the application() decorator.
If a brick has children, they must be listed in the children attribute. Moreover, if the brick wants to control the configuration of its children, the _push_allocation_config() and _push_initialization_config() methods need to be overridden.
Examples
Most bricks have lazy initialization enabled.
>>> import theano
>>> from blocks.initialization import IsotropicGaussian, Constant
>>> from blocks.bricks import Linear
>>> linear = Linear(input_dim=5, output_dim=3,
... weights_init=IsotropicGaussian(),
... biases_init=Constant(0))
>>> x = theano.tensor.vector()
>>> linear.apply(x) # Calls linear.allocate() automatically
linear_apply_output
>>> linear.initialize() # Initializes the weight matrix
allocate()[source]
Allocate shared variables for parameters.
Based on the current configuration of this Brick create Theano shared variables to store the parameters. After allocation, parameters are accessible through the parameters attribute.
This method calls the allocate() method of all children first, allowing the _allocate() method to override the parameters of the children if needed.
Raises: ValueError – If the configuration of this brick is insufficient to determine the number of parameters or their dimensionality to be initialized.
Notes
This method sets the parameters attribute to an empty list. This is in order to ensure that calls to this method completely reset the parameters.
children
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
get_dims(names)[source]
Get list of dimensions for a set of input/output variables.
Parameters: names (list) – The variable names. dims – The dimensions of the sources. list
get_hierarchical_name(parameter, delimiter='/')[source]
Return a hierarchical name for a parameter.
Returns a path of the form brick1/brick2/brick3.parameter1. The delimiter is configurable.
Parameters: delimiter (str) – The delimiter used to separate brick names in the path.
get_unique_path()[source]
Returns unique path to this brick in the application graph.
initialize()[source]
Initialize parameters.
Initialize parameters, such as weight matrices and biases.
Notes
If the brick has not allocated its parameters yet, this method will call the allocate() method in order to do so.
parameters
print_shapes = False
push_allocation_config()[source]
Push the configuration for allocation to child bricks.
Bricks can configure their children, based on their own current configuration. This will be automatically done by a call to allocate(), but if you want to override the configuration of child bricks manually, then you can call this function manually.
push_initialization_config()[source]
Push the configuration for initialization to child bricks.
Bricks can configure their children, based on their own current configuration. This will be automatically done by a call to initialize(), but if you want to override the configuration of child bricks manually, then you can call this function manually.
blocks.bricks.lazy(allocation=None, initialization=None)[source]
Makes the initialization lazy.
This decorator allows the user to define positional arguments which will not be needed until the allocation or initialization stage of the brick. If these arguments are not passed, it will automatically replace them with a custom None object. It is assumed that the missing arguments can be set after initialization by setting attributes with the same name.
Parameters: allocation (list) – A list of argument names that are needed for allocation. initialization (list) – A list of argument names that are needed for initialization.
Examples
>>> class SomeBrick(Brick):
... @lazy(allocation=['a'], initialization=['b'])
... def __init__(self, a, b, c='c', d=None):
... print(a, b, c, d)
>>> brick = SomeBrick('a')
a NoneInitialization c None
>>> brick = SomeBrick(d='d', b='b')
NoneAllocation b c d
class blocks.bricks.BatchNormalization(**kwargs)[source]
Bases: blocks.bricks.interfaces.RNGMixin, blocks.bricks.interfaces.Feedforward
Normalizes activations, parameterizes a scale and shift.
Parameters:
- input_dim (int or tuple) – Shape of a single input example. It is assumed that a batch axis will be prepended to this.
- broadcastable (tuple, optional) – Tuple of the same length as input_dim which specifies which of the per-example axes should be averaged over to compute means and standard deviations. For example, in order to normalize over all spatial locations in a (batch_index, channels, height, width) image, pass (False, True, True). The batch axis is always averaged out.
- conserve_memory (bool, optional) – Use an implementation that stores less intermediate state and therefore uses less memory, at the expense of 5-10% speed. Default is True.
- epsilon (float, optional) – The stabilizing constant for the minibatch standard deviation computation (when the brick is run in training mode). Added to the variance inside the square root, as in the batch normalization paper.
- scale_init (object, optional) – Initialization object to use for the learned scaling parameter ($\gamma$ in [BN]). By default, uses constant initialization of 1.
- shift_init (object, optional) – Initialization object to use for the learned shift parameter ($\beta$ in [BN]). By default, uses constant initialization of 0.
- mean_only (bool, optional) – Perform “mean-only” batch normalization as described in [SK2016].
- learn_scale (bool, optional) – Whether to include a learned scale parameter ($\gamma$ in [BN]) in this brick. Default is True. Has no effect if mean_only is True (i.e. a scale parameter is never learned in mean-only mode).
- learn_shift (bool, optional) – Whether to include a learned shift parameter ($\beta$ in [BN]) in this brick. Default is True.
Notes
In order for trained models to behave sensibly immediately upon deserialization, by default, this brick runs in inference mode, using a population mean and population standard deviation (initialized to zeros and ones respectively) to normalize activations. It is expected that the user will adapt these during training in some fashion, independently of the training objective, e.g. by taking a moving average of minibatch-wise statistics.
In order to train with batch normalization, one must obtain a training graph by transforming the original inference graph. See apply_batch_normalization() for a routine to transform graphs, and batch_normalization() for a context manager that may enable shorter compile times (every instance of BatchNormalization is itself a context manager, entry into which causes applications to be in minibatch “training” mode, however it is usually more convenient to use batch_normalization() to enable this behaviour for all of your graph’s BatchNormalization bricks at once).
Note that training in inference mode should be avoided, as this brick introduces scales and shift parameters (tagged with the PARAMETER role) that, in the absence of batch normalization, usually makes things unstable. If you must do this, filter for and remove BATCH_NORM_SHIFT_PARAMETER and BATCH_NORM_SCALE_PARAMETER from the list of parameters you are training, and this brick should behave as a (somewhat expensive) no-op.
This Brick accepts scale_init and shift_init arguments but is not an instance of Initializable, and will therefore not receive pushed initialization config from any parent brick. In almost all cases, you will probably want to stick with the defaults (unit scale and zero offset), but you can explicitly pass one or both initializers to override this.
This has the necessary properties to be inserted into a blocks.bricks.conv.ConvolutionalSequence as-is, in which case the input_dim should be omitted at construction, to be inferred from the layer below.
[BN] (1, 2, 3, 4) Sergey Ioffe and Christian Szegedy. Batch normalization: accelerating deep network training by reducing internal covariate shift. ICML (2015), pp. 448-456.
[SK2016] Tim Salimans and Diederik P. Kingma. Weight normalization: a simple reparameterization to accelerate training of deep neural networks. arXiv 1602.07868.
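In training mode, the normalization this brick performs amounts to the following minibatch computation (a NumPy sketch of the math described above; the function and variable names are illustrative, not the Blocks API):

```python
import numpy as np

def batch_norm_train(x, gamma, beta, axes=(0,), epsilon=1e-4):
    # Normalize over the batch axis plus any per-example axes marked
    # True in `broadcastable`, then apply the learned scale and shift.
    mean = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + epsilon)  # epsilon stabilizes the sqrt
    return gamma * x_hat + beta

# broadcastable=(False, True, True) on a (batch, channels, height, width)
# input corresponds to averaging over axes (0, 2, 3):
x = np.random.randn(8, 3, 5, 5)
gamma = np.ones((1, 3, 1, 1))   # scale_init default (constant 1)
beta = np.zeros((1, 3, 1, 1))   # shift_init default (constant 0)
y = batch_norm_train(x, gamma, beta, axes=(0, 2, 3))
```

In inference mode the brick instead uses the stored population mean and standard deviation in place of the minibatch statistics.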
apply
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
image_size
normalization_axes
num_channels
num_output_channels
output_dim
class blocks.bricks.SpatialBatchNormalization(**kwargs)[source]
Bases: blocks.bricks.bn.BatchNormalization
Convenient subclass for batch normalization across spatial inputs.
Parameters: input_dim (int or tuple) – The input size of a single example. Must be length at least 2. It’s assumed that the first axis of this tuple is a “channels” axis, which should not be summed over, and all remaining dimensions are spatial dimensions.
Notes
See BatchNormalization for more details (and additional keyword arguments).
class blocks.bricks.BatchNormalizedMLP(**kwargs)[source]
Bases: blocks.bricks.sequences.MLP
Convenient subclass for building an MLP with batch normalization.
Parameters: conserve_memory (bool, optional, by keyword only) – See BatchNormalization. mean_only (bool, optional, by keyword only) – See BatchNormalization. learn_scale (bool, optional, by keyword only) – See BatchNormalization. learn_shift (bool, optional, by keyword only) – See BatchNormalization.
Notes
All other parameters are the same as MLP. Each activation brick is wrapped in a Sequence containing an appropriate BatchNormalization brick and the activation that follows it.
By default, the contained Linear bricks will not contain any biases, as they could be canceled out by the biases in the BatchNormalization bricks being added. Pass use_bias with a value of True if you really want this for some reason.
mean_only, learn_scale and learn_shift are pushed down to all created BatchNormalization bricks as allocation config.
conserve_memory
Conserve memory.
class blocks.bricks.Feedforward(name=None, children=None)[source]
Declares an interface for bricks with one input and one output.
Many bricks have just one input and just one output (activations, Linear, MLP). To make such bricks interchangeable in most contexts, they should share an interface for configuring their input and output dimensions. This brick declares such an interface.
input_dim
int – The input dimension of the brick.
output_dim
int – The output dimension of the brick.
class blocks.bricks.Initializable(**kwargs)[source]
Bases: blocks.bricks.interfaces.RNGMixin, blocks.bricks.base.Brick
Base class for bricks which push parameter initialization.
Many bricks will initialize children which perform a linear transformation, often with biases. This brick allows the weights and biases initialization to be configured in the parent brick and pushed down the hierarchy.
Parameters:
- weights_init (object) – A NdarrayInitialization instance which will be used to initialize the weight matrix. Required by initialize().
- biases_init (object, optional) – A NdarrayInitialization instance that will be used to initialize the biases. Required by initialize() when use_bias is True. Only supported by bricks for which has_biases is True.
- use_bias (bool, optional) – Whether to use a bias. Defaults to True. Required by initialize(). Only supported by bricks for which has_biases is True.
- rng (numpy.random.RandomState)
has_biases
bool – False if the brick does not support biases, and only has weights_init. For an example of this, see Bidirectional. If this is False, the brick does not support the arguments biases_init or use_bias.
has_biases = True
class blocks.bricks.LinearLike(**kwargs)[source]
Bases: blocks.bricks.interfaces.Initializable
Initializable subclass with logic for Linear-like classes.
Notes
Provides W and b properties that can be overridden in subclasses to implement pre-application transformations on the weights and biases. Application methods should refer to self.W and self.b rather than accessing the parameters list directly.
This assumes a layout of the parameters list with the weights coming first and biases (if use_bias is True) coming second.
W
b
class blocks.bricks.Random(theano_seed=None, **kwargs)[source]
A mixin class for Bricks which need Theano RNGs.
Parameters: theano_seed (int or list, optional) – Seed to use for a MRG_RandomStreams object.
seed_rng = <mtrand.RandomState object>
theano_rng
Returns Brick’s Theano RNG, or a default one.
The default seed can be set through blocks.config.
theano_seed
class blocks.bricks.Linear(**kwargs)[source]
Bases: blocks.bricks.interfaces.LinearLike, blocks.bricks.interfaces.Feedforward
A linear transformation with optional bias.
Brick which applies a linear (affine) transformation by multiplying the input with a weight matrix. By default, a bias term is added (see Initializable for information on disabling this).
Parameters: input_dim (int) – The dimension of the input. Required by allocate(). output_dim (int) – The dimension of the output. Required by allocate().
Notes
See Initializable for initialization parameters.
A linear transformation with bias is a matrix multiplication followed by a vector summation.
$f(\mathbf{x}) = \mathbf{W}\mathbf{x} + \mathbf{b}$
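As a plain NumPy sketch of this formula (illustrative names; the (input_dim, output_dim) weight layout used here is an assumption for the example, not a statement about the brick's internal storage):

```python
import numpy as np

input_dim, output_dim = 5, 3
W = np.random.randn(input_dim, output_dim)  # what weights_init would fill in
b = np.zeros(output_dim)                    # what biases_init would fill in

x = np.random.randn(input_dim)
y = x @ W + b   # the affine transformation f(x) = Wx + b from above
```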
apply
Apply the linear transformation.
Parameters: input (TensorVariable) – The input on which to apply the transformation. Returns: output (TensorVariable) – The transformed input plus optional bias.
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
class blocks.bricks.Bias(**kwargs)[source]
Bases: blocks.bricks.interfaces.Feedforward, blocks.bricks.interfaces.Initializable
Add a bias (i.e. sum with a vector).
apply
Apply the linear transformation.
Parameters: input (TensorVariable) – The input on which to apply the transformation. Returns: output (TensorVariable) – The transformed input plus optional bias.
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
input_dim
output_dim
class blocks.bricks.Maxout(**kwargs)[source]
Maxout pooling transformation.
A brick that does max pooling over groups of input units. If you use this code in a research project, please cite [GWFM13].
[GWFM13] Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio, Maxout networks, ICML (2013), pp. 1319-1327.
Parameters: num_pieces (int) – The size of the groups the maximum is taken over.
Notes
Maxout applies a set of linear transformations to a vector and selects for each output dimension the result with the highest value.
apply
Apply the maxout transformation.
Parameters: input (TensorVariable) – The input on which to apply the transformation. Returns: output (TensorVariable) – The transformed input.
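A NumPy sketch of the pooling described above, assuming the maximum is taken over groups of num_pieces consecutive units along the last axis (illustrative names, not the Blocks API):

```python
import numpy as np

def maxout(x, num_pieces):
    # Split the last axis into groups of `num_pieces` units and keep the
    # maximum of each group; the output dimension shrinks by that factor.
    *leading, d = x.shape
    assert d % num_pieces == 0
    return x.reshape(*leading, d // num_pieces, num_pieces).max(axis=-1)

x = np.array([[1., 5., 2., 0., -3., 4.]])
print(maxout(x, num_pieces=2))   # [[5. 2. 4.]]
```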
class blocks.bricks.LinearMaxout(**kwargs)[source]
Bases: blocks.bricks.interfaces.Initializable, blocks.bricks.interfaces.Feedforward
Maxout pooling following a linear transformation.
This code combines the Linear brick with a Maxout brick.
Parameters: input_dim (int) – The dimension of the input. Required by allocate(). output_dim (int) – The dimension of the output. Required by allocate(). num_pieces (int) – The number of linear functions. Required by allocate().
Notes
See Initializable for initialization parameters.
apply
Apply the linear transformation followed by maxout.
Parameters: input (TensorVariable) – The input on which to apply the transformations. Returns: output (TensorVariable) – The transformed input.
input_dim
class blocks.bricks.Identity(name=None, children=None)[source]
Bases: blocks.bricks.interfaces.Activation
Elementwise application of identity function.
apply
Apply the identity function element-wise.
Parameters: input (TensorVariable) – Theano variable to apply identity to, element-wise. Returns: output (TensorVariable) – The input with the activation function applied.
class blocks.bricks.Tanh(name=None, children=None)[source]
Bases: blocks.bricks.interfaces.Activation
Elementwise application of tanh function.
apply
Apply the tanh function element-wise.
Parameters: input (TensorVariable) – Theano variable to apply tanh to, element-wise. Returns: output (TensorVariable) – The input with the activation function applied.
class blocks.bricks.Logistic(name=None, children=None)[source]
Bases: blocks.bricks.interfaces.Activation
Elementwise application of logistic function.
apply
Apply the logistic function element-wise.
Parameters: input (TensorVariable) – Theano variable to apply logistic to, element-wise. Returns: output (TensorVariable) – The input with the activation function applied.
class blocks.bricks.Softplus(name=None, children=None)[source]
Bases: blocks.bricks.interfaces.Activation
Elementwise application of softplus function.
apply
Apply the softplus function element-wise.
Parameters: input (TensorVariable) – Theano variable to apply softplus to, element-wise. Returns: output (TensorVariable) – The input with the activation function applied.
class blocks.bricks.Rectifier(name=None, children=None)[source]
Bases: blocks.bricks.interfaces.Activation
Elementwise application of rectifier function.
apply
Apply the rectifier function element-wise.
Parameters: input (TensorVariable) – Theano variable to apply rectifier to, element-wise. Returns: output (TensorVariable) – The input with the activation function applied.
class blocks.bricks.LeakyRectifier(leak=0.01, **kwargs)[source]
Bases: blocks.bricks.interfaces.Activation
Elementwise application of leakyrectifier function.
apply
Apply the leakyrectifier function element-wise.
Parameters: input (TensorVariable) – Theano variable to apply leakyrectifier to, element-wise. Returns: output (TensorVariable) – The input with the activation function applied.
class blocks.bricks.Softmax(name=None, children=None)[source]
A softmax brick.
Works with 2-dimensional inputs only. If you need more, see NDimensionalSoftmax.
apply
Standard softmax.
Parameters: input_ (Variable) – A matrix, each row contains unnormalized log-probabilities of a distribution. Returns: output_ (Variable) – A matrix with probabilities in each row for each distribution from input_.
categorical_cross_entropy
Computationally stable cross-entropy for pre-softmax values.
Parameters: y (TensorVariable) – In the case of a matrix argument, each row represents a probability distribution. In the vector case, each element represents a distribution by specifying the position of 1 in a 1-hot vector. x (TensorVariable) – A matrix, each row contains unnormalized probabilities of a distribution. Returns: cost (TensorVariable) – A vector of cross-entropies between respective distributions from y and x.
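The numerical stability comes from the standard log-sum-exp trick; a NumPy sketch of the 1-hot (vector y) case, with illustrative names, not the Blocks API:

```python
import numpy as np

def categorical_cross_entropy(y, x):
    # Subtract the row-wise max before exponentiating so that exp()
    # never overflows, then read off -log p(correct class).
    z = x - x.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y]

x = np.array([[2.0, 1.0, 0.1]])      # unnormalized log-probabilities
print(categorical_cross_entropy(np.array([0]), x))
```

Even for very large pre-softmax values, where a naive softmax would overflow, this formulation stays finite.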
log_probabilities
Normalize log-probabilities.
Converts unnormalized log-probabilities (exponents of which do not sum to one) into actual log-probabilities (exponents of which sum to one).
Parameters: input_ (Variable) – A matrix, each row contains unnormalized log-probabilities of a distribution. Returns: output (Variable) – A matrix with normalized log-probabilities in each row for each distribution from input_.
class blocks.bricks.NDimensionalSoftmax(name=None, children=None)[source]
Bases: blocks.bricks.simple.Softmax
A wrapped brick class.
This brick was automatically constructed by wrapping Softmax with WithExtraDims.
BrickWrapper
For explanation of brick wrapping.
apply
Wraps the application method with reshapes.
Parameters: extra_ndim (int, optional) – The number of extra dimensions. Default is zero.
Softmax.apply()
For documentation of the wrapped application method.
apply_delegate()[source]
categorical_cross_entropy
Wraps the application method with reshapes.
Parameters: extra_ndim (int, optional) – The number of extra dimensions. Default is zero.
Softmax.categorical_cross_entropy()
For documentation of the wrapped application method.
categorical_cross_entropy_delegate()[source]
decorators = [<blocks.bricks.wrappers.WithExtraDims object>]
log_probabilities
Wraps the application method with reshapes.
Parameters: extra_ndim (int, optional) – The number of extra dimensions. Default is zero.
Softmax.log_probabilities()
For documentation of the wrapped application method.
log_probabilities_delegate()[source]
class blocks.bricks.Sequence(application_methods, **kwargs)[source]
A sequence of bricks.
This brick applies a sequence of bricks, assuming that their in- and outputs are compatible.
Parameters: application_methods (list) – List of BoundApplication or Brick to apply. For Bricks, the .apply method is used.
apply
apply_inputs()[source]
apply_outputs()[source]
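The left-to-right composition a Sequence performs can be sketched with plain functions (an illustration, not the Blocks API):

```python
from functools import reduce

def sequence(*application_methods):
    # Feed each method's output into the next, left to right.
    return lambda x: reduce(lambda acc, f: f(acc), application_methods, x)

double_then_increment = sequence(lambda x: 2 * x, lambda x: x + 1)
print(double_then_increment(10))   # 21
```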
class blocks.bricks.FeedforwardSequence(application_methods, **kwargs)[source]
Bases: blocks.bricks.sequences.Sequence, blocks.bricks.interfaces.Feedforward
A sequence where the first and last bricks are feedforward.
Parameters: application_methods (list) – List of BoundApplication to apply. The first and last application method should belong to a Feedforward brick.
input_dim
output_dim
class blocks.bricks.MLP(**kwargs)[source]
Bases: blocks.bricks.sequences.FeedforwardSequence, blocks.bricks.interfaces.Initializable
A simple multi-layer perceptron.
Parameters:
- activations (list of Brick, BoundApplication, or None) – A list of activations to apply after each linear transformation. Give None to not apply any activation. It is assumed that the application method to use is apply. Required for __init__().
- dims (list of ints) – A list of input dimensions, as well as the output dimension of the last layer. Required for allocate().
- prototype (Brick, optional) – The transformation prototype. A copy will be created for every activation. If not provided, an instance of Linear will be used.
Notes
See Initializable for initialization parameters.
Note that the weights_init, biases_init (as well as use_bias if set to a value other than the default of None) configurations will overwrite those of the layers each time the MLP is re-initialized. For more fine-grained control, push the configuration to the child layers manually before initialization.
>>> from blocks.bricks import Tanh
>>> from blocks.initialization import IsotropicGaussian, Constant
>>> mlp = MLP(activations=[Tanh(), None], dims=[30, 20, 10],
... weights_init=IsotropicGaussian(),
... biases_init=Constant(1))
>>> mlp.push_initialization_config() # Configure children
>>> mlp.children[0].weights_init = IsotropicGaussian(0.1)
>>> mlp.initialize()
input_dim
output_dim
class blocks.bricks.WithExtraDims[source]
Wraps a brick’s applications to handle inputs with extra dimensions.
A brick can often be reused even when data has more dimensions than in the default setting. An example is a situation when one wants to apply categorical_cross_entropy() to temporal data, that is, when an additional ‘time’ axis is prepended to both of its x and y inputs.
This wrapper adds reshapes required to use application methods of a brick with such data by merging the extra dimensions with the first non-extra one. Two key assumptions are made: that all inputs and outputs have the same number of extra dimensions and that these extra dimensions are equal throughout all inputs and outputs.
While this might be inconvenient, the wrapped brick does not try to guess the number of extra dimensions, but demands it as an argument. The considerations of simplicity and reliability motivated this design choice. Upon availability in Blocks of a mechanism to request the expected number of dimensions for an input of a brick, this can be reconsidered.
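The reshape trick can be sketched in NumPy (illustrative names): merge the extra_ndim leading axes into the first non-extra one, apply the matrix-only operation, then split the axes back out:

```python
import numpy as np

def with_extra_dims(apply_fn, x, extra_ndim):
    # Collapse the extra axes into the first non-extra axis, apply the
    # 2D-only function, then restore the original leading axes.
    lead = x.shape[:extra_ndim + 1]
    flat = x.reshape((-1,) + x.shape[extra_ndim + 1:])
    out = apply_fn(flat)
    return out.reshape(lead + out.shape[1:])

def softmax2d(m):                        # works on matrices only
    e = np.exp(m - m.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

x = np.random.randn(7, 4, 10)            # (time, batch, features)
y = with_extra_dims(softmax2d, x, extra_ndim=1)
```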
wrap(wrapped, namespace)[source]
Wrap an application of the base brick.
This method should be overridden to write into its namespace argument all required changes.
Parameters: mcs (type) – The metaclass. wrapped (Application) – The application to be wrapped. namespace (dict) – The namespace of the class being created.
class blocks.bricks.lookup.LookupTable(**kwargs)[source]
Bases: blocks.bricks.interfaces.Initializable, blocks.bricks.interfaces.Feedforward
Encapsulates representations of a range of integers.
This brick can be used to embed integers, e.g. word indices, into a vector space.
Parameters: length (int) – The size of the lookup table, or in other words, one plus the maximum index for which a representation is contained. dim (int) – The dimensionality of representations.
Notes
See Initializable for initialization parameters.
W
apply
Perform lookup.
Parameters: indices (TensorVariable) – The indices of interest. The dtype must be integer. Returns: output (TensorVariable) – Representations for the indices of the query. Has $k+1$ dimensions, where $k$ is the number of dimensions of the indices parameter. The last dimension stands for the representation element.
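The lookup itself amounts to integer indexing into the table; a NumPy sketch with illustrative names:

```python
import numpy as np

length, dim = 100, 8                  # table size and representation dim
W = np.random.randn(length, dim)      # the parameter this brick allocates

indices = np.array([[3, 17], [42, 0]])   # integer queries, any shape
output = W[indices]                       # fancy indexing performs the lookup
# output has one more dimension than indices: indices.shape + (dim,)
```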
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
has_bias = False
input_dim
output_dim
## Convolutional bricks
class blocks.bricks.conv.AveragePooling(**kwargs)[source]
Average pooling layer.
Parameters: include_padding (bool, optional) – When calculating an average, include zeros that are the result of zero padding added by the padding argument. A value of True is only accepted if ignore_border is also True. False by default.
Notes
For documentation on the remainder of the arguments to this class, see MaxPooling.
class blocks.bricks.conv.Convolutional(**kwargs)[source]
Bases: blocks.bricks.interfaces.LinearLike
Performs a 2D convolution.
Parameters:
- filter_size (tuple) – The height and width of the filters (also called kernels).
- num_filters (int) – Number of filters per channel.
- num_channels (int) – Number of input channels in the image. For the first layer this is normally 1 for grayscale images and 3 for color (RGB) images. For subsequent layers this is equal to the number of filters output by the previous convolutional layer. The filters are pooled over the channels.
- batch_size (int, optional) – Number of examples per batch. If given, this will be passed to the Theano convolution operator, possibly resulting in faster execution.
- image_size (tuple, optional) – The height and width of the input (image or feature map). If given, this will be passed to the Theano convolution operator, resulting in possibly faster execution times.
- step (tuple, optional) – The step (or stride) with which to slide the filters over the image. Defaults to (1, 1).
- border_mode ({'valid', 'full'}, optional) – The border mode to use, see scipy.signal.convolve2d() for details. Defaults to ‘valid’.
- tied_biases (bool) – Setting this to False will untie the biases, yielding a separate bias for every location at which the filter is applied. If True, the biases of every filter in this layer are shared amongst all applications of that filter. Defaults to True.
apply
Perform the convolution.
Parameters: input (TensorVariable) – A 4D tensor with the axes representing batch size, number of channels, image height, and image width. Returns: output (TensorVariable) – A 4D tensor of filtered images (feature maps) with dimensions representing batch size, number of filters, feature map height, and feature map width. The height and width of the feature map depend on the border mode. For ‘valid’ it is image_size - filter_size + 1 while for ‘full’ it is image_size + filter_size - 1.
static conv2d_impl(input, filters, input_shape=None, filter_shape=None, border_mode='valid', subsample=(1, 1), filter_flip=True, image_shape=None, filter_dilation=(1, 1), num_groups=1, unshared=False, **kwargs)[source]
This function will build the symbolic graph for convolving a mini-batch of a stack of 2D inputs with a set of 2D filters. The implementation is modelled after Convolutional Neural Networks (CNN).
Parameters: input (symbolic 4D tensor) – Mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns). See the optional parameter input_shape. filters (symbolic 4D or 6D tensor) – Set of filters used in CNN layer of shape (output channels, input channels, filter rows, filter columns) for normal convolution and (output channels, output rows, output columns, input channels, filter rows, filter columns) for unshared convolution. See the optional parameter filter_shape. input_shape (None, tuple/list of len 4 or 6 of int or Constant variable) – The shape of the input parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time. filter_shape (None, tuple/list of len 4 or 6 of int or Constant variable) – The shape of the filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time. border_mode (str, int or a tuple of two ints or pairs of ints) – Either of the following: 'valid': apply filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1 'full': apply filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1 'half': pad input with a symmetric border of filter rows // 2 rows and filter columns // 2 columns, then perform a valid convolution. For filters with an odd number of rows and columns, this leads to the output shape being equal to the input shape. int: pad input with a symmetric border of zeros of the given width, then perform a valid convolution. (int1, int2): (for 2D) pad input with a symmetric border of int1, int2, then perform a valid convolution. 
(int1, (int2, int3)) or ((int1, int2), int3): (for 2D) pad input with one symmetric border of int1 or int3, and one asymmetric border of (int2, int3) or (int1, int2). subsample (tuple of len 2) – Factor by which to subsample the output. Also called strides elsewhere. filter_flip (bool) – If True, will flip the filter rows and columns before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation. image_shape (None, tuple/list of len 4 of int or Constant variable) – Deprecated alias for input_shape. filter_dilation (tuple of len 2) – Factor by which to subsample (stride) the input. Also called dilation elsewhere. num_groups (int) – Divides the image, kernel and output tensors into num_groups separate groups, each of which carries out its convolutions separately. unshared (bool) – If True, an unshared or ‘locally connected’ convolution will be performed: a different filter will be used for each region of the input. kwargs – Any other keyword arguments are accepted for backwards compatibility, but will be ignored. Returns: Set of feature maps generated by the convolutional layer, of shape (batch size, output channels, output rows, output columns). Return type: symbolic 4D tensor
Notes
If cuDNN is available, it will be used on the GPU. Otherwise, the CorrMM convolution (“caffe-style convolution”) will be used.
This is only supported in Theano 0.8 or the development version until it is released.
The parameter filter_dilation is an implementation of dilated convolution.
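The border-mode arithmetic above can be checked with a small plain-Python sketch (not part of the library; the helper name is ours):

```python
def conv_output_size(input_size, filter_size, border_mode, stride=1):
    """Per-axis output size for the border modes described above.

    `border_mode` is 'valid', 'full', 'half', or an int amount of
    symmetric zero padding added before a valid convolution.
    """
    if border_mode == 'valid':
        pad = 0
    elif border_mode == 'full':
        pad = filter_size - 1
    elif border_mode == 'half':
        pad = filter_size // 2
    else:  # an int: symmetric zero padding of that width
        pad = border_mode
    # After padding, a valid convolution is performed.
    return (input_size + 2 * pad - filter_size) // stride + 1
```

For a 28-pixel axis and a 5-pixel filter this gives 24 ('valid'), 32 ('full') and 28 ('half', equal to the input size because the filter size is odd).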
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
static get_output_shape(image_shape, kernel_shape, border_mode, subsample, filter_dilation=None)[source]
This function computes the output shape of a convolution operation.
Parameters: image_shape (tuple of int (symbolic or numeric) corresponding to the input image shape) – Its four (or five) elements must correspond respectively to: batch size, number of input channels, height and width (and possibly depth) of the image. None where undefined. kernel_shape (tuple of int (symbolic or numeric) corresponding to the kernel shape) – For a normal convolution, its four (for 2D convolution) or five (for 3D convolution) elements must correspond respectively to: number of output channels, number of input channels, height and width (and possibly depth) of the kernel. For an unshared 2D convolution, its six elements must correspond to: number of output channels, height and width of the output, number of input channels, height and width of the kernel. None where undefined. border_mode (string, int (symbolic or numeric) or tuple of int (symbolic or numeric) or pairs of ints) – If it is a string, it must be ‘valid’, ‘half’ or ‘full’. If it is a tuple, its two (or three) elements respectively correspond to the padding on the height and width (and possibly depth) axes. For asymmetric padding, provide a pair of ints for each dimension. subsample (tuple of int (symbolic or numeric)) – Its two or three elements respectively correspond to the subsampling on the height and width (and possibly depth) axes. filter_dilation (tuple of int (symbolic or numeric)) – Its two or three elements correspond respectively to the dilation on the height and width axes. Note – The shape of the convolution output does not depend on the unshared or the num_groups parameters. Returns: output_shape – tuple of int corresponding to the output image shape. Its four elements must correspond respectively to: batch size, number of output channels, height and width of the image. None where undefined.
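The per-axis arithmetic behind get_output_shape can be sketched as follows (a simplification assuming symmetric numeric padding; the helper name is ours):

```python
def output_axis_length(in_size, kernel, pad, stride=1, dilation=1):
    # A dilated kernel covers (kernel - 1) * dilation + 1 input pixels.
    effective_kernel = (kernel - 1) * dilation + 1
    return (in_size + 2 * pad - effective_kernel) // stride + 1
```

For example, a length-7 axis with a 3-wide kernel yields 5 outputs; dilating the kernel by 2 shrinks that to 3, and padding by 1 with stride 2 yields 4.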
num_output_channels
class blocks.bricks.conv.ConvolutionalSequence(**kwargs)[source]
Bases: blocks.bricks.sequences.Sequence, blocks.bricks.interfaces.Initializable, blocks.bricks.interfaces.Feedforward
A sequence of convolutional (or pooling) operations.
Parameters: layers (list) – List of convolutional bricks (i.e. Convolutional, ConvolutionalActivation, or Pooling bricks), or application methods from such bricks. Activation bricks that operate elementwise can also be included. num_channels (int) – Number of input channels in the image. For the first layer this is normally 1 for grayscale images and 3 for color (RGB) images. For subsequent layers this is equal to the number of filters output by the previous convolutional layer. batch_size (int, optional) – Number of images in batch. If given, will be passed to theano’s convolution operator resulting in possibly faster execution. image_size (tuple, optional) – Width and height of the input (image/featuremap). If given, will be passed to theano’s convolution operator resulting in possibly faster execution. border_mode ('valid', 'full' or None, optional) – The border mode to use, see scipy.signal.convolve2d() for details. Unlike with Convolutional, this defaults to None, in which case no default value is pushed down to child bricks at allocation time. Child bricks will in this case need to rely on either a default border mode (usually valid) or one provided at construction and/or after construction (but before allocation). tied_biases (bool, optional) – Same meaning as in Convolutional. Defaults to None, in which case no value is pushed to child Convolutional bricks.
Notes
The passed convolutional operators should be ‘lazy’ constructed, that is, without specifying the batch_size, num_channels and image_size. The main feature of ConvolutionalSequence is that it will set the input dimensions of a layer to the output dimensions of the previous layer by the push_allocation_config() method.
The push behaviour of tied_biases mirrors that of use_bias or any initialization configuration: only an explicitly specified value is pushed down the hierarchy. border_mode also has this behaviour. The reason the border_mode parameter behaves the way it does is that pushing a single default border_mode makes it very difficult to have child bricks with different border modes. Normally, such things would be overridden after push_allocation_config(), but this is a particular hassle as the border mode affects the allocation parameters of every subsequent child brick in the sequence. Thus, only an explicitly specified border mode will be pushed down the hierarchy.
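As a rough illustration of what push_allocation_config propagates, here is a plain-Python sketch that chains channel counts and feature-map sizes through a sequence (an assumed simplification: 'valid' convolutions and non-overlapping pooling; not library code):

```python
def propagate_shapes(num_channels, image_size, layers):
    """Chain (channels, height, width) through conv/pool layers."""
    shapes = []
    h, w = image_size
    for kind, arg in layers:
        if kind == 'conv':        # arg = (num_filters, (fh, fw))
            num_filters, (fh, fw) = arg
            h, w = h - fh + 1, w - fw + 1
            num_channels = num_filters
        elif kind == 'pool':      # arg = (ph, pw)
            ph, pw = arg
            h, w = h // ph, w // pw
        shapes.append((num_channels, h, w))
    return shapes
```

A 1-channel 28x28 input through a 5x5 convolution with 20 filters followed by 2x2 pooling gives (20, 24, 24) and then (20, 12, 12): exactly the input dimensions the next layer would be configured with.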
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
class blocks.bricks.conv.ConvolutionalTranspose(**kwargs)[source]
Performs the transpose of a 2D convolution.
Parameters: num_filters (int) – Number of filters at the output of the transposed convolution, i.e. the number of channels in the corresponding convolution. num_channels (int) – Number of channels at the input of the transposed convolution, i.e. the number of output filters in the corresponding convolution. step (tuple, optional) – The step (or stride) of the corresponding convolution. Defaults to (1, 1). image_size (tuple, optional) – Image size of the input to the transposed convolution, i.e. the output of the corresponding convolution. Required for tied biases. Defaults to None. unused_edge (tuple, optional) – Tuple of pixels added to the inferred height and width of the output image, whose values would be ignored in the corresponding forward convolution. Must be such that 0 <= unused_edge[i] <= step[i]. Note that this parameter is ignored if original_image_size is specified in the constructor or manually set as an attribute. original_image_size (tuple, optional) – The height and width of the image that forms the output of the transpose operation, which is the input of the original (non-transposed) convolution. By default, this is inferred from image_size to be the size that has each pixel of the original image touched by at least one filter application in the original convolution. Degenerate cases with dropped border pixels (in the original convolution) are possible, and can be manually specified via this argument. See notes below.
Convolutional
For the documentation of other parameters.
Notes
By default, original_image_size is inferred from image_size as being the minimum size of image that could have produced this output. Let hanging[i] = original_image_size[i] - image_size[i] * step[i]. Any value of hanging[i] greater than filter_size[i] - step[i] will result in border pixels that are ignored by the original convolution. With this brick, any original_image_size such that filter_size[i] - step[i] < hanging[i] < filter_size[i] for all i can be validly specified. However, no value will be output by the transposed convolution itself for these extra hanging border pixels, and they will be determined entirely by the bias.
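The default inference of original_image_size described above (the minimum input size that could have produced this output through a valid strided convolution) can be written down directly; this is an illustrative sketch, not the brick's code:

```python
def infer_original_image_size(image_size, filter_size, step):
    # Minimum size whose valid convolution with the given filter and
    # step yields `image_size` outputs: (out - 1) * step + filter.
    return tuple((o - 1) * s + f
                 for o, f, s in zip(image_size, filter_size, step))
```

For a 4x4 output with a 3x3 filter and step (2, 2) this infers a 9x9 original image, and indeed (9 - 3) // 2 + 1 recovers 4.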
conv2d_impl(input_, W, input_shape, subsample, border_mode, filter_shape)[source]
This function will build the symbolic graph for convolving a mini-batch of a stack of 2D inputs with a set of 2D filters. The implementation is modelled after Convolutional Neural Networks (CNN).
Parameters: input (symbolic 4D tensor) – Mini-batch of feature map stacks, of shape (batch size, input channels, input rows, input columns). See the optional parameter input_shape. filters (symbolic 4D or 6D tensor) – Set of filters used in CNN layer of shape (output channels, input channels, filter rows, filter columns) for normal convolution and (output channels, output rows, output columns, input channels, filter rows, filter columns) for unshared convolution. See the optional parameter filter_shape. input_shape (None, tuple/list of len 4 or 6 of int or Constant variable) – The shape of the input parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time. filter_shape (None, tuple/list of len 4 or 6 of int or Constant variable) – The shape of the filters parameter. Optional, possibly used to choose an optimal implementation. You can give None for any element of the list to specify that this element is not known at compile time. border_mode (str, int or a tuple of two ints or pairs of ints) – Either of the following: 'valid': apply filter wherever it completely overlaps with the input. Generates output of shape: input shape - filter shape + 1 'full': apply filter wherever it partly overlaps with the input. Generates output of shape: input shape + filter shape - 1 'half': pad input with a symmetric border of filter rows // 2 rows and filter columns // 2 columns, then perform a valid convolution. For filters with an odd number of rows and columns, this leads to the output shape being equal to the input shape. int: pad input with a symmetric border of zeros of the given width, then perform a valid convolution. (int1, int2): (for 2D) pad input with a symmetric border of int1, int2, then perform a valid convolution. 
(int1, (int2, int3)) or ((int1, int2), int3): (for 2D) pad input with one symmetric border of int1 or int3, and one asymmetric border of (int2, int3) or (int1, int2). subsample (tuple of len 2) – Factor by which to subsample the output. Also called strides elsewhere. filter_flip (bool) – If True, will flip the filter rows and columns before sliding them over the input. This operation is normally referred to as a convolution, and this is the default. If False, the filters are not flipped and the operation is referred to as a cross-correlation. image_shape (None, tuple/list of len 4 of int or Constant variable) – Deprecated alias for input_shape. filter_dilation (tuple of len 2) – Factor by which to subsample (stride) the input. Also called dilation elsewhere. num_groups (int) – Divides the image, kernel and output tensors into num_groups separate groups, each of which carries out its convolutions separately. unshared (bool) – If True, an unshared or ‘locally connected’ convolution will be performed: a different filter will be used for each region of the input. kwargs – Any other keyword arguments are accepted for backwards compatibility, but will be ignored. Returns: Set of feature maps generated by the convolutional layer, of shape (batch size, output channels, output rows, output columns). Return type: symbolic 4D tensor
Notes
If cuDNN is available, it will be used on the GPU. Otherwise, the CorrMM convolution (“caffe-style convolution”) will be used.
This is only supported in Theano 0.8 or the development version until it is released.
The parameter filter_dilation is an implementation of dilated convolution.
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
original_image_size
class blocks.bricks.conv.Flattener(name=None, children=None)[source]
Flattens the input.
It may be used to pass multidimensional objects like images or feature maps of convolutional bricks into bricks which allow only two dimensional input (batch, features) like MLP.
apply
class blocks.bricks.conv.MaxPooling(**kwargs)[source]
Max pooling layer.
Parameters: pooling_size (tuple) – The height and width of the pooling region i.e. this is the factor by which your input’s last two dimensions will be downscaled. step (tuple, optional) – The vertical and horizontal shift (stride) between pooling regions. By default this is equal to pooling_size. Setting this to a lower number results in overlapping pooling regions. input_dim (tuple, optional) – A tuple of integers representing the shape of the input. The last two dimensions will be used to calculate the output dimension. padding (tuple, optional) – A tuple of integers representing the vertical and horizontal zero-padding to be applied to each of the top and bottom (vertical) and left and right (horizontal) edges. For example, an argument of (4, 3) will apply 4 pixels of padding to the top edge, 4 pixels of padding to the bottom edge, and 3 pixels each for the left and right edge. By default, no padding is performed. ignore_border (bool, optional) – Whether or not to do partial downsampling based on borders where the extent of the pooling region reaches beyond the edge of the image. If True, a (5, 5) image with (2, 2) pooling regions and (2, 2) step will be downsampled to shape (2, 2), otherwise it will be downsampled to (3, 3). True by default.
Notes
Warning
As of this writing, setting ignore_border to False with a step not equal to the pooling size will force Theano to perform pooling computations on the CPU rather than the GPU, even if you have specified a GPU as your computation device. Additionally, Theano will only use cuDNN (if available) for pooling computations with ignore_border set to True. You can ensure that the entire input is captured by at least one pool by using the padding argument to add zero padding prior to pooling being performed.
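The (5, 5)-image example in the ignore_border description follows from the pooling arithmetic, sketched here in plain Python (helper name ours):

```python
import math

def pool_output_size(in_size, pool, step, pad=0, ignore_border=True):
    # With ignore_border=True, partial pooling regions at the edge are
    # dropped (floor); with False, they still produce an output (ceil).
    span = in_size + 2 * pad - pool
    if ignore_border:
        return span // step + 1
    return math.ceil(span / step) + 1
```

A length-5 axis with pooling size 2 and step 2 yields 2 outputs with ignore_border=True and 3 with ignore_border=False, matching the example above.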
class blocks.bricks.conv.Pooling(**kwargs)[source]
Bases: blocks.bricks.interfaces.Initializable, blocks.bricks.interfaces.Feedforward
Base Brick for pooling operations.
This should generally not be instantiated directly; see MaxPooling.
apply
Apply the pooling (subsampling) transformation.
Parameters: input (TensorVariable) – A tensor with dimension greater than or equal to 2. The last two dimensions will be downsampled. For example, with images this means that the last two dimensions should represent the height and width of your image. Returns: output – A tensor with the same number of dimensions as input, but with the last two dimensions downsampled. Return type: TensorVariable
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
image_size
num_channels
num_output_channels
## Routing bricks
class blocks.bricks.parallel.Distribute(**kwargs)[source]
Transform an input and add it to other inputs.
This brick is designed for the following scenario: one has a group of variables and another separate variable, and one needs to somehow distribute information from the latter across the former. We call that “to distribute a variable across other variables”, and refer to the separate variable as “the source” and to the variables from the group as “the targets”.
Given a prototype brick, a Parallel brick makes several copies of it (each with its own parameters). At application time the copies are applied to the source and the transformation results are added to the targets (in the literal sense).
>>> from theano import tensor
>>> from blocks.initialization import Constant
>>> x = tensor.matrix('x')
>>> y = tensor.matrix('y')
>>> z = tensor.matrix('z')
>>> distribute = Distribute(target_names=['x', 'y'], source_name='z',
... target_dims=[2, 3], source_dim=3,
... weights_init=Constant(2))
>>> distribute.initialize()
>>> new_x, new_y = distribute.apply(x=x, y=y, z=z)
>>> new_x.eval({x: [[2, 2]], z: [[1, 1, 1]]})
array([[ 8., 8.]]...
>>> new_y.eval({y: [[1, 1, 1]], z: [[1, 1, 1]]})
array([[ 7., 7., 7.]]...
Parameters: target_names (list) – The names of the targets. source_name (str) – The name of the source. target_dims (list) – A list of target dimensions, corresponding to target_names. source_dim (int) – The dimension of the source input. prototype (Feedforward, optional) – The transformation prototype. A copy will be created for every input. By default a linear transformation is used.
target_dims
list
source_dim
int
Notes
See Initializable for initialization parameters.
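The doctest values above can be reproduced by hand: with every weight equal to the constant 2, each target element simply receives its own value plus 2 * sum(z). A plain-Python re-derivation (not library code):

```python
def distribute_constant(targets, source, weight=2.0):
    """Each target element gets `target + weight * sum(source)` when the
    weight matrices are filled with a single constant."""
    s = weight * sum(source)
    return [[t + s for t in target] for target in targets]
```

This reproduces the 8s and 7s in the example: 2 + 2*3 = 8 for x and 1 + 2*3 = 7 for y.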
apply
Distribute the source across the targets.
Parameters: **kwargs (dict) – The source and the target variables. output – The new target variables. list
apply_inputs()[source]
apply_outputs()[source]
class blocks.bricks.parallel.Fork(**kwargs)[source]
Several outputs from one input by applying similar transformations.
Given a prototype brick, a Fork brick makes several copies of it (each with its own parameters). At the application time the copies are applied to the input to produce different outputs.
A typical usecase for this brick is to produce inputs for gates of gated recurrent bricks, such as GatedRecurrent.
>>> from theano import tensor
>>> from blocks.initialization import Constant
>>> x = tensor.matrix('x')
>>> fork = Fork(output_names=['y', 'z'],
... input_dim=2, output_dims=[3, 4],
... weights_init=Constant(2), biases_init=Constant(1))
>>> fork.initialize()
>>> y, z = fork.apply(x)
>>> y.eval({x: [[1, 1]]})
array([[ 5., 5., 5.]]...
>>> z.eval({x: [[1, 1]]})
array([[ 5., 5., 5., 5.]]...
Parameters: output_names (list of str) – Names of the outputs to produce. input_dim (int) – The input dimension. prototype (Feedforward, optional) – The transformation prototype. A copy will be created for every input. By default an affine transformation is used.
input_dim
int – The input dimension.
output_dims
list – The output dimensions as a list of integers, corresponding to output_names.
apply
apply_outputs()[source]
class blocks.bricks.parallel.Merge(**kwargs)[source]
Merges several variables by applying a transformation and summing.
Parameters: input_names (list) – The input names. input_dims (list) – The dictionary of input dimensions, keys are input names, values are dimensions. output_dim (int) – The output dimension of the merged variables. prototype (Feedforward, optional) – A transformation prototype. A copy will be created for every input. If None, a linear transformation is used. child_prefix (str, optional) – A prefix for children names. By default “transform” is used.
Warning: if you want to have a bias you can pass a Linear brick as a prototype, but this will result in several redundant biases. It is a better idea to use merge.children[0].use_bias = True.
input_names
list – The input names.
input_dims
list – List of input dimensions corresponding to input_names.
output_dim
int – The output dimension.
Examples
>>> from theano import tensor
>>> from blocks.initialization import Constant
>>> a = tensor.matrix('a')
>>> b = tensor.matrix('b')
>>> merge = Merge(input_names=['a', 'b'], input_dims=[3, 4],
... output_dim=2, weights_init=Constant(1.))
>>> merge.initialize()
>>> c = merge.apply(a=a, b=b)
>>> c.eval({a: [[1, 1, 1]], b: [[2, 2, 2, 2]]})
array([[ 11., 11.]]...
apply
apply_inputs()[source]
class blocks.bricks.parallel.Parallel(**kwargs)[source]
Bases: blocks.bricks.interfaces.Initializable
Apply similar transformations to several inputs.
Given a prototype brick, a Parallel brick makes several copies of it (each with its own parameters). At the application time every copy is applied to the respective input.
>>> from theano import tensor
>>> from blocks.initialization import Constant
>>> x, y = tensor.matrix('x'), tensor.matrix('y')
>>> parallel = Parallel(
... prototype=Linear(use_bias=False),
... input_names=['x', 'y'], input_dims=[2, 3], output_dims=[4, 5],
... weights_init=Constant(2))
>>> parallel.initialize()
>>> new_x, new_y = parallel.apply(x=x, y=y)
>>> new_x.eval({x: [[1, 1]]})
array([[ 4., 4., 4., 4.]]...
>>> new_y.eval({y: [[1, 1, 1]]})
array([[ 6., 6., 6., 6., 6.]]...
Parameters: input_names (list) – The input names. input_dims (list) – List of input dimensions, given in the same order as input_names. output_dims (list) – List of output dimensions. prototype (Feedforward) – The transformation prototype. A copy will be created for every input. child_prefix (str, optional) – The prefix for children names. By default “transform” is used.
input_names
list – The input names.
input_dims
list – Input dimensions.
output_dims
list – Output dimensions.
Notes
See Initializable for initialization parameters.
apply
apply_inputs()[source]
apply_outputs()[source]
## Recurrent bricks
### Recurrent architectures
class blocks.bricks.recurrent.architectures.GatedRecurrent(**kwargs)[source]
Bases: blocks.bricks.recurrent.base.BaseRecurrent, blocks.bricks.interfaces.Initializable
Gated recurrent neural network.
Gated recurrent neural network (GRNN) as introduced in [CvMG14]. Every unit of a GRNN is equipped with update and reset gates that facilitate better gradient propagation.
Parameters: dim (int) – The dimension of the hidden state. activation (Brick or None) – The brick to apply as activation. If None a Tanh brick is used. gate_activation (Brick or None) – The brick to apply as activation for gates. If None a Logistic brick is used.
Notes
See Initializable for initialization parameters.
[CvMG14] Kyunghyun Cho, Bart van Merriënboer, Çağlar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio, Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation, EMNLP (2014), pp. 1724-1734.
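To make the gating mechanism concrete, here is a scalar (dim = 1) sketch of one transition. The weights are illustrative scalars, not the brick's parameters, and the exact gating convention (which side of the convex combination the update gate multiplies) may differ from the library's:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(state, inp, gate_inputs, w_state, w_gates):
    """One gated recurrent transition for dim = 1.

    `gate_inputs` holds the update- and reset-gate pre-activations and
    `inp` the candidate pre-activation, mirroring the apply() arguments.
    """
    update_in, reset_in = gate_inputs
    update = sigmoid(update_in + w_gates[0] * state)
    reset = sigmoid(reset_in + w_gates[1] * state)
    candidate = math.tanh(inp + w_state * (reset * state))
    # Convex combination of the old state and the candidate state.
    return update * state + (1.0 - update) * candidate
```

Because the next state is a convex combination of the previous state and a bounded candidate, gradients propagate through the update gate rather than through repeated squashing, which is the benefit the paragraph above refers to.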
apply
Apply the gated recurrent transition.
Parameters: states (TensorVariable) – The 2 dimensional matrix of current states in the shape (batch_size, dim). Required for one_step usage. inputs (TensorVariable) – The 2 dimensional matrix of inputs in the shape (batch_size, dim). gate_inputs (TensorVariable) – The 2 dimensional matrix of inputs to the gates in the shape (batch_size, 2 * dim). mask (TensorVariable) – A 1D binary array in the shape (batch,) which is 1 if there is data available, 0 if not. Assumed to be all ones if not given. Returns: output – Next states of the network. Return type: TensorVariable
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
initial_states
state_to_gates
state_to_state
class blocks.bricks.recurrent.architectures.LSTM(**kwargs)[source]
Bases: blocks.bricks.recurrent.base.BaseRecurrent, blocks.bricks.interfaces.Initializable
Long Short Term Memory.
Every unit of an LSTM is equipped with input, forget and output gates. This implementation is based on code by Mohammad Pezeshki that implements the architecture used in [GSS03] and [Grav13]. It aims to do as many computations in parallel as possible and expects the last dimension of the input to be four times the output dimension.
Unlike a vanilla LSTM as described in [HS97], this model has peephole connections from the cells to the gates. The output gates receive information about the cells at the current time step, while the other gates only receive information about the cells at the previous time step. All ‘peephole’ weight matrices are diagonal.
[GSS03] Gers, Felix A., Nicol N. Schraudolph, and Jürgen Schmidhuber, Learning precise timing with LSTM recurrent networks, Journal of Machine Learning Research 3 (2003), pp. 115-143.
[Grav13] (1, 2) Graves, Alex, Generating sequences with recurrent neural networks, arXiv preprint arXiv:1308.0850 (2013).
[HS97] Sepp Hochreiter, and Jürgen Schmidhuber, Long Short-Term Memory, Neural Computation 9(8) (1997), pp. 1735-1780.
Parameters: dim (int) – The dimension of the hidden state. activation (Brick, optional) – The activation function. The default and by far the most popular is Tanh. gate_activation (Brick or None) – The brick to apply as activation for gates (input/output/forget). If None a Logistic brick is used.
Notes
See Initializable for initialization parameters.
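To make the peephole description concrete, here is a scalar (dim = 1) sketch of one transition under the conventions above. The single recurrent weight and the three scalar peephole weights are illustrative assumptions (the peephole matrices being diagonal is what makes scalars sufficient here):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(state, cell, inputs, w_state, peepholes):
    """One LSTM transition for dim = 1.  `inputs` must be 4 * dim wide:
    pre-activations for the input gate, forget gate, cell candidate and
    output gate, in that order."""
    in_gate_x, forget_x, cell_x, out_x = inputs
    in_gate = sigmoid(in_gate_x + w_state * state + peepholes[0] * cell)
    forget = sigmoid(forget_x + w_state * state + peepholes[1] * cell)
    next_cell = forget * cell + in_gate * math.tanh(cell_x + w_state * state)
    # The output gate peeks at the *current* cell; the others peeked at
    # the previous one, as described above.
    out_gate = sigmoid(out_x + w_state * state + peepholes[2] * next_cell)
    next_state = out_gate * math.tanh(next_cell)
    return next_state, next_cell
```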
apply
Apply the Long Short Term Memory transition.
Parameters: states (TensorVariable) – The 2 dimensional matrix of current states in the shape (batch_size, features). Required for one_step usage. cells (TensorVariable) – The 2 dimensional matrix of current cells in the shape (batch_size, features). Required for one_step usage. inputs (TensorVariable) – The 2 dimensional matrix of inputs in the shape (batch_size, features * 4). The inputs need to be four times the dimension of the LSTM brick to ensure each of the four gates receives a different transformation of the input. See [Grav13] equations 7 to 10 for more details. The inputs are split in this order: input gates, forget gates, cells and output gates. mask (TensorVariable) – A 1D binary array in the shape (batch,) which is 1 if there is data available, 0 if not. Assumed to be all ones if not given. Returns: states (TensorVariable) – Next states of the network. cells (TensorVariable) – Next cell activations of the network.
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
initial_states
class blocks.bricks.recurrent.architectures.SimpleRecurrent(**kwargs)[source]
Bases: blocks.bricks.recurrent.base.BaseRecurrent, blocks.bricks.interfaces.Initializable
The most well-known recurrent transition: a matrix multiplication, optionally followed by a non-linearity.
Parameters: dim (int) – The dimension of the hidden state activation (Brick) – The brick to apply as activation.
Notes
See Initializable for initialization parameters.
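The transition amounts to next = activation(states @ W + inputs), with masked batch rows carrying their previous state through. A plain-Python sketch (lists standing in for Theano tensors; Tanh is an assumed activation):

```python
import math

def simple_recurrent_step(states, inputs, W, mask=None):
    """One simple recurrent transition over a batch of state vectors."""
    nxt = []
    for b, (h, x) in enumerate(zip(states, inputs)):
        new_h = [math.tanh(sum(hj * W[j][i] for j, hj in enumerate(h)) + x[i])
                 for i in range(len(h))]
        if mask is not None and mask[b] == 0:
            new_h = h          # no data for this batch element: keep state
        nxt.append(new_h)
    return nxt
```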
W
apply
Apply the simple transition.
Parameters: inputs (TensorVariable) – The 2D inputs, in the shape (batch, features). states (TensorVariable) – The 2D states, in the shape (batch, features). mask (TensorVariable) – A 1D binary array in the shape (batch,) which is 1 if there is data available, 0 if not. Assumed to be all ones if not given.
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
initial_states
### Helper bricks for recurrent networks
class blocks.bricks.recurrent.misc.Bidirectional(**kwargs)[source]
Bases: blocks.bricks.interfaces.Initializable
Bidirectional network.
A bidirectional network is a combination of forward and backward recurrent networks which process the input sequence in opposite orders.
Parameters: prototype (instance of BaseRecurrent) – A prototype brick from which the forward and backward bricks are cloned.
Notes
See Initializable for initialization parameters.
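The combination can be sketched as: run the step function forwards over the sequence and over the reversed sequence, re-reverse the backward outputs so time steps line up, and concatenate per step. A simplified sketch over scalar states (not library code):

```python
def bidirectional_apply(step, init, sequence):
    """Concatenate forward and (aligned) backward outputs per time step."""
    def scan(seq):
        h, outs = init, []
        for x in seq:
            h = step(h, x)
            outs.append(h)
        return outs
    forward = scan(sequence)
    backward = scan(sequence[::-1])[::-1]   # re-reverse to align in time
    return [[f, b] for f, b in zip(forward, backward)]
```

With a running-sum step this makes the alignment visible: each output pairs the prefix sum with the suffix sum at the same position.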
apply
Applies forward and backward networks and concatenates outputs.
apply_delegate()[source]
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
has_bias = False
class blocks.bricks.recurrent.misc.RecurrentStack(transitions, fork_prototype=None, states_name='states', skip_connections=False, **kwargs)[source]
Bases: blocks.bricks.recurrent.base.BaseRecurrent, blocks.bricks.interfaces.Initializable
Stack of recurrent networks.
Builds a stack of recurrent layers from a supplied list of BaseRecurrent objects. Each object must have a sequences, contexts, states and outputs parameters to its apply method, such as the ones required by the recurrent decorator from blocks.bricks.recurrent.
In Blocks, each brick can have an apply method, and this method has attributes that list the names of the arguments that can be passed to it and the names of the outputs it returns. The attributes of the apply method of this class are made by concatenating the attributes of the apply methods of each of the transitions from which the stack is made. In order to avoid conflicts, the names of the arguments appearing in the states and outputs attributes of the apply method of each layer are renamed. The names of the bottom layer are used as-is, and a suffix of the form ‘#<n>’ is added to the names from other layers, where ‘<n>’ is the number of the layer, starting from 1 for the first layer above the bottom.
The contexts of all layers are merged into a single list of unique names, and no suffix is added. Different layers with the same context name will receive the same value.
The names that appear in sequences are treated in the same way as the names of states and outputs if skip_connections is “True”. The only exception is the “mask” element that may appear in the sequences attribute of all layers; no suffix is added to it, and all layers will receive the same mask value. If you set skip_connections to False then only the arguments of the sequences from the bottom layer will appear in the sequences attribute of the apply method of this class. When using this class with skip_connections set to “True”, you can supply all inputs to all layers using a single fork which is created with output_names set to the apply.sequences attribute of this class. For example, SequenceGenerator will create such a fork.
Whether or not skip_connections is set, each layer above the bottom also receives an input (values to its sequences arguments) from a fork of the state of the layer below it. Not to be confused with the external fork discussed in the previous paragraph. It is assumed that all states attributes have a “states” argument name (this can be configured with states_name parameter.) The output argument with this name is forked and then added to all the elements appearing in the sequences of the next layer (except for “mask”.) If skip_connections is False then this fork has a bias by default. This allows direct usage of this class with input supplied only to the first layer. But if you do supply inputs to all layers (by setting skip_connections to “True”) then by default there is no bias and the external fork you use to supply the inputs should have its own separate bias.
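The renaming scheme above can be summarised with a small sketch mirroring the description (the class's actual static methods are suffix and split_suffix; this is an illustrative reconstruction, not their source):

```python
def suffix(name, level):
    # Bottom-layer names and the shared "mask" are used as-is; names of
    # higher layers get a '#<n>' suffix, n starting from 1.
    if name == 'mask' or level == 0:
        return name
    return '{}#{}'.format(name, level)

def split_suffix(name):
    # Inverse: recover the base name and the layer number.
    base, sep, level = name.rpartition('#')
    if sep and level.isdigit():
        return base, int(level)
    return name, 0
```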
Parameters: transitions (list) – List of recurrent units to use in each layer. Each derived from BaseRecurrent Note: A suffix with layer number is added to transitions’ names. fork_prototype (FeedForward, optional) – A prototype for the transformation applied to states_name from the states of each layer. The transformation is used when the states_name argument from the outputs of one layer is used as input to the sequences of the next layer. By default it Linear transformation is used, with bias if skip_connections is “False”. If you supply your own prototype you have to enable/disable bias depending on the value of skip_connections. states_name (string) – In a stack of RNN the state of each layer is used as input to the next. The states_name identify the argument of the states and outputs attributes of each layer that should be used for this task. By default the argument is called “states”. To be more precise, this is the name of the argument in the outputs attribute of the apply method of each transition (layer.) It is used, via fork, as the sequences (input) of the next layer. The same element should also appear in the states attribute of the apply method. skip_connections (bool) – By default False. When true, the sequences of all layers are add to the sequences of the apply of this class. When false only the sequences of the bottom layer appear in the sequences of the apply of this class. In this case the default fork used internally between layers has a bias (see fork_prototype.) An external code can inspect the sequences attribute of the apply method of this class to decide which arguments it need (and in what order.) With skip_connections you can control what is exposed to the externl code. If it is false then the external code is expected to supply inputs only to the bottom layer and if it is true then the external code is expected to supply inputs to all layers. 
There is just one small problem: the external inputs to the layers above the bottom layer are added to a fork of the state of the layer below. As a result the outputs of two forks are added together, which is problematic if both have a bias. It is assumed that the external fork has a bias, and therefore by default the internal fork has no bias when skip_connections is True.
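The bias bookkeeping can be illustrated in plain Python with a one-dimensional toy (this is not Blocks code; `fork`, the weights, and the inputs are all hypothetical stand-ins for the Linear/Fork transformations):

```python
# Toy illustration: a "fork" here is just an affine map w * x + b. A
# non-bottom layer receives the sum of two forks: one from the state of the
# layer below, and (when skip_connections is True) one from the external input.

def fork(x, w, b=0.0):
    """A one-dimensional stand-in for a Linear/Fork transformation."""
    return w * x + b

state_below, external_in = 0.5, 2.0

# skip_connections=False: only the internal fork exists, so it owns the bias.
internal_only = fork(state_below, w=1.5, b=0.1)

# skip_connections=True: both forks feed the layer. Two biases would sum into
# one redundant parameter, so the internal fork drops its bias and the
# external fork keeps one.
combined = fork(state_below, w=1.5) + fork(external_in, w=0.8, b=0.1)
```

The sum of two biases is indistinguishable from a single bias during training, which is why only one of the two forks should carry one.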
Notes
See BaseRecurrent for more initialization parameters.
apply
Apply the stack of transitions.
Parameters: low_memory (bool) – Use the slow, but also memory-efficient, implementation of this code. *args (TensorVariable, optional) – Positional arguments in the order in which they appear in self.apply.sequences, followed by self.apply.contexts. **kwargs (TensorVariable) – Named arguments defined in self.apply.sequences, self.apply.states or self.apply.contexts. outputs – The outputs of all transitions as defined in self.apply.outputs. list of TensorVariable
See the docstring of this class for the arguments appearing in the lists self.apply.sequences, self.apply.states, self.apply.contexts. See recurrent() for all other parameters, such as iterate and return_initial_states; note that reverse is currently not implemented.
do_apply(*args, **kwargs)[source]
Apply the stack of transitions.
This is the undecorated implementation of the apply method. A method with an @apply decoration should call this method with iterate=True to indicate that the iteration over all steps should be done internally by this method. A method with a @recurrent decoration should pass iterate=False (or leave it unset) to indicate that the iteration over all steps is done externally.
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
initial_states
low_memory_apply
normal_inputs(level)[source]
static split_suffix(name)[source]
static suffix(name, level)[source]
static suffixes(names, level)[source]
### Base definitions for recurrent bricks¶
class blocks.bricks.recurrent.base.BaseRecurrent(name=None, children=None)[source]
Base class for brick with recurrent application method.
has_bias = False
initial_states
Return initial states for an application call.
Default implementation assumes that the recurrent application method is called apply. It fetches the state names from apply.states and returns a zero matrix for each of them.
SimpleRecurrent, LSTM and GatedRecurrent override this method with trainable initial states initialized with zeros.
Parameters: batch_size (int) – The batch size. *args – The positional arguments of the application call. **kwargs – The keyword arguments of the application call.
initial_states_outputs()[source]
blocks.bricks.recurrent.base.recurrent(*args, **kwargs)[source]
Wraps an apply method to allow its iterative application.
This decorator allows you to implement only one step of a recurrent network and enjoy applying it to sequences for free. The idea behind it is that in its most general form the information flow of an RNN can be described as follows: depending on the context and driven by input sequences, the RNN updates its states and produces output sequences.
Given a method describing one step of an RNN and a specification which of its inputs are the elements of the input sequence, which are the states and which are the contexts, this decorator returns an application method which implements the whole RNN loop. The returned application method also has additional parameters, see documentation of the recurrent_apply inner function below.
Parameters: sequences (list of strs) – Specifies which of the arguments are elements of input sequences. states (list of strs) – Specifies which of the arguments are the states. contexts (list of strs) – Specifies which of the arguments are the contexts. outputs (list of strs) – Names of the outputs. The outputs whose names match those in the states parameter are interpreted as next step states. recurrent_apply – The new application method that applies the RNN to sequences. Application
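The idea of wrapping a one-step function into a full-sequence loop can be sketched in plain Python (illustrative only; `iterate_over` and its signature are hypothetical, not the Blocks API, and the real decorator handles Theano variables, masks and multiple states):

```python
# A minimal sketch of the idea behind the recurrent() decorator: wrap a
# one-step transition function so it can be applied to a whole sequence.

def iterate_over(step):
    """Turn step(input_, state) -> state into a full-sequence loop."""
    def apply(sequence, initial_state, return_states=False):
        state, states = initial_state, []
        for element in sequence:
            state = step(element, state)
            states.append(state)
        # Mirrors the choice between the final state and all states.
        return states if return_states else state
    return apply

@iterate_over
def accumulate(input_, state):
    # A trivial "RNN": the state is a running sum of the inputs.
    return state + input_

final = accumulate([1, 2, 3], initial_state=0)             # 6
trajectory = accumulate([1, 2, 3], 0, return_states=True)  # [1, 3, 6]
```

Only the one-step body (`accumulate`) has to be written by hand; the loop over the sequence comes for free from the decorator.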
## Attention bricks¶
This module defines the interface of attention mechanisms and a few concrete implementations. For a gentle introduction and usage examples see the tutorial TODO.
An attention mechanism decides which part of the input to pay attention to. It is typically used as a component of a recurrent network, though one can imagine it used in other settings as well. When the input is big and has a certain structure, for instance when it is a sequence or an image, an attention mechanism can be applied to extract only the information which is relevant for the network in its current state.
For the purpose of documentation clarity, we fix the following terminology in this file:
• network is the network, typically a recurrent one, which uses the attention mechanism.
• The network has states. Using this word in plural might seem weird, but some recurrent networks like LSTM do have several states.
• The big structured input, to which the attention mechanism is applied, is called the attended. When it has variable structure, e.g. a sequence of variable length, there might be a mask associated with it.
• The information extracted by the attention from the attended is called glimpse, more specifically glimpses because there might be a few pieces of this information.
Using this terminology, the attention mechanism computes glimpses given the states of the network and the attended.
An example: in the machine translation network from [BCB] the attended is a sequence of so-called annotations, that is states of a bidirectional network that was driven by word embeddings of the source sentence. The attention mechanism assigns weights to the annotations. The weighted sum of the annotations is further used by the translation network to predict the next word of the generated translation. The weights and the weighted sum are the glimpses. A generalized attention mechanism for this paper is represented here as SequenceContentAttention.
class blocks.bricks.attention.AbstractAttention(**kwargs)[source]
The common interface for attention bricks.
First, see the module-level docstring for terminology.
A generic attention mechanism functions as follows. Its inputs are the states of the network and the attended. Given these two, it produces so-called glimpses, that is, it extracts information from the attended which is necessary for the network in its current states.
For computational reasons we separate the process described above into two stages:
1. The preprocessing stage, preprocess(), includes computations that do not involve the state. These can often be performed in advance. The outcome of this stage is called preprocessed_attended.
2. The main stage, take_glimpses(), includes all the rest.
When an attention mechanism is applied sequentially, some glimpses from the previous step might be necessary to compute the new ones. A typical example for that is when the focus position from the previous step is required. In such cases take_glimpses() should specify such need in its interface (its docstring explains how to do that). In addition initial_glimpses() should specify some sensible initialization for the glimpses to be carried over.
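The payoff of the two-stage split can be sketched in plain Python (a toy, not Blocks code; the transformations and the glimpse rule below are invented for illustration):

```python
# Sketch of the preprocess / take_glimpses split: the state-independent work
# is done once and cached, then reused at every sequential step.

attended = [1.0, 2.0, 3.0]

def preprocess(attended):
    # State-independent work, done once in advance
    # (standing in for e.g. a linear map of each element).
    return [2.0 * a for a in attended]

def take_glimpses(state, preprocessed_attended):
    # State-dependent work, done at every step: here, just pick the element
    # whose preprocessed value is closest to the current state.
    return min(preprocessed_attended, key=lambda p: abs(p - state))

cached = preprocess(attended)  # computed once, in advance
glimpses = [take_glimpses(s, cached) for s in (1.5, 3.9, 6.0)]
```

When the mechanism is applied over many steps, preprocessing the attended once instead of at every step is what makes the separation worthwhile.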
Todo
Only single attended is currently allowed.
preprocess() and initial_glimpses() might end up needing masks, which are currently not provided for them.
Parameters: state_names (list) – The names of the network states. state_dims (list) – The state dimensions corresponding to state_names. attended_dim (int) – The dimension of the attended.
state_names
list
state_dims
list
attended_dim
int
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
initial_glimpses(batch_size, attended)[source]
Return sensible initial values for carried over glimpses.
Parameters: batch_size (int or Variable) – The batch size. attended (Variable) – The attended. initial_glimpses – The initial values for the requested glimpses. These might simply consist of zeros or be somehow extracted from the attended. list of Variable
preprocess
Perform the preprocessing of the attended.
Stage 1 of the attention mechanism, see AbstractAttention docstring for an explanation of stages. The default implementation simply returns attended.
Parameters: attended (Variable) – The attended. preprocessed_attended – The preprocessed attended. Variable
take_glimpses(attended, preprocessed_attended=None, attended_mask=None, **kwargs)[source]
Extract glimpses from the attended given the current states.
Stage 2 of the attention mechanism, see AbstractAttention for an explanation of stages. If preprocessed_attended is not given, should trigger the stage 1.
This application method must declare its inputs and outputs. The glimpses to be carried over are identified by their presence in both the inputs and outputs lists. The attended must be the first input, the preprocessed attended must be the second one.
Parameters: attended (Variable) – The attended. preprocessed_attended (Variable, optional) – The preprocessed attended computed by preprocess(). When not given, preprocess() should be called. attended_mask (Variable, optional) – The mask for the attended. This is required in the case of padded structured output, e.g. when a number of sequences are forced to be of the same length. The mask identifies the positions of the attended that actually contain information. **kwargs (dict) – Includes the states and the glimpses to be carried over from the previous step when the attention mechanism is applied sequentially.
class blocks.bricks.attention.AbstractAttentionRecurrent(name=None, children=None)[source]
The interface for attention-equipped recurrent transitions.
When a recurrent network is equipped with an attention mechanism its transition typically consists of two steps: (1) the glimpses are taken by the attention mechanism and (2) the next states are computed using the current states and the glimpses. It is required for certain use cases (such as the sequence generator) that apart from a do-it-all recurrent application method, interfaces for the first and the second step of the transition are provided.
apply(**kwargs)[source]
Compute next states taking glimpses on the way.
compute_states(**kwargs)[source]
Compute next states given current states and glimpses.
take_glimpses(**kwargs)[source]
Compute glimpses given the current states.
class blocks.bricks.attention.AttentionRecurrent(transition, attention, distribute=None, add_contexts=True, attended_name=None, attended_mask_name=None, **kwargs)[source]
Bases: blocks.bricks.attention.AbstractAttentionRecurrent, blocks.bricks.interfaces.Initializable
Combines an attention mechanism and a recurrent transition.
This brick equips a recurrent transition with an attention mechanism. In order to do this, two more contexts are added: one to be attended and a mask for it. It is also possible to use the contexts of the given recurrent transition for these purposes and not add any new ones, see the add_contexts parameter.
At the beginning of each step attention mechanism produces glimpses; these glimpses together with the current states are used to compute the next state and finish the transition. In some cases glimpses from the previous steps are also necessary for the attention mechanism, e.g. in order to focus on an area close to the one from the previous step. This is also supported: such glimpses become states of the new transition.
To let the user control the way glimpses are used, this brick also takes a “distribute” brick as parameter that distributes the information from glimpses across the sequential inputs of the wrapped recurrent transition.
Parameters: transition (BaseRecurrent) – The recurrent transition. attention (Brick) – The attention mechanism. distribute (Brick, optional) – Distributes the information from glimpses across the input sequences of the transition. By default a Distribute is used, and those inputs containing the "mask" substring in their name are not affected. add_contexts (bool, optional) – If True, new contexts for the attended and the attended mask are added to this transition, otherwise existing contexts of the wrapped transition are used. True by default. attended_name (str) – The name of the attended context. If None, "attended" or the first context of the recurrent transition is used, depending on the value of the add_contexts flag. attended_mask_name (str) – The name of the mask for the attended context. If None, "attended_mask" or the second context of the recurrent transition is used, depending on the value of the add_contexts flag.
Notes
See Initializable for initialization parameters.
Wrapping your recurrent brick with this class makes all the states mandatory. If you feel this is a limitation for you, try to make it better! This restriction does not apply to sequences and contexts: those keep being as optional as they were for your brick.
Those coming to Blocks from Groundhog might recognize that this is a RecurrentLayerWithSearch, but on steroids :)
apply
Process a sequence, attending the attended context at every step.
Preprocesses the attended context and runs do_apply(). See do_apply() documentation for further information.
apply_contexts()[source]
apply_delegate()[source]
compute_states
Compute current states when glimpses have already been computed.
Combines an application of the distribute brick that alters the sequential inputs of the wrapped transition, and an application of the wrapped transition. All unknown keyword arguments go to the wrapped transition.
Parameters: **kwargs – Should contain everything that self.transition needs and, in addition, the current glimpses. current_states – Current states computed by self.transition. list of TensorVariable
compute_states_outputs()[source]
do_apply
Process a sequence attending the attended context every step.
In addition to the original sequence this method also requires its preprocessed version, the one computed by the preprocess method of the attention mechanism. Unknown keyword arguments are passed to the wrapped transition.
Parameters: **kwargs – Should contain current inputs, previous step states, contexts, the preprocessed attended context, previous step glimpses. outputs – The current step states and glimpses. list of TensorVariable
do_apply_contexts()[source]
do_apply_outputs()[source]
do_apply_sequences()[source]
do_apply_states()[source]
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
initial_states
initial_states_outputs()[source]
take_glimpses
Compute glimpses with the attention mechanism.
A thin wrapper over self.attention.take_glimpses: takes care of choosing and renaming the necessary arguments.
Parameters: **kwargs – Must contain the attended, previous step states and glimpses. Can optionally contain the attended mask and the preprocessed attended. glimpses – Current step glimpses. list of TensorVariable
take_glimpses_outputs()[source]
class blocks.bricks.attention.GenericSequenceAttention(**kwargs)[source]
Logic common for sequence attention mechanisms.
compute_weighted_averages
Compute weighted averages of the attended sequence vectors.
Parameters: weights (Variable) – The weights. The shape must be equal to the attended shape without the last dimension. attended (Variable) – The attended. The index in the sequence must be the first dimension. weighted_averages – The weighted averages of the attended elements. The shape is equal to the attended shape with the first dimension dropped. Variable
compute_weights
Compute weights from energies in softmax-like fashion.
Parameters: energies (Variable) – The energies. Must be of the same shape as the mask. attended_mask (Variable) – The mask for the attended. The index in the sequence must be the first dimension. weights – Non-negative weights of the same shape as energies, summing to 1. Variable
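The masked, softmax-like normalization can be sketched in plain Python (a toy with scalar energies per position, not the Theano implementation):

```python
import math

# Masked softmax-like weight computation: masked (fake) positions get zero
# weight, and the remaining weights renormalize to sum to one. A real
# implementation would subtract the maximum energy first for numerical
# stability.

def compute_weights(energies, mask):
    exps = [math.exp(e) if m else 0.0 for e, m in zip(energies, mask)]
    total = sum(exps)
    return [x / total for x in exps]

weights = compute_weights([1.0, 2.0, 3.0], mask=[1, 1, 0])
```

The masked position contributes nothing to the normalization, so padding never receives attention.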
class blocks.bricks.attention.SequenceContentAttention(**kwargs)[source]
Bases: blocks.bricks.attention.GenericSequenceAttention, blocks.bricks.interfaces.Initializable
Attention mechanism that looks for relevant content in a sequence.
This is the attention mechanism used in [BCB]. The idea in a nutshell:
1. The states and the sequence are transformed independently,
2. The transformed states are summed with every transformed sequence element to obtain match vectors,
3. A match vector is transformed into a single number interpreted as energy,
4. Energies are normalized in a softmax-like fashion. The resulting weights, which sum to one, are called attention weights,
5. Weighted average of the sequence elements with attention weights is computed.
In terms of the AbstractAttention documentation, the sequence is the attended. The weighted averages from 5 and the attention weights from 4 form the set of glimpses produced by this attention mechanism.
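The five steps can be sketched with scalars in place of vectors, so that shape bookkeeping does not obscure the flow (all transformations below are toy stand-ins for the learned Linear bricks, not the actual implementation):

```python
import math

# A scalar sketch of the SequenceContentAttention pipeline.

def attention_step(state, sequence):
    transformed_state = 0.5 * state                    # step 1 (state side)
    transformed_seq = [2.0 * x for x in sequence]      # step 1 (sequence side)
    match = [transformed_state + x for x in transformed_seq]  # step 2
    energies = [math.tanh(m) for m in match]           # step 3
    exps = [math.exp(e) for e in energies]             # step 4: softmax
    weights = [e / sum(exps) for e in exps]
    weighted_average = sum(w * x for w, x in zip(weights, sequence))  # step 5
    return weighted_average, weights

weighted_average, weights = attention_step(state=1.0, sequence=[0.0, 1.0, 2.0])
```

The weighted average and the weights returned here correspond to the two glimpses produced by this mechanism.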
Parameters: state_names (list of str) – The names of the network states. attended_dim (int) – The dimension of the sequence elements. match_dim (int) – The dimension of the match vector. state_transformer (Brick) – A prototype for the state transformations. If None, a linear transformation is used. attended_transformer (Feedforward) – The transformation to be applied to the sequence. If None, an affine transformation is used. energy_computer (Feedforward) – Computes the energy from the match vector. If None, an affine transformation preceded by $$tanh$$ is used.
Notes
See Initializable for initialization parameters.
[BCB] (1, 2) Dzmitry Bahdanau, Kyunghyun Cho and Yoshua Bengio. Neural Machine Translation by Jointly Learning to Align and Translate.
compute_energies
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
initial_glimpses
preprocess
Preprocess the sequence for computing attention weights.
Parameters: attended (TensorVariable) – The attended sequence, time is the 1-st dimension.
take_glimpses
Compute attention weights and produce glimpses.
Parameters: attended (TensorVariable) – The sequence, time is the first dimension. preprocessed_attended (TensorVariable) – The preprocessed sequence. If None, it is computed by calling preprocess(). attended_mask (TensorVariable) – A 0/1 mask specifying available data. 0 means that the corresponding sequence element is fake. **states – The states of the network. weighted_averages (Variable) – Linear combinations of sequence elements with the attention weights. weights (Variable) – The attention weights. The first dimension is batch, the second is time.
take_glimpses_inputs()[source]
class blocks.bricks.attention.ShallowEnergyComputer(**kwargs)[source]
Bases: blocks.bricks.sequences.Sequence, blocks.bricks.interfaces.Initializable, blocks.bricks.interfaces.Feedforward
A simple energy computer: first tanh, then weighted sum.
Parameters: use_bias (bool, optional) – Whether a bias should be added to the energies. Does not change anything if softmax normalization is used to produce the attention weights, but might be useful when e.g. spherical softmax is used.
input_dim
output_dim
## Sequence generators¶
Recurrent networks are often used to generate/model sequences. Examples include language modelling, machine translation, handwriting synthesis, etc. A typical pattern in this context is that sequence elements are generated one after another, and every generated element is fed back into the recurrent network state. Sometimes an attention mechanism is also used to condition sequence generation on some structured input, like another sequence or an image.
This module provides SequenceGenerator that builds a sequence generating network from three main components:
• a core recurrent transition, e.g. LSTM or GatedRecurrent
• a readout component that can produce sequence elements using the network state and the information from the attention mechanism
• an attention mechanism (see attention for more information)
Implementation-wise, SequenceGenerator fully relies on BaseSequenceGenerator. At the level of the latter, attention is mandatory; moreover, it must be a part of the recurrent transition (see AttentionRecurrent). To simulate optional attention, SequenceGenerator wraps the pure recurrent network in FakeAttentionRecurrent.
class blocks.bricks.sequence_generators.AbstractEmitter(name=None, children=None)[source]
The interface for the emitter component of a readout.
readout_dim
int – The dimension of the readout. Is given by the Readout brick when allocation configuration is pushed.
Notes
An important detail about the emitter cost is that it will be evaluated with inputs of different dimensions, so it has to be flexible enough to handle this. The two ways in which it can be applied are:
1. In BaseSequenceGenerator.cost_matrix(), where it will be applied to the whole sequence at once.
2. In BaseSequenceGenerator.generate(), where it will be applied to only one step of the sequence.
cost(readouts, outputs)[source]
Implements the respective method of Readout.
emit(readouts)[source]
Implements the respective method of Readout.
initial_outputs(batch_size)[source]
Implements the respective method of Readout.
class blocks.bricks.sequence_generators.AbstractFeedback(name=None, children=None)[source]
The interface for the feedback component of a readout.
feedback(outputs)[source]
Implements the respective method of Readout.
class blocks.bricks.sequence_generators.AbstractReadout(**kwargs)[source]
Bases: blocks.bricks.interfaces.Initializable
The interface for the readout component of a sequence generator.
The readout component of a sequence generator is a bridge between the core recurrent network and the output sequence.
Parameters: source_names (list) – A list of the source names (outputs) that are needed for the readout part e.g. ['states'] or ['states', 'weighted_averages'] or ['states', 'feedback']. readout_dim (int) – The dimension of the readout.
source_names
list
readout_dim
int
BaseSequenceGenerator
see how exactly a readout is used
Readout
cost(readouts, outputs)[source]
Compute generation cost of outputs given readouts.
Parameters: readouts (Variable) – Readouts produced by the readout() method, of shape (…, readout_dim). outputs (Variable) – Outputs whose cost should be computed. Should have as many dimensions as readouts, or one fewer. If readouts has n dimensions, the first n - 1 dimensions of outputs should match those of readouts.
emit(readouts)[source]
Parameters: readouts (Variable) – Readouts produced by the readout() method, of shape (batch_size, readout_dim).
feedback(outputs)[source]
Feeds outputs back to be used as inputs of the transition.
initial_outputs(batch_size)[source]
Compute initial outputs for the generator’s first step.
In the notation from the BaseSequenceGenerator documentation this method should compute $$y_0$$.
readout(**kwargs)[source]
Compute the readout vector from states, glimpses, etc.
Parameters: **kwargs (dict) – Contains sequence generator states, glimpses, contexts and feedback from the previous outputs.
class blocks.bricks.sequence_generators.BaseSequenceGenerator(**kwargs)[source]
Bases: blocks.bricks.interfaces.Initializable
A generic sequence generator.
This class combines two components, a readout network and an attention-equipped recurrent transition, into a context-dependent sequence generator. A third component, a fork, must also be given; it forks the feedback from the readout network to obtain inputs for the transition.
The class provides two methods: generate() and cost(). The former is to actually generate sequences and the latter is to compute the cost of generating given sequences.
The generation algorithm description follows.
Definitions and notation:
• States $$s_i$$ of the generator are the states of the transition as specified in transition.state_names.
• Contexts of the generator are the contexts of the transition as specified in transition.context_names.
• Glimpses $$g_i$$ are intermediate entities computed at every generation step from states, contexts and the previous step glimpses. They are computed in the transition’s apply method when not given or by explicitly calling the transition’s take_glimpses method. The set of glimpses considered is specified in transition.glimpse_names.
• Outputs $$y_i$$ are produced at every step and form the output sequence. A generation cost $$c_i$$ is assigned to each output.
Algorithm:
1. Initialization.
$\begin{split}y_0 = readout.initial\_outputs(contexts)\\ s_0, g_0 = transition.initial\_states(contexts)\\ i = 1\\\end{split}$
By default all recurrent bricks from recurrent have trainable initial states initialized with zeros. Subclass them or BaseRecurrent directly to get custom initial states.
2. New glimpses are computed:
$g_i = transition.take\_glimpses( s_{i-1}, g_{i-1}, contexts)$
3. A new output is generated by the readout and its cost is computed:
$\begin{split}f_{i-1} = readout.feedback(y_{i-1}) \\ r_i = readout.readout(f_{i-1}, s_{i-1}, g_i, contexts) \\ y_i = readout.emit(r_i) \\ c_i = readout.cost(r_i, y_i)\end{split}$
Note that the new glimpses and the old states are used at this step. The reason for not merging all readout methods into one is to make an efficient implementation of cost() possible.
4. New states are computed and iteration is done:
$\begin{split}f_i = readout.feedback(y_i) \\ s_i = transition.compute\_states(s_{i-1}, g_i, fork.apply(f_i), contexts) \\ i = i + 1\end{split}$
5. Back to step 2 if the desired sequence length has not been yet reached.
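The algorithm can be sketched as a plain-Python loop with toy arithmetic stand-ins for the readout and transition methods (all the lambdas below are hypothetical, not the Blocks API; the point is the control flow, in particular that the readout sees the old states but the new glimpses):

```python
# Toy stand-ins for the brick methods named in the algorithm.
initial_outputs = lambda ctx: 0.0                # readout.initial_outputs
initial_states = lambda ctx: (0.0, 0.0)          # transition.initial_states
take_glimpses = lambda s, g, ctx: s + ctx        # transition.take_glimpses
feedback = lambda y: y                           # readout.feedback
readout = lambda f, s, g, ctx: f + s + g         # readout.readout
emit = lambda r: r + 1.0                         # readout.emit
cost = lambda r, y: abs(y - r)                   # readout.cost
compute_states = lambda s, g, f, ctx: s + g + f  # transition.compute_states

def generate(contexts, n_steps):
    y = initial_outputs(contexts)                 # step 1: initialization
    s, g = initial_states(contexts)
    outputs, costs = [], []
    for _ in range(n_steps):
        g = take_glimpses(s, g, contexts)         # step 2: new glimpses
        r = readout(feedback(y), s, g, contexts)  # step 3: old states,
        y = emit(r)                               #         new glimpses
        costs.append(cost(r, y))
        outputs.append(y)
        s = compute_states(s, g, feedback(y), contexts)  # step 4
    return outputs, costs                         # step 5: loop until done

ys, cs = generate(contexts=1.0, n_steps=2)
```

Keeping take_glimpses, readout and compute_states as separate calls, as here, is what makes the efficient cost() implementation mentioned above possible.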
Parameters: readout (instance of AbstractReadout) – The readout component of the sequence generator. transition (instance of AbstractAttentionRecurrent) – The transition component of the sequence generator. fork (Brick) – The brick to compute the transition’s inputs from the feedback.
Initializable
for initialization parameters
SequenceGenerator
a more user-friendly interface to this brick
cost
Returns the average cost over the minibatch.
The cost is computed by averaging the sum of per-token costs of each sequence over the minibatch.
Warning
Note that the computed cost can be problematic when batches consist of vastly different sequence lengths.
Parameters: outputs (TensorVariable) – The 3(2)-dimensional tensor containing output sequences. Axis 0 must stand for time, axis 1 for the position in the batch. mask (TensorVariable) – The binary matrix identifying fake outputs. cost – Theano variable for the cost, computed by summing over timesteps and then averaging over the minibatch. Variable
Notes
The contexts are expected as keyword arguments.
Adds average cost per sequence element AUXILIARY variable to the computational graph with name per_sequence_element.
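The reduction described above can be sketched in plain Python (a toy with scalar per-token costs; axis 0 is time and axis 1 is the batch, as in the real cost):

```python
# Per-token costs are summed over time for each sequence (masked positions
# contribute nothing), then the per-sequence sums are averaged over the
# batch. This is also where the warning above comes from: long sequences
# contribute larger sums than short ones.

def sequence_cost(per_token_costs, mask):
    n_batch = len(per_token_costs[0])
    per_sequence = [
        sum(per_token_costs[t][b] * mask[t][b] for t in range(len(mask)))
        for b in range(n_batch)
    ]
    return sum(per_sequence) / n_batch

# Two sequences of three steps; the second is padded (masked) at the last step.
costs = [[1.0, 2.0],
         [1.0, 3.0],
         [1.0, 0.7]]
mask = [[1, 1],
        [1, 1],
        [1, 0]]
avg = sequence_cost(costs, mask)  # (3.0 + 5.0) / 2
```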
cost_matrix
Returns generation costs for output sequences.
cost()
Scalar cost.
generate
A sequence generation step.
Parameters: outputs (TensorVariable) – The outputs from the previous step.
Notes
The contexts, previous states and glimpses are expected as keyword arguments.
generate_delegate()[source]
generate_outputs()[source]
generate_states()[source]
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
initial_states
initial_states_outputs()[source]
class blocks.bricks.sequence_generators.FakeAttentionRecurrent(transition, **kwargs)[source]
Bases: blocks.bricks.attention.AbstractAttentionRecurrent, blocks.bricks.interfaces.Initializable
Adds fake attention interface to a transition.
BaseSequenceGenerator requires its transition brick to support AbstractAttentionRecurrent interface, that is to have an embedded attention mechanism. For the cases when no attention is required (e.g. language modeling or encoder-decoder models), FakeAttentionRecurrent is used to wrap a usual recurrent brick. The resulting brick has no glimpses and simply passes all states and contexts to the wrapped one.
Todo
Get rid of this brick and support attention-less transitions in BaseSequenceGenerator.
apply
apply_delegate()[source]
compute_states
compute_states_delegate()[source]
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
initial_states
initial_states_outputs()[source]
take_glimpses
class blocks.bricks.sequence_generators.LookupFeedback(num_outputs=None, feedback_dim=None, **kwargs)[source]
Bases: blocks.bricks.sequence_generators.AbstractFeedback, blocks.bricks.interfaces.Initializable
A feedback brick for the case when readouts are integers.
Stores and retrieves distributed representations of integers.
feedback
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
class blocks.bricks.sequence_generators.Readout(emitter=None, feedback_brick=None, merge=None, merge_prototype=None, post_merge=None, merged_dim=None, **kwargs)[source]
Readout brick with separated emitter and feedback parts.
Readout combines a few bits and pieces into an object that can be used as the readout component in BaseSequenceGenerator. This includes an emitter brick, to which emit(), cost() and initial_outputs() calls are delegated, a feedback brick to which feedback() functionality is delegated, and a pipeline to actually compute readouts from all the sources (see the source_names attribute of AbstractReadout).
The readout computation pipeline is constructed from the merge and post_merge bricks, whose responsibilities are described in the respective docstrings.
Parameters: emitter (an instance of AbstractEmitter) – The emitter component. feedback_brick (an instance of AbstractFeedback) – The feedback component. merge (Brick, optional) – A brick that takes the sources given in source_names as input and combines them into a single output. If given, merge_prototype cannot be given. merge_prototype (FeedForward, optional) – If merge isn't given, the transformation given by merge_prototype is applied to each input before the inputs are summed. By default a Linear transformation without biases is used. If given, merge cannot be given. post_merge (Feedforward, optional) – This transformation is applied to the merged inputs. By default Bias is used. merged_dim (int, optional) – The input dimension of post_merge, i.e. the output dimension of merge (or merge_prototype). If not given, it is assumed to be the same as readout_dim (i.e. post_merge is assumed to not change dimensions). **kwargs (dict) – Passed to the parent's constructor.
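The merge / post_merge pipeline can be sketched with scalars (a toy, not the Blocks implementation; the source names, weights and bias below are invented for illustration):

```python
# Each source gets its own bias-free transformation (merge_prototype), the
# transformed sources are summed (merge), and a post_merge step (a bias by
# default) produces the readouts.

def compute_readout(sources, merge_weights, post_merge_bias):
    merged = sum(w * sources[name] for name, w in merge_weights.items())
    return merged + post_merge_bias

r = compute_readout(
    sources={"states": 2.0, "weighted_averages": 3.0},
    merge_weights={"states": 0.5, "weighted_averages": 1.0},  # per-source maps
    post_merge_bias=0.25,                                     # post_merge
)
# 0.5 * 2.0 + 1.0 * 3.0 + 0.25
```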
BaseSequenceGenerator
see how exactly a readout is used
cost
emit
feedback
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
initial_outputs
readout
class blocks.bricks.sequence_generators.SequenceGenerator(readout, transition, attention=None, add_contexts=True, **kwargs)[source]
A more user-friendly interface for BaseSequenceGenerator.
Parameters: readout (instance of AbstractReadout) – The readout component for the sequence generator. transition (instance of BaseRecurrent) – The recurrent transition to be used in the sequence generator. Will be combined with attention, if that is given. attention (object, optional) – The attention mechanism to be added to transition, an instance of AbstractAttention. add_contexts (bool) – If True, the AttentionRecurrent wrapping the transition will add additional contexts for the attended and its mask. **kwargs (dict) – All keyword arguments are passed to the base class. If the fork keyword argument is not provided, a Fork is created that forks all transition sequential inputs without a "mask" substring in them.
class blocks.bricks.sequence_generators.SoftmaxEmitter(initial_output=0, **kwargs)[source]
Bases: blocks.bricks.sequence_generators.AbstractEmitter, blocks.bricks.interfaces.Initializable, blocks.bricks.interfaces.Random
A softmax emitter for the case of integer outputs.
Interprets readout elements as energies corresponding to their indices.
Parameters: initial_output (int or a scalar Variable) – The initial output.
cost
emit
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
initial_outputs
probs
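The mechanics of such an emitter can be sketched outside of Blocks with plain NumPy (a conceptual illustration only — the function names below are invented, and this is not the brick's actual Theano implementation):

```python
import numpy as np

def softmax_probs(energies):
    """Interpret a readout vector as energies and normalize to probabilities."""
    e = np.asarray(energies, dtype=float)
    e = e - e.max()              # subtract the max for numerical stability
    p = np.exp(e)
    return p / p.sum()

def softmax_emit(energies, rng):
    """Sample an integer output whose index i has energy energies[i]."""
    p = softmax_probs(energies)
    return int(rng.choice(len(p), p=p))
```

The cost of an integer target `y` is then `-log(softmax_probs(energies)[y])`, i.e. a categorical cross-entropy over the readout.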
class blocks.bricks.sequence_generators.TrivialEmitter(**kwargs)[source]
An emitter for the trivial case when readouts are outputs.
Notes
By default cost() always returns zero tensor.
cost
emit
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
initial_outputs
class blocks.bricks.sequence_generators.TrivialFeedback(**kwargs)[source]
A feedback brick for the case when readouts are outputs.
feedback
get_dim(name)[source]
Get dimension of an input/output variable of a brick.
Parameters: name (str) – The name of the variable.
## Cost bricks¶
class blocks.bricks.cost.AbsoluteError(name=None, children=None)[source]
cost_matrix
class blocks.bricks.cost.BinaryCrossEntropy(name=None, children=None)[source]
cost_matrix
class blocks.bricks.cost.CategoricalCrossEntropy(name=None, children=None)[source]
apply
class blocks.bricks.cost.Cost(name=None, children=None)[source]
apply
class blocks.bricks.cost.CostMatrix(name=None, children=None)[source]
Base class for costs which can be calculated element-wise.
Assumes that the data has format (batch, features).
apply
cost_matrix
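The cost_matrix/apply split can be illustrated with a NumPy sketch of a squared-error cost on (batch, features) data (the particular reduction shown — a sum over features followed by a batch mean — is an assumption for illustration, not necessarily the reduction Blocks performs):

```python
import numpy as np

def squared_error_cost_matrix(y_hat, y):
    """Element-wise cost for data in (batch, features) format."""
    return (np.asarray(y_hat, dtype=float) - np.asarray(y, dtype=float)) ** 2

def apply_squared_error(y_hat, y):
    """Collapse the element-wise matrix: one cost per example, then a mean."""
    return squared_error_cost_matrix(y_hat, y).sum(axis=1).mean()
```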
class blocks.bricks.cost.MisclassificationRate(top_k=1)[source]
Calculates the misclassification rate for a mini-batch.
Parameters: top_k (int, optional) – If the ground truth class is within the top_k highest responses for a given example, the model is considered to have predicted correctly. Default: 1.
Notes
Ties for top_k-th place are broken pessimistically, i.e. in the (in practice, rare) case that there is a tie for top_k-th highest output for a given example, it is considered an incorrect prediction.
apply
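A NumPy re-implementation of the statistic, including the pessimistic tie-breaking from the note above, might look like this (illustrative only; the brick itself builds a Theano expression):

```python
import numpy as np

def misclassification_rate(scores, targets, top_k=1):
    """Fraction of examples whose target is not among the top_k responses.

    Ties are broken pessimistically: every output tied with the target's
    score is counted as ranking ahead of it.
    """
    scores = np.asarray(scores, dtype=float)
    errors = 0
    for row, t in zip(scores, targets):
        s = row[t]
        # Worst-case rank of the target: outputs strictly better than it,
        # plus all outputs tied with it (including itself).
        worst_rank = int((row > s).sum() + (row == s).sum())
        if worst_rank > top_k:
            errors += 1
    return errors / len(targets)
```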
class blocks.bricks.cost.SquaredError(name=None, children=None)[source]
cost_matrix
## Wrapper bricks¶
class blocks.bricks.wrappers.BrickWrapper[source]
Bases: object
Base class for wrapper metaclasses.
Sometimes one wants to extend a brick with the capability to handle inputs different from what it was designed to handle. A typical example is inputs with more dimensions than were foreseen at the development stage. One way to proceed in such a situation is to write a decorator that wraps all application methods of the brick class by some additional logic before and after the application call. BrickWrapper serves as a convenient base class for such decorators.
Note that, since directly applying a decorator to a Brick subclass will only take place after __new__() is called, subclasses of BrickWrapper should be applied by setting the decorators attribute of the new brick class, as in the example below:
>>> from blocks.bricks.base import Brick
>>> class WrappedBrick(Brick):
wrap(wrapped, namespace)[source]
Wrap an application of the base brick.
This method should be overridden to write into its namespace argument all required changes.
Parameters: mcs (type) – The metaclass. wrapped (Application) – The application to be wrapped. namespace (dict) – The namespace of the class being created.
class blocks.bricks.wrappers.WithExtraDims[source]
Wraps a brick’s applications to handle inputs with extra dimensions.
A brick can often be reused even when data has more dimensions than in the default setting. An example is a situation when one wants to apply categorical_cross_entropy() to temporal data, that is, when an additional ‘time’ axis is prepended to both its x and y inputs.
This wrapper adds reshapes required to use application methods of a brick with such data by merging the extra dimensions with the first non-extra one. Two key assumptions are made: that all inputs and outputs have the same number of extra dimensions and that these extra dimensions are equal throughout all inputs and outputs.
While this might be inconvenient, the wrapped brick does not try to guess the number of extra dimensions, but demands it as an argument. The considerations of simplicity and reliability motivated this design choice. Upon availability in Blocks of a mechanism to request the expected number of dimensions for an input of a brick, this can be reconsidered.
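The reshape trick the wrapper performs can be sketched on NumPy arrays (a conceptual sketch; the real wrapper rewrites Theano applications, and the function name here is invented):

```python
import numpy as np

def with_extra_dims(apply_fn, extra_ndim):
    """Wrap apply_fn, which expects (batch, features) data, so that it
    accepts extra_ndim additional leading axes by merging them into the
    first non-extra (batch) axis and splitting them back out afterwards."""
    def wrapped(x):
        lead = x.shape[:extra_ndim + 1]                   # extra axes + batch
        flat = x.reshape((-1,) + x.shape[extra_ndim + 1:])
        out = apply_fn(flat)
        return out.reshape(lead + out.shape[1:])
    return wrapped
```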
wrap(wrapped, namespace)[source]
Wrap an application of the base brick.
This method should be overridden to write into its namespace argument all required changes.
Parameters: mcs (type) – The metaclass. wrapped (Application) – The application to be wrapped. namespace (dict) – The namespace of the class being created. | 2019-04-23 19:00:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35140496492385864, "perplexity": 3591.373790249383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578610036.72/warc/CC-MAIN-20190423174820-20190423200820-00119.warc.gz"} |
http://paperity.org/p/77587699/an-asip-model-with-general-gate-opening-intervals | # An ASIP model with general gate opening intervals
Queueing Systems, Jul 2016
We consider an asymmetric inclusion process, which can also be viewed as a model of n queues in series. Each queue has a gate behind it, which can be seen as a server. When a gate opens, all customers in the corresponding queue instantaneously move to the next queue and form a cluster with the customers there. When the nth gate opens, all customers in the nth site leave the system. For the case where the gate openings are determined by a Markov renewal process, and for a quite general arrival process of customers at the various queues during intervals between successive gate openings, we obtain the following results: (i) steady-state distribution of the total number of customers in the first k queues, $$k=1,\dots ,n$$; (ii) steady-state joint queue length distributions for the two-queue case. In addition to the case that the numbers of arrivals in successive gate opening intervals are independent, we also obtain explicit results for a two-queue model with renewal arrivals.
This is a preview of a remote PDF: https://link.springer.com/content/pdf/10.1007%2Fs11134-016-9492-z.pdf
Onno Boxma, Offer Kella, Uri Yechiali. An ASIP model with general gate opening intervals, Queueing Systems, 2016, 1-20, DOI: 10.1007/s11134-016-9492-z | 2019-01-18 21:59:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5641860961914062, "perplexity": 1347.0044299893875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660818.25/warc/CC-MAIN-20190118213433-20190118235433-00579.warc.gz"} |
https://www.ricam.oeaw.ac.at/events/conferences/aip2009/poster/talk.php?id=596 | Poster Presentation
Daniel Lesnic: Non-local methods for some inverse problems
Mon, 20 July, 2009, 17:15-18:15, Foyer
The ill-posed parabolic equation backward in time
$$u_t+ Au=0, \; 0 < t < T,\\ \|u(T)-f\| \leq \epsilon$$ with the positive self-adjoint unbounded
operator $A$ and $\epsilon > 0$ being given is regularized by the
well-posed non-local boundary value problem
$$v_{\alpha t}+ Av_\alpha=0, \quad 0<t<aT,\\ \alpha v_\alpha(0)+v_\alpha(aT)=f$$
with $a \geq 1$ being given and
$\alpha> 0$, the regularization parameter.
Similarly, the ill-posed Cauchy problem for elliptic equations
$$u_{tt}- Au=0, \; 0 < t < T,\\ \|u(0)-\varphi\| \leq \epsilon,\\ u_{t}(0)=0$$
is regularized by the well-posed non-local boundary
value problem
$$u_{tt}- Au=0, \; 0 < t < aT,\\ u(0)+\alpha u(aT)=\varphi,\\ u_t(0)=0.$$
A priori and a posteriori parameter choice rules are
suggested which yield order-optimal regularization methods.
Numerical results based on the boundary element method are presented
and discussed to confirm the theory.
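When $A$ has a discrete spectrum the non-local problem decouples across eigenmodes, which gives a one-line numerical sketch (the notation and the spectral discretization here are illustrative assumptions, not taken from the poster):

```python
import numpy as np

def nonlocal_regularized_solution(f_coeffs, eigvals, alpha, T, a=1.0, t=0.0):
    """Solve v_t + A v = 0 with alpha*v(0) + v(aT) = f, mode by mode.

    Each eigenmode evolves as c_k * exp(-lam_k * t); the non-local
    condition fixes c_k = f_k / (alpha + exp(-lam_k * a * T)).
    """
    lam = np.asarray(eigvals, dtype=float)
    f = np.asarray(f_coeffs, dtype=float)
    return f * np.exp(-lam * t) / (alpha + np.exp(-lam * a * T))
```

With alpha = 0 and a = 1 this reduces to the (ill-posed) exact backward solution f_k * exp(lam_k * (T - t)), while alpha > 0 caps the amplification of high-frequency modes at f_k / alpha, which is the stabilizing effect of the non-local condition.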
This is joint work with Dinh Nho Hao (Hanoi Institute of
Mathematics, Vietnam) and Nguyen Van Duc (Vinh University, Vietnam).
URL: www.ricam.oeaw.ac.at/events/conferences/aip2009/poster/talk.php | 2021-10-23 02:01:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.452060341835022, "perplexity": 11003.590604725146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585537.28/warc/CC-MAIN-20211023002852-20211023032852-00384.warc.gz"} |
https://pandas.pydata.org/pandas-docs/version/1.2.1/reference/api/pandas.core.window.expanding.Expanding.median.html | # pandas.core.window.expanding.Expanding.median¶
Expanding.median(**kwargs)[source]
Calculate the expanding median.
Parameters
**kwargs
For compatibility with other expanding methods. Has no effect on the computed median.
Returns
Series or DataFrame
Returned type is the same as the original object.
See also

pandas.Series.expanding
Calling object with Series data.
pandas.DataFrame.expanding
Calling object with DataFrames.
pandas.Series.median
Equivalent method for Series.
pandas.DataFrame.median
Equivalent method for DataFrame.
Examples
Compute the rolling median of a series with a window size of 3.
>>> s = pd.Series([0, 1, 2, 3, 4])
>>> s.rolling(3).median()
0 NaN
1 NaN
2 1.0
3 2.0
4 3.0
dtype: float64 | 2022-11-28 12:08:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22389070689678192, "perplexity": 7543.011153141324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710503.24/warc/CC-MAIN-20221128102824-20221128132824-00311.warc.gz"} |
https://faq.gutenberg.eu.org/3_composition/tableaux/colonnes/faire_varier_la_largeur_de_colonnes?rev=1528032806 | Ceci est une ancienne révision du document !
---
title: Variable-width columns in tables
category: floats
tags: tables figures
permalink: /FAQ-varwidcol
date: 2014-06-10
---
This is a slightly different take on the problem addressed in “[fixed-width tables](FAQ-fixwidtab)” — here we have a column whose size we can't absolutely predict when we design the document.
While the basic techniques (the [tabularx](https://ctan.org/pkg/tabularx), [tabulary](https://ctan.org/pkg/tabulary) and [ltxtable](https://ctan.org/pkg/ltxtable) packages) are the same for this problem as for the fixed-width _table_ problem, there's one extra tool that we can call to our aid, which may be preferable in some situations.
Suppose we have data in one column which we read from an external source, and the source itself isn't entirely predictable. The data in the column may end up pretty narrow in every row of the table, or it may be wide enough that the table would run over the edge of the page; however, we don't want to make the column as wide as possible “just in case”, by defining a fixed size for the table. We would like the column to be as small as possible, but have the possibility to spread to a maximum width and (if even that width is exceeded) turn into a p-style column.
The [varwidth](https://ctan.org/pkg/varwidth) package, discussed in “[automatic sizing of minipages](FAQ-varwidth)”, provides a solution. If you load it together with the LaTeX “required” [array](https://ctan.org/pkg/array) package, i.e.:

```latex
\usepackage{array}
\usepackage{varwidth}
```

[varwidth](https://ctan.org/pkg/varwidth) defines a new column-type V, which you can use as follows:

```latex
\begin{tabular}{l V{3.5cm} r}
  foo & blah      & bar \\
  foo & blah blah & bar \\
\end{tabular}
```

when the second column ends up less than 3.5cm wide; or you can use it as follows:

```latex
\begin{tabular}{l V{3.5cm} r}
  foo & blah      & bar \\
  foo & blah blah & bar \\
  foo & blah blah blah blah blah blah
      & bar \\
\end{tabular}
```

where the second column will end up noticeably wider, and will wrap to a second line in the third row.
3_composition/tableaux/colonnes/faire_varier_la_largeur_de_colonnes.1528032806.txt.gz · Dernière modification: 2018/06/03 15:33 par samcarter | 2022-05-24 00:03:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9632251858711243, "perplexity": 2364.042778561319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562106.58/warc/CC-MAIN-20220523224456-20220524014456-00506.warc.gz"} |
https://www.physicsforums.com/threads/help-with-octave-for-system-of-odes.657970/ | # Help with Octave for system of ODEs
1. Dec 10, 2012
### McLaren Rulez
Hi,
I am having a lot of trouble with Octave as I try to solve a system of ODEs. Any help is appreciated, I am a complete newbie with Octave and numerical solving.
Let's try a very simple one. Suppose I had a pair of ODEs with a and b being functions of time
$$\frac{da}{dt}=2ba$$
$$\frac{db}{dt}=1$$
Initial conditions are a(0)=1, b(0)=0
This is clearly the solved by $$a(t)=e^{t^{2}}$$ $$b(t)=t$$ My Octave code was this:
function xdot=f(x,t);
xdot=zeros(2,1)
xdot(1)=2*x(1)*x(2)
xdot(2)=1
endfunction
t=linspace(0,10,100);
x=lsode("f",[1;0],t)
I want to plot a(t) against t or b(t) or some combination of a and b against t. Here are my issues
1) The t=linspace() part. What numbers are appropriate? Sometimes, I got an error saying convergence failure but this combinations worked through blind luck. In general, what should I choose and why does it seem to matter? As I understand, this tells Octave to take t from 0 to 10 and have 100 intervals. I thought any numbers there would have worked?
2) This is more important. I tried plot(t,x(1)) but I got a blank plot. plot(t,x(2)) also gave me a blank plot. plot(t,x) gave me something but it's really weird. Isn't x now a column vector? I'm not sure what exactly lsode outputs here. What should be the correct command to get a(t) against t, which must of course be an exponential t squared against t graph?
There's also the fact that when I do it for my actual set of ODEs which are slightly more complicated, it inevitably hits an error or gets something 'x' undefined at a certain column and certain line. I'm quite lost :(
Thank you for you help.
Last edited: Dec 10, 2012
2. Dec 10, 2012
### gsal
Well...don't know ODEs nor Octave, but a quick look at python's scipy revealed a similar function.
Code (Text):
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def dxdt(x,t):
xdot = np.zeros(2)
xdot[0] = 2.0*x[0]*x[1]
xdot[1] = 1.0
return xdot
t = np.linspace(0,5,51)
x = odeint(dxdt, [1.0,0.0], t)
fig = plt.figure()
ax1 = fig.add_subplot(211)  # top axes for a(t)
ax2 = fig.add_subplot(212)  # bottom axes for b(t)
ax1.plot(t,x[:,0])
ax1.set_ylabel('a(t)')
ax2.plot(t,x[:,1])
ax2.set_xlabel('t')
ax2.set_ylabel('b(t)')
plt.show()
See attached plot, too.
108 | 2018-04-26 15:47:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7766660451889038, "perplexity": 1372.3072560651422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948285.62/warc/CC-MAIN-20180426144615-20180426164615-00280.warc.gz"} |
https://cs.stackexchange.com/questions/104522/finding-the-cheapest-buy-order-with-fixed-inflation-for-each-product | Finding the cheapest buy order with fixed inflation for each product
Let's say we have a set of products $$M$$, a total of $$|M|=n$$ that we want to buy. However, we can only buy one product at a time, so that we need a total of $$n$$ time-units to buy all items.
Each product $$p\in M$$ has a base price $$b_p$$, as well as an inflation rate $$r_p$$.
At time unit $$t$$, product $$p$$ therefore has the price $$b_p \cdot {r_p}^t$$.
We can assume $$b_p\in \mathbb{R}$$, $$r_p>1$$.
I'm looking for an efficient (i.e. polytime) algorithm that returns the optimal buying order which minimizes the total price.
There's an easier variant of the problem (all $$b_p=1$$) which can be solved using the greedy algorithm
"Always buy the one product with the highest inflation".
Given that the base prices can be vastly different, this algorithm can't be directly transferred, but knowing that an easier version of the problem had a linear-time solution, this one probably still isn't $$\mathsf{NP}$$-complete.
If we view every product as a function of time $$p_i(t) = b_p \cdot {r_p}^t$$, the greedy algorithm above tells us, that if for $$t_0$$ holds $$p_i(t_0) = p_j(t_0)$$, then from the point once $$t_0$$ has passed we should always pick of $$p_i, p_j$$ the one with the higher inflation.
I'd be open for both hints and solutions.
• There must be a polynomial algorithm. It is just a matter of time for us to find it. – Apass.Jack Feb 20 at 15:55
The problem in the question
The price of product $$p$$ at time $$t$$ is $$b_p{r_p}^t$$, where $$p$$ and $$t$$ are integers between 1 and $$n$$. We want to find a permutation $$f$$ of $$1,2,\cdots,n$$ such that $$\Sigma_{p=1}^nb_pr_p^{f(p)}$$ is minimum.
A more general problem and polynomial-time algorithms
The price of product $$p$$ at time $$t$$ is $$c(p,t)$$, where $$p$$ and $$t$$ are integers between 1 and $$n$$. We want to find a permutation $$f$$ of $$1,2,\cdots,n$$ such that $$\Sigma_{p=1}^nc(p,f(p))$$ is minimum.
The problem above is none other than the famous assignment problem. There are various polynomial-time algorithms to solve it. For example, this version of Hungarian algorithm runs in $$O(n^4)$$ time.
For the problem in the question, it can also be solved in polynomial time since we can compute all $$b_pr_p^t$$ for $$1\le p, t\le n$$ in $$O(n^2)$$ time.
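The reduction can be sketched end-to-end in Python. For brevity the assignment is solved exactly with an exponential subset dynamic program rather than a polynomial Hungarian implementation; in practice one would feed the same cost matrix to a library routine such as SciPy's linear_sum_assignment:

```python
from functools import lru_cache

def cheapest_buy_order(b, r):
    """Return (order, total): order[t] is the product bought at time t+1."""
    n = len(b)
    # cost[p][t] = b_p * r_p**(t+1): price of product p bought in slot t+1
    cost = [[b[p] * r[p] ** (t + 1) for t in range(n)] for p in range(n)]

    @lru_cache(maxsize=None)
    def best(bought):
        # `bought` is a bitmask of the products already scheduled in slots 1..t
        t = bin(bought).count("1")
        if t == n:
            return (0.0, ())
        choices = []
        for p in range(n):
            if not bought & (1 << p):
                rest, tail = best(bought | (1 << p))
                choices.append((cost[p][t] + rest, (p,) + tail))
        return min(choices)

    total, order = best(0)
    return list(order), total
```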
• Can we do better than $O(n^3)$? There is probably an algorithm in $O(n^2)$. There might be an algorithm in $O(n\log n)$. – Apass.Jack Feb 22 at 14:41
• This was interesting from start to finish! The decomposition of the problem into the separate suproblems "calculating the prices" and "solving the allocation problem", which allows the linearization of the problem as the cost function is replaced by a look-up. Then the problem description via minimization over a symmetric group of permutations, which looks like a total dead end, having such an easy and obvious description as a linear program, and even having a further raffination! – Sudix Feb 22 at 18:50
Some thoughts, not mathematically proven. To solve the problem we need to decide the priority with which to buy each product:

1. if $$p_i(t) = p_j(t)$$ has a crossing point at $$t_{ij} > 0$$, we have two cases:
   1. when $$t < t_{ij}$$ and $$p_i(t) > p_j(t)$$, then we want to buy $$p_j$$ sooner than $$p_i$$
   2. the opposite case of 1.
   This partially defines the purchase order for the pairs that have crossing points at $$t > 0$$.
2. to decide the purchase order of those that don't have a crossing point at $$t > 0$$, we can rely on their initial prices $$p_i(0)$$: e.g. if $$p_i(0) > p_j(0)$$, we want to buy $$p_i$$ before $$p_j$$.
3. the above two steps define a partial order on all the products; to finalize an optimal purchase order, we can enumerate all the topological-sort orders of the graph (nodes are products, partial-order relations as edges) and pick the best one.
time complexity:
solving 1. and 2. takes $$O(n \log n)$$ (solving each exponential equation takes $$O(\log n)$$ using binary search)
solving 3. takes $$O(V+E)$$ per topological order, where $$V=n$$ and $$E \le n^2$$
• I am afraid I cannot understand this answer. What does it mean by "$p_i(t) = p_j(t)$ has crossing point at $t_{ij}$"? – Apass.Jack Feb 20 at 15:55
• How could step 3 takes $O(V+E)$ time? It looks like more than polynomial time. – Apass.Jack Feb 20 at 19:58
• I'm not quite figuring out 1. - if $p_i(t)>p_j(t)$, why would we want to buy $p_j$ first? I came to the inverted conclusion (and then found trouble, as the crossing points can be arbitraily close to the integers). As for 2.: Two products have no crossing point exactly iff their inflation rates are identical – Sudix Feb 21 at 6:03 | 2019-06-26 20:43:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 56, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6376596689224243, "perplexity": 362.0534084121095}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000545.97/warc/CC-MAIN-20190626194744-20190626220744-00108.warc.gz"} |
http://math.stackexchange.com/questions/250325/space-of-bounded-functions-is-reflexive-if-the-domain-is-finite | # Space of bounded functions is reflexive if the domain is finite
Let $C_b(X)$ be a space of bounded continuous functions on a locally compact space $X$ equipped with the supremum norm. How to show that $C_b(X)$ is reflexive if and only if $X$ is finite?
-
Do you know what the dual of $C_b(X)$ is? – Christopher A. Wong Dec 4 '12 at 2:18
This can be reduced to the case where $X$ is compact. $C_b(X)\cong C(\beta X)$. – Jonas Meyer Dec 4 '12 at 5:31
Why do you accept an incomplete answer when you received three full answers? – Martin Dec 19 '12 at 21:19
This is only a partial answer. I assume that $X$ has finitely many connected components.
If $X$ is finite then $C_b(X)$ is finite dimensional and the result follows. If $X$ is infinite then $C_b(X)$ and $C_b(X)^{**}$ are infinite dimensional. Assume $C_b(X)$ is reflexive. Since $C_b(X)^{**}$ is a dual space, by the Banach–Alaoglu theorem the unit ball of $C_b(X)^{**}$ is weak-$*$ compact. Hence by the Krein–Milman theorem this unit ball is the closed convex hull of its extreme points. Since $C_b(X)$ is reflexive, $C_b(X)^{**}$ is isometrically isomorphic to $C_b(X)$. As a consequence, the extreme points of the unit ball of $C_b(X)^{**}$ are in one-to-one correspondence with the extreme points of the unit ball of $C_b(X)$. One can check that there are only finitely many extreme points in the unit ball of $C_b(X)$. They are of the form $\sum\limits_{i=1}^n\alpha_i{1_{S_i}}$ where $\{S_i\}_{i=1}^n$ are the connected components of $X$ and $|\alpha_i|=1$ for all $i$. Thus the unit ball of $C_b(X)^{**}$ is the closed convex hull of finitely many points! This implies that $C_b(X)^{**}$ is finite dimensional. Contradiction; hence $C_b(X)$ is not reflexive.
-
Isn't any $f$ with $|f|\equiv 1$ an extreme point in $C_b(X)$? – Jonas Meyer Dec 4 '12 at 5:07
@JonasMeyer You are right. I thought of $C([0,1])$ when was writing this answer. – Norbert Dec 4 '12 at 5:11
Still, in $C[0,1]$, $f$ is an extreme point in the unit ball if and only if $|f(x)|=1$ for all $x\in[0,1]$. – Jonas Meyer Dec 4 '12 at 5:13
@JonasMeyer I consider real case. Well to answer this question one may apply a sledgehammer from this answer – Norbert Dec 4 '12 at 5:13
Oh yeah, I should have realized that. :) That could handle many cases, then. Thanks. – Jonas Meyer Dec 4 '12 at 5:15
It is clear that for finite $X$ the space $C_b(X)$ is reflexive because it is finite-dimensional.
Assume $X$ is infinite. Choose a sequence of distinct points $(x_n) \subset X$. The functionals $\delta_n (f) = f(x_n)$ yield an isometric embedding $(a_n) \mapsto \sum a_n \delta_n$ of $\ell^1$ into $C_b(X)^\ast$. This gives a closed non-reflexive subspace of $C_b(X)^\ast$, so $C_b(X)^\ast$ is not reflexive and hence $C_b(X)$ is not reflexive either.
The argument I gave is basically an easier (dual) version of Danny Leung's answer. The fact that $c_0$ embeds isometrically into $C_b(X)$ is a simple application of Urysohn's lemma: choose a sequence of pairwise distinct points in $X$ (which is possible since $X$ is infinite). Use Urysohn's lemma to construct, by induction, a sequence of functions $f_n\colon X \to [0,1]$ with pairwise disjoint supports such that $f_n(x_m) = \delta_{mn}$. To embed $c_0$ isometrically into $C_b(X)$ send a sequence $(a_n) \in c_0$ to the continuous function $x \mapsto \sum a_n f_n(x)$.
-
I didn't see the question about the dual space of $C_b(X)$ in the comments. It has no better description than the space of Radon measures on $\beta X$ via the identification of $C_b(X) = C(\beta X)$ (Stone–Čech compactification + Riesz representation theorem). – Martin Dec 4 '12 at 18:09
I assume here that $X$ is Hausdorff.
Let $M$ denote the set of all finite, signed, countably additive Borel measures on $X$; we know that $M$ is a closed subspace of $C_b(X)^*$.
Suppose first that $X$ is not discrete; then there exists a bounded, measurable, discontinuous $f : X \to \mathbb{R}$. (For example, if $U$ is open and not closed, set $f = 1_U$.) The map $\mu \mapsto \int f\,d\mu$ is a continuous linear functional on $M$, so by Hahn-Banach it has a continuous extension to a $\xi \in C_b(X)^{**}$. But there can be no $g \in C_b(X)$ with $\xi(\ell) = \ell(g)$ for all $\ell \in C_b(X)^*$, by considering that if we take $\ell = \ell_x$ to be a point mass at an arbitrary $x$, this forces $g(x) = \ell_x(g) = \xi(\ell_x) = f(x)$, but $f$ is not continuous. (Here we used the assumption that $X$ was Hausdorff, so that singleton sets are closed, hence Borel, and "point masses" work the way you think they do.)
Next suppose that $X$ is discrete and infinite. Then the compactly (i.e. finitely) supported functions $C_c(X)$ are a non-dense subset of $C_b(X)$ (constant functions are not in their closure), so by Hahn-Banach there is a nonzero linear functional $\ell \in C_b(X)^*$ with $\ell(g) = 0$ for all $g \in C_c(X)$. Clearly $\ell \notin M$, so $M$ is not dense in $C_b(X)^*$, and by Hahn-Banach again there exists a nonzero $\xi \in C_b(X)^{**}$ with $\xi(\mu) = 0$ for $\mu \in M$. On the other hand, if $g \in C_b(X)$ and $\int g\,d\mu = 0$ for all $\mu \in M$, we have $g=0$, so $\xi$ cannot correspond to an element of $C_b(X)$.
-
If $X$ is equipped with discrete topology, then there doesn't exist a discontinuous function. – Guillermo Jun 23 '14 at 18:59
If X is infinite, then $C_b(X)$ contains a copy of $c_0$.
- | 2015-05-24 17:48:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9535354971885681, "perplexity": 139.98832701648595}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928030.83/warc/CC-MAIN-20150521113208-00154-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://socratic.org/questions/how-do-you-simplify-the-square-root-of-45-125 | # How do you simplify the square root of 45/125?
$\sqrt{\frac{45}{125}} = \frac{3}{5}$
If $a \ge 0$ then $\sqrt{{a}^{2}} = a$
$\sqrt{\frac{45}{125}} = \sqrt{\frac{5 \cdot 9}{5 \cdot 25}} = \sqrt{\frac{9}{25}} = \sqrt{{3}^{2} / {5}^{2}} = \sqrt{{\left(\frac{3}{5}\right)}^{2}} = \frac{3}{5}$ | 2019-09-22 00:46:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9419959187507629, "perplexity": 512.4768458031202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574710.66/warc/CC-MAIN-20190921231814-20190922013814-00291.warc.gz"} |
http://fordham.bepress.com/dissertations/AAI9816340/ | Activity levels in attention deficit disorder with and without hyperactivity
Abstract
The reinstatement in DSM-IV of an ADHD subtype without hyperactivity has raised the question of whether the lack of hyperactivity in this subtype can be taken a step further, and be defined as hypoactivity instead. To test whether ADD/WO boys were less active than both ADDH boys and normal controls, and whether ADDH boys were more active than both normal and ADD/WO boys, their activity levels were objectively measured using an accelerometer-based monitor while they were off any medication prescribed for the treatment of ADHD symptoms. Using an instrumented measure of activity also addressed the question of whether the hyperactivity of ADDH boys described in the literature mostly in terms of rating scale data could be confirmed by an objective measurement technique. Boys previously diagnosed with either ADDH or ADD/WO by their respective pediatricians or neurologists, and a control group who had never received a diagnosis for either, were recruited through advertisements in parent newspapers and posted flyers in Chicago suburbs. All participants were administered a CPT, and their parents completed a behavior rating scale to confirm diagnostic symptoms of inattention, impulsivity and hyperactivity. Participants wore the Caltrac™ activity monitor for all waking hours for 3-5 days, and their parents recorded the data readings daily. Individual mean activity levels were compared among the three diagnostic groups. ADD/WO children did not have lower activity levels than normal controls, and ADDH participants did not have higher activity levels than normal controls. Both ADD/WO and ADDH children were normally active. Parental perceptions of motor excess as indicated by rating scale scores were therefore not confirmed by objective measurement.
A possible explanation for this result was that rating scales may not exclusively measure hyperkinesis but include other qualities of behavior that call special attention to those individuals subsequently diagnosed with ADHD. Implications for diagnostic criteria and theoretical models of ADHD are discussed.
Subject Area
Psychology, Clinical
Recommended Citation
Leah Marave Abaya, "Activity levels in attention deficit disorder with and without hyperactivity" (January 1, 1998). ETD Collection for Fordham University. Paper AAI9816340.
http://fordham.bepress.com/dissertations/AAI9816340
COinS | 2015-04-01 17:51:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3341182470321655, "perplexity": 7794.419539246669}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131305143.93/warc/CC-MAIN-20150323172145-00000-ip-10-168-14-71.ec2.internal.warc.gz"} |
http://jmg.bmj.com/content/39/4/292 | # Four novel mutations in the OFD1 (Cxorf5) gene in Finnish patients with oral-facial-digital syndrome 1
A Rakkolainen,1,2 S Ala-Mello,2,3 P Kristo,2 A Orpana,1,2 I Järvelä2

1. Department of Clinical Chemistry, University of Helsinki, Helsinki, Finland
2. Department of Medical Genetics, University of Helsinki and HUCH-Laboratory Diagnostics, Helsinki, Finland
3. Clinical Genetics Unit, HUCH-Laboratory Diagnostics, Helsinki, Finland
Correspondence to:
Dr I Järvelä, HUCH-Laboratory Diagnostics, Laboratory of Molecular Genetics, Haartmanink 2, 00290 Helsinki, Finland;
irma.jarvela{at}hus.fi
Oral-facial-digital syndrome type 1 (OFD1, MIM 311200) was first described by Papillon-Léage and Psaume1 in 1954 and further delineated in 1962 by Gorlin and Psaume,2 who called it orodigitofacial dysostosis. It is a multiple congenital anomaly syndrome characterised by malformations of the face, oral cavity, and hands and feet. The facial dysmorphic features include hypertelorism, frontal bossing, broad nasal bridge, hypoplasia of alar cartilage, and transient milia. Oral cavity malformations include often asymmetrical cleft of the palate (80%), small midline cleft of the upper lip (45%), clefts of the tongue, hamartomatous masses on the ventral surface of the tongue (70%), mucobuccal fibrous bands, and dental abnormalities. Malformations of the fingers are seen in 50-70% and toe malformations in 25%. Central nervous system abnormalities, such as hydrocephalus, porencephaly, and agenesis of the corpus callosum, with mild mental retardation are seen in 40%.3 In recent years, a kidney disease closely resembling adult type polycystic kidney disease has been shown to be one of the distinct features of this syndrome.4,5
At least nine different forms of oral-facial-digital syndromes have been described, type 1 being the most common with a suggested incidence of 1:50 000 live births. OFD1 syndrome has dominant X linked inheritance with lethality in males. However, a case of Klinefelter syndrome (XXY) with OFD1 has been reported.6
By linkage analysis in two kindreds, the locus for OFD1 was mapped to Xp22.3-22.2.7 Recently, the gene for OFD1, Cxorf5, was identified, and mutations of three familial and four sporadic cases were identified by Ferrante et al.8 Expression of the gene was seen in all the tissues affected in the syndrome.
We report here the identification of four novel mutations in the OFD1 gene together with the clinical findings in four Finnish families, of which two are familial and two sporadic.
## PATIENTS AND METHODS
### Patients
The patients were ascertained from the Cleft Centre of the Department of Plastic Surgery, Helsinki University Central Hospital, where all patients with cleft lip and/or palate nationwide are treated. In addition, patients were ascertained from the Department of Medical Genetics of The Family Federation of Finland, which serves the whole country, and the Clinical Genetics Unit of Helsinki University Central Hospital, which serves the densely populated south of Finland in clinical genetics. All the patients were examined (fig 1) and their files and hospital records analysed by one of the authors (SA-M).
Figure 1
The family pedigrees of the Finnish OFD1 families. Black symbols, affected; symbols with slashed lines, anamnestically affected.
### Mutation analysis
DNA extracted from peripheral EDTA blood of the patients was screened for mutations in the OFD1 gene using primer sequences kindly provided by Dr Brunella Franco from Telethon Institute of Genetics and Medicine (TIGEM). PCR amplifications of the samples were run through 35 cycles consisting of 40 seconds at 94°C (denaturation), 40 seconds at 55 or 50°C (annealing), and one minute at 72°C (extension) with the final extension step of 5-10 minutes covering all 23 exons. Sequencing of PCR products was performed using ABI PRISM7 BigDye Terminator Cycle Sequencing Kit, Version 2.0 (Applied Biosystems, Foster City, CA, USA) in both directions and analysed using an ABI PRISM7 3100 Genetic Analyzer according to the manufacturer's instructions. The presence of a mutation was confirmed by minisequencing9 of the DNA in each family member. To exclude the presence of each of the mutations in random subjects, DNA extracted from buffy coat samples of 50 anonymous Finnish blood donors were analysed by minisequencing.
Ethical approval for the study was obtained from the ethical committee of Helsinki University Hospital and the Finnish Red Cross Transfusion Service.
### RNA analysis
RNA was isolated from heparin blood samples of the control and the youngest patient from family I (fig 1) carrying the intronic mutation IVS5-10T>G using the QIAamp RNA Blood Mini kit (Qiagen, Hilden, Germany). This mutation generates a putative novel splice site in exon 6. The mRNA was reverse transcribed to cDNA using 1 μg of total RNA, 10 units of AMV reverse transcriptase (Promega M5101) in the presence of 20 units of recombinant RNase inhibitor (RNasin, Promega, N2511), and 25 nmol dNTPs. The reaction was allowed to take place at 42°C for one hour, after which the cDNA was diluted with 1.7 volumes of DNA-TE buffer (10 mmol/l Tris-HCl, pH 7.8, 1 mmol/l EDTA) and stored at −20°C. cDNA synthesis was primed with the antisense primer 5′-ACTTGTCTGAGTTTCCATATTACAACTC-3′ located in the coding sequence of exon 6 of the OFD1 mRNA. For PCR, two sense primers were designed. The first one, 5′-CATTAAAATCAACCCTACTTCCAGTCTC-3′, located in exon 4, together with the reverse primer used in the reverse transcription, flanked the putative new splice site. The second sense primer, 5′-AGGATCTGATAAAGAAAATCAAAAAGGTTTTTTAGGTTT-3′, was designed to anneal exclusively over the putative novel splice site to give a product only if this putative new splice site was transcribed (fig 2).
Figure 2
Diagram of detection of the transcript showing the abnormal splicing caused by the IVS5-10T>G mutation in exon 6 of the OFD1 gene.
## RESULTS
We found four novel mutations in the OFD1 gene (table 1, fig 3) in two sporadic patients and in two families, both containing three patients with OFD1 syndrome (fig 1). The clinical features of the patients shown in table 2 were characteristic of OFD1 syndrome. In each case a novel mutation in the recently discovered OFD1 gene was identified; two of them were frameshifts, one was a missense mutation, and one was a splice mutation.
Table 1
Mutations in patients with OFD1
Table 2
Clinical features of the patients with OFD1
Figure 3
Sequencing chromatograms showing the four OFD1 mutations in the Finnish patients.
In family I, the syndrome was diagnosed in three successive generations (fig 1). The grandmother's facial features were typical of OFD1. She did not have cleft palate like her daughter and granddaughter. Instead, alveolar notching with missing teeth was seen. No abnormalities of the hands were seen. At the age of 44 years, she had just undergone a kidney transplant because of polycystic kidney disease. The kidney disease had been discovered by chance on routine gynaecological examination one year earlier and dialysis treatment was started almost immediately after that. She was unwilling to participate in genetic DNA studies. The daughter had small hands and feet with brachydactyly of the fifth fingers. The syndactyly of her fourth and fifth fingers of the left hand had been operated on as a child. Renal ultrasonography was performed at the age of 23, when the diagnosis of OFD1 was confirmed. Multiple cysts were seen in the right kidney, but no signs of renal failure were found in the laboratory examinations. The granddaughter, aged 1.5 years, has developed normally. In the extremities, there was only mild clinodactyly of the fifth fingers. The cleft palate was asymmetrical. Alveolar notching, suggesting tooth aplasia, and mucobuccal fibrous bands were seen. No signs of retardation were detected in this family. We found a T>G change in intron 5 of the OFD1 gene in the daughter and the granddaughter. The mutation is located 10 nucleotides before the starting nucleotide of exon 6 (fig 3) where it creates a novel splice acceptor site (and adds three novel amino acids to the 5′ end of exon 6) resulting in an alternative splicing of mRNA. This was confirmed by the RNA studies described in the Methods section (fig 4).
Figure 4
The RT-PCR-products covering exons 4-6 are normal in both control and patient samples (on the left). The intronic nucleotide change IVS5-10T>G results in an abnormally spliced product in the patient sample (RT(+)) compared to the normal sample (RT(+)). RT(−) samples are the control samples with no cDNA.
In family II (fig 1), the mother and her two daughters were clinically examined and their facial features and other signs were typical of OFD1 syndrome (table 2). All three patients studied had midline pseudocleft of the upper lip, but no operations had been performed. The tongues of the mother and the older daughter were bilobulated and the younger daughter had multiple lobules in her tongue. No-one in this family had had problems with kidney function and no ultrasonographic examinations of the kidneys were performed. At the age of 42 years, the mother was diagnosed with hyperthyreosis, which was treated with radioactive iodine. The younger daughter had been operated on at the age of 1 year because of a medially located, supernumerary distal phalanx in the right hallux. The left leg grew 3 cm longer than the right leg and at the age of 13 years an orthopaedic operation was performed. The left breast has grown bigger than the right with mastopathic changes. Her mental development has been mildly delayed and she attended a special school. In the older daughter, vaginal bleeding started at the age of 3 months. After investigations, hormonal medication was given for precocious puberty. Epileptic seizures began at the age of 2½ years. Repeated CT scan of the brain showed a hypothalamic hamartoma, which was thought to be the reason for the precocious puberty through excretion of hypothalamic hormones. She had short stature with a final height of 1.45 m (−3.5 SD) and small hands and feet. The fourth metatarsals were short, especially in the right foot. She attended a special school for handicapped children because of moderate mental retardation and received medication for psychiatric symptoms for a couple of years. In this family, an insertion of AT between nucleotides 1887 and 1888 in exon 16 was detected in all three family members (fig 3). This creates a frameshift resulting in a premature stop codon (TAG) at amino acid position 666 of the OFD1 gene.
In family III, the only patient studied had syndactyly of the fourth and fifth fingers of the left hand that had been operated on at the ages of 5 and 11 years. On ultrasonographic examination, numerous small cysts were detected in both kidneys at the age of 29 years. Functional studies of the kidneys were normal. In this patient a missense mutation G>A at nucleotide 235 in exon 3 was identified (table 1, fig 3). This transition leads to a change of a non-polar amino acid alanine (A) to an uncharged polar amino acid threonine (T). We analysed DNA samples from both parents by minisequencing and no abnormalities were found, indicating that this is a de novo mutation.
In family IV, the index case was first examined at the age of 6 months. The first diagnostic signs were a prominent metopic ridge and a soft nodule (about 0.5 cm in diameter) medially in the right hallux. Psychomotor development has proceeded within normal limits. In this patient, a deletion of A at nucleotide 1409 in exon 13 leading to a frameshift was identified. This mutation results in a premature stop codon (TAG) at position 472. DNA from both parents was analysed and no mutations were found.
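The frameshift mechanism behind mutations such as 1887-1888insAT and 1409delA can be sketched with a toy example: inserting a number of nucleotides that is not a multiple of three shifts the reading frame, so a stop codon (such as TAG) can appear prematurely downstream. The short sequence and helper functions below are purely illustrative, not the real OFD1 sequence:

```python
STOPS = {"TAA", "TAG", "TGA"}

def codons(seq):
    """Split a sequence into in-frame triplet codons (dropping a partial tail)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

def first_stop(seq):
    """Return the 1-based position of the first in-frame stop codon, or None."""
    for i, codon in enumerate(codons(seq), start=1):
        if codon in STOPS:
            return i
    return None

wild_type = "ATGGCACTAGGA"                     # toy sequence: no in-frame stop
mutant = wild_type[:6] + "AT" + wild_type[6:]  # insert "AT" after nucleotide 6

print(first_stop(wild_type))  # None -- reads through cleanly
print(first_stop(mutant))     # 4 -- the frameshift creates a premature TAG
```

Note how the inserted dinucleotide brings the downstream `TAG` into frame, exactly the mechanism by which the exon 13 and exon 16 mutations truncate the protein.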
None of the four mutations was identified in the DNA of 50 anonymous Finnish blood donors screened by minisequencing.
### RNA
The results of the RT-PCR experiments (fig 4) show that in both the patient and the control sample the products generated by RT-PCR amplifying the area flanking the putative novel splice site are of similar size, indicating that the normal sized mRNA could be found in both samples. However, the splice site specific RT-PCR resulted in the identification of the product only in the patient's sample. This indicates that the intronic nucleotide change T >G residing 10 nucleotides from the splice acceptor site of exon 6 generates a false splice site and so is most likely the cause of the disease in this patient.
## DISCUSSION
Eight OFD1 patients have been diagnosed in Finland, a country with a population of about 5 million, during the last 20 years. In all of them, a mutation in the recently identified OFD1 (Cxorf5) gene was found. Two of the mutations were frameshifts resulting in premature stop codons, one was a missense, and one a splice mutation. The clinical features were characteristic in every patient. Interestingly, one of our patients had short fourth metatarsals, similar to a patient described by Ferrante et al.8 Mild or moderate mental retardation was seen in one of the families, in which the two daughters had learning difficulties.
• Oral-facial-digital syndrome type 1 (OFD1) is an X linked dominant disorder characterised by malformations in the face, oral cavity, and digits with a wide phenotypic variation. Recently, mutations in the OFD1 gene (Cxorf5) at Xp22 were found to underlie OFD1. We report here the identification of four novel mutations in the OFD1 gene in the Finnish families, two of which are familial and two sporadic.
• In the familial cases a splice mutation T>G in intron 5 in the mother and her daughter was identified resulting in an abnormal splicing, and in the second family a nonsense mutation 1887-1888insAT in exon 16 was detected in the mother and her two daughters. Analysis of the sporadic cases showed a missense mutation 235G>A in exon 3 and a single nucleotide deletion 1409delA leading to a nonsense mutation in exon 13. Three of the mutations in this study were located in the same exons as in the original study.
• Our study confirms the causative role of the OFD1 gene in the pathogenesis of oral-facial-digital syndrome type 1.
Renal involvement in OFD1 cases may be as high as 50%.10 In three out of eight Finnish patients, polycystic kidney disease was present, and one of them received a new kidney at the age of 44 years. The mutations that were associated with polycystic kidney disease in the Finnish patients were the splice mutation in intron 5 and a missense mutation G>A at nucleotide 235 in exon 3. In the original report by Ferrante et al,8 polycystic kidney disease was associated with mutations not only in exon 3 but also in intron 4. Polycystic kidney disease usually manifests in adulthood, so two of our patients are too young for any conclusions to be drawn about kidney disease.
When analysing the phenotype-genotype correlation concerning mental retardation associated with this syndrome, mild to moderate mental retardation or learning difficulties were reported with mutations in exons 3, 13, and 16, and intron 4 in the original study.8 In this study, only the frameshift mutation in exon 16 was associated with learning difficulties, in two out of three members of the same family. Further studies are needed to establish whether certain mutations are more frequently associated with kidney disease or mental retardation, findings that are important in genetic counselling when predicting the outcome of the disease.
The OFD1 gene contains 23 coding exons (GenBank accession numbers Y15164 and Y16355) with unknown function.11 Interestingly, three of the mutations found in this study are located in the same exons 3, 13, and 16 as the mutations reported in the original study by Ferrante et al,8 suggesting that these exons might represent regions for mutational hot spots. Functional studies of both the wild type OFD1 gene and the mutants are needed to understand the disease mechanism underlying OFD1.
In conclusion, we report here the identification of four novel mutations in the OFD1 gene in seven Finnish patients with oral-facial-digital syndrome type I. Our results confirm the causative role of the OFD1 gene in the pathogenesis of this syndrome.
## Acknowledgments
We are grateful to the patients and their families for their participation in this study. We thank Sirkka Elfving and Eino Puhakainen for encouragement during this study and the personnel of the Laboratory of Molecular Genetics for technical help. Financial support from Helsinki University Hospital Research Funding is acknowledged.
If you wish to reuse any or all of this article please use the link below which will take you to the Copyright Clearance Center’s RightsLink service. You will be able to get a quick price and instant permission to reuse the content in many different ways. | 2018-01-23 01:38:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2884366512298584, "perplexity": 6858.232683220709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891705.93/warc/CC-MAIN-20180123012644-20180123032644-00629.warc.gz"} |
https://help.anaplan.com/ce38460d-e931-403a-837c-d650d0ddaf64 | # NEXT
The NEXT function evaluates an expression based on the next period in the Time dimension.
For example, you can use this function to compare the performance of the current period against the forecast value for the next period.
## Syntax
NEXT(Expression)
## Arguments
| Argument | Data type | Description |
| --- | --- | --- |
| Expression | Number, Boolean, date, time period, list, or text | The expression to return the value from the next period for. |
The NEXT function returns a result of the same data type as the Expression argument (unless the expression resolves to a different data type).
## Syntax example
NEXT(Revenue)
This formula returns the value from the next period for each value of the Revenue line item.
### Values outside of the line item time range
If the NEXT function returns a value from outside of a line item’s time range, it uses a default value. If the line item has a data type of:
• number, NEXT returns a value of 0.
• Boolean, NEXT returns a value of FALSE.
• text, date, or time period, NEXT returns a value of BLANK.
## Constraints
• If the Expression argument has a data type of list or time period, and does not resolve to a different data type, the result line item must have the same data type.
• The result line item you use the NEXT function in must have Time as a dimension.
## Examples
### Forecast profit change by month
| | Jan 21 | Feb 21 | Mar 21 | Apr 21 |
| --- | --- | --- | --- | --- |
| Net Profit | 215,770 | 221,123 | 223,495 | 220,129 |
| Forecast Profit Change = NEXT(Net Profit) - Net Profit | 5,353 | 2,372 | -3,366 | 220,129 |
In this example, an income statement module has line items on rows and time on columns. The Net Profit line item shows the net profit for a business.
In this model, the Actual version changes to Forecast in Apr 21, as Switchover is enabled in Model Settings > Versions. The Forecast value for Apr 21 uses a formula to calculate a forecast based on an average of the values for the past three months. As the current period changes, this formula calculates a forecast profit for the next period.
The formula in the Forecast Profit Change line item uses the NEXT function to retrieve the value of Net Profit from the next period. The Net Profit for the current period is subtracted from the Net Profit for the next period to calculate the change in profit from one month to another. This includes the Forecast value for Apr 21, which can be used to plan ahead. As there is no value for May 21 yet, the final value of the Forecast Profit Change line item is the value of the Net Profit line item for Apr 21.
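The shift semantics of NEXT can be sketched outside Anaplan. The following Python emulation (ours, not Anaplan code) assumes a number-typed line item, so out-of-range periods fall back to the default of 0 described above; the helper name `next_` is hypothetical:

```python
def next_(values, default=0):
    """Emulate NEXT for a number line item: each period takes the next period's value."""
    return [values[i + 1] if i + 1 < len(values) else default
            for i in range(len(values))]

net_profit = [215770, 221123, 223495, 220129]  # Jan 21 .. Apr 21
shifted = next_(net_profit)
change = [nxt - cur for nxt, cur in zip(shifted, net_profit)]

print(shifted)     # [221123, 223495, 220129, 0]
print(change[:3])  # [5353, 2372, -3366] -- first three months of the worked example
```

The final period illustrates the default-value rule: NEXT has no May 21 value to return, so the number fallback of 0 is used.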
Disclaimer
We may update our documentation occasionally, but will only do so in a way that does not negatively affect the features and functionality of the Anaplan service. | 2022-05-25 22:25:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5114623308181763, "perplexity": 1221.1186522005967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662594414.79/warc/CC-MAIN-20220525213545-20220526003545-00230.warc.gz"} |
http://www.talkstats.com/threads/stata-while-command.16045/#post-45706 | # Stata 'while' command
#### duskstar
##### New Member
I'm struggling to take some SAS code and 'translate' it into Stata and am hoping someone can help. I suspect it's a lack of understanding of what Stata is doing, but any thoughts are appreciated. The part of the SAS code I am trying to implement in Stata is:
do while (p>0.5);
ka=ka-1;
p=cdf('binomial',ka,prop,n);
end;
Which I thought in stata was:
while (p>0.5){
replace ka=(ka-1)
replace p=binomial(n,ka,prop)
}
However, I get completely different answers, so obviously the way Stata deals with the loop is different from SAS. Oh, and I know the SAS code is giving the correct answers.
I can post more code if it would be helpful, but I didn't want to bombard the post with lots of code.
#### bukharin
##### RoboStataRaptor
The code you've written will modify p and ka for every observation (ie every row of your dataset), but the "while (p>0.5)" will only look at the first observation (first row) of your dataset. Is that what you want?
#### duskstar
##### New Member
No, I want it to look at p for each row (moving with the bits that are changing, if that makes sense!). Since p will constantly change, how do I tell it to do that? I thought it might be something like p[_n-1] but that didn't appear to solve it.
Thanks for any help!
#### bukharin
##### RoboStataRaptor
So basically you want to run the loop a different number of times for each row. If the reason for doing this is to identify one particular threshold you may be better off using one of Stata's binomial distribution functions (eg invbinomial or binomialtail).
Otherwise you need to run the -while- statement for each row, eg:
Code:
local row=1
while `row'<=_N {
	while (p[`row']>0.5){
		replace ka=(ka-1) in `row'
		replace p=binomial(n,ka,prop) in `row'
	}
	local row = `row' + 1
}
Last edited:
#### duskstar
##### New Member
Thanks, that's quite helpful, though it still doesn't quite solve what I'm doing (and explaining badly). You are right, I am trying to get it to loop until the binomial reaches a certain threshold (in this example, 0.5).
I want the loop to begin again for each row. Each row will have a different n and ka on it to start from. However, initially (until the replace part) every row will have the same p (whatever value is specified prior to the loop). I'm trying to find the values of ka and its corresponding p which are past the threshold (corresponding to the n on that row).
Does that make any sense at all? I hope so, but if not, what you have given so far has been very helpful. It's the replacing-ka part of the code which seems to be the bit going wrong, but your comment about the different binomial functions made me wonder if that was actually the issue (though the other commands don't seem to help if I am implementing them correctly).
#### bukharin
##### RoboStataRaptor
Not sure why the code I gave you didn't work. I tried it on a "dummy" dataset here and it seemed to work fine. The first -while- loop steps through each row of the dataset (from 1 to _N, which is a special system variable indicating the number of rows). The second -while- loop reduces the ka for the current row until p<=0.5 (in that row). Are you sure there's no typo, eg row' instead of `row'? Perhaps it would help if you pasted in your code and output.
I think you're right, the other binomial functions don't seem to allow the determination of a threshold ka.
Code:
. list
+--------------------+
| n ka prop p |
|--------------------|
1. | 20 20 .3 1 |
2. | 20 30 .2 1 |
3. | 20 10 .6 1 |
4. | 20 15 .7 1 |
+--------------------+
. local row=1
. while `row'<=_N {
2. while (p[`row']>0.5){
3. replace ka=(ka-1) in `row'
4. replace p=binomial(n,ka,prop) in `row'
5. }
6. local row = `row' + 1
7. }
<snip>
. list
+---------------------------+
| n ka prop p |
|---------------------------|
1. | 20 5 .3 .4163708 |
2. | 20 3 .2 .4114488 |
3. | 20 9 .6 .1275212 |
4. | 20 13 .7 .3919902 |
+---------------------------+
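For readers who want to check the threshold logic outside Stata, here is a stdlib-only Python sketch of the same decrement-until-below-cutoff loop (illustrative only, not from the thread; the helper names are ours):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p) -- the same quantity as Stata's binomial(n, k, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def threshold_ka(n, prop, ka, p=1.0, cutoff=0.5):
    """Decrement ka until the binomial CDF first drops to the cutoff or below."""
    while p > cutoff:
        ka -= 1
        p = binom_cdf(ka, n, prop)
    return ka, p

ka, p = threshold_ka(n=20, prop=0.3, ka=20)
print(ka, round(p, 4))  # 5 0.4164 -- matches the first row of the listing above
```

This agrees with the dummy dataset's first row (n=20, prop=.3 gives ka=5, p=.4163708), confirming the loop logic.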
#### duskstar
##### New Member
You know what, I just sat and typed out all my code for you, then realised that I am talking absolute nonsense. It works perfectly fine; I was just comparing it to the wrong page in my binomial tables!!
So sorry for that, you have been incredibly helpful going through this with me, its much clearer if your able to explain it to someone! | 2022-01-23 08:52:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49282220005989075, "perplexity": 1718.51381123292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304217.55/warc/CC-MAIN-20220123081226-20220123111226-00297.warc.gz"} |
https://chemistry.stackexchange.com/questions/85589/phases-of-atomic-or-molecular-orbitals | # Phases of Atomic or Molecular Orbitals
What does phase mean in orbitals?
I know that phases are separated by nodes. They are in some way related to wavefunctions, but I can't understand how. How can wavefunctions be negative, given that (as far as I know) they are complex numbers?
• Phase doesn't mean much; the difference in it does. BTW, I vaguely remember hearing that the eigenfunctions for a bounded state can always be chosen so as not to be complex. Nov 10, 2017 at 21:03
• This post of mine may shed some light on your question: sciencemadness.org/talk/…
– Gert
Nov 10, 2017 at 21:50
### Complex Numbers
As you point out, in general a wavefunction is complex-valued. A complex number can be described by two real numbers and hence can be depicted on the 2D complex plane (sometimes called the Argand plane).
Numbers in the complex plane can be expressed in multiple forms. The one usually taught first is as the sum of a real part and imaginary part:
\begin{align} z = x + iy\;{} & ; x,y \in \mathbb{R}\\ \Re(z) = x\; {} & ; \Im(z) = y \\ |z| = {} & \sqrt{x^2+y^2} \end{align}
Alternatively it can be described by a magnitude ($r$) and a phase ($\phi$):
\begin{align} z = r e^{i\phi}\; {} & ;r\in\mathbb{R}^{+0}, \phi\in[0:2\pi)\\ \Re(z) = r \cos(\phi) \; {} &; \Im(z) = r \sin(\phi) \\ |z|{} & = r \end{align}
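In code, the two descriptions correspond to converting between Cartesian and polar forms, which Python's standard `cmath` module does directly (a small illustrative example):

```python
import cmath

z = 3 + 4j
r, phi = cmath.polar(z)       # magnitude r and phase phi
print(r)                      # → 5.0
print(z.real, z.imag)         # → 3.0 4.0

# rect(r, phi) rebuilds x + iy from r*e^{i*phi} (up to rounding)
w = cmath.rect(r, phi)
print(abs(w - z) < 1e-9)      # → True
```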
### Molecular Orbitals
Two wavefunctions that differ only in overall phase have the same probability density, but when we begin to combine wavefunctions, the phases dictate how much they constructively or destructively interfere. We can take linear combinations of atomic orbitals to make molecular orbitals.
Consider adding together two adjacent wavefunctions with opposite phase on different atoms: the adjacent regions of different phase cancel one another out. When the probability density is examined, we can see that the chance of the particle being between the two atoms is very low and that it is localised around each nucleus. This would be an anti-bonding molecular orbital.
On the other hand, when we add together two adjacent wavefunctions with the same phase on different atoms, the adjacent regions of similar phase add together. When the probability density is examined, we can see that the chance of the particle being between the two atoms is much higher and that it is localised in a bond. This would be a bonding molecular orbital.
This is why we talk about in phase and out of phase orbital when talking about bonding - the regions of in phase overlap of wavefunctions increases the probability that a particle will be found in there, and out of phase overlap decreases the probability.
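The interference picture can be illustrated with a rough one-dimensional numerical sketch (Gaussians standing in for atomic orbitals on two nuclei; normalisation is ignored, and the functions are my own illustrative choice):

```python
import numpy as np

x = np.linspace(-6, 6, 1201)
# 1-D Gaussians as stand-ins for atomic orbitals on nuclei at x = -1.5 and x = +1.5
phi_a = np.exp(-(x + 1.5) ** 2)
phi_b = np.exp(-(x - 1.5) ** 2)

bonding = phi_a + phi_b        # in-phase combination
antibonding = phi_a - phi_b    # out-of-phase combination

mid = len(x) // 2              # midpoint between the nuclei (x = 0)
print(bonding[mid] ** 2 > antibonding[mid] ** 2)   # → True
print(abs(antibonding[mid]) < 1e-12)               # → True: node between the nuclei
```

The in-phase combination has a large probability density midway between the nuclei, while the out-of-phase combination has an exact node there.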
The physical result is the wide variety of bonding phenomena we see in nature:
• covalent bonds: local in-phase overlap
• anti-bonding orbitals: local out-of-phase overlap
• conjugation: extended in-phase overlap, etc.
https://hal.inria.fr/hal-01185579 | # Partial Quicksort and Quickpartitionsort
Abstract : Partial Quicksort sorts the $l$ smallest elements in a list of length $n$. We provide a complete running time analysis for this combination of Find and Quicksort. Further we give some optimal adapted versions, called Partition Quicksort, with an asymptotic running time $c_1l\ln l+c_2l+n+o(n)$. The constant $c_1$ can be as small as the information theoretic lower bound $\log_2 e$.
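For readers unfamiliar with the algorithm, here is a minimal sketch of Partial Quicksort (our own illustrative implementation, not the authors' code): it is ordinary quicksort with a randomized Lomuto partition, except that a right subfile is skipped whenever it lies entirely beyond position $l-1$.

```python
import random

def partition(a, lo, hi):
    """Lomuto partition around a random pivot; returns the pivot's final index."""
    r = random.randint(lo, hi)
    a[r], a[hi] = a[hi], a[r]
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def partial_quicksort(a, l, lo=0, hi=None):
    """Rearrange a so that a[0:l] holds the l smallest elements in sorted order."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        partial_quicksort(a, l, lo, p - 1)
        if p < l - 1:               # recurse right only if needed for the prefix
            partial_quicksort(a, l, p + 1, hi)

data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
partial_quicksort(data, 4)
print(data[:4])  # → [0, 1, 2, 3]
```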
### Citation
Conrado Martínez, Uwe Rösler. Partial Quicksort and Quickpartitionsort. 21st International Meeting on Probabilistic, Combinatorial, and Asymptotic Methods in the Analysis of Algorithms (AofA'10), 2010, Vienna, Austria. pp.505-512, ⟨10.46298/dmtcs.2781⟩. ⟨hal-01185579⟩
https://mathematicalypse.wordpress.com/2016/03/28/linear-assignment-6/ | Linear Assignment 6
Background
The point of this assignment is to review and further develop many of the ideas explored in the previous assignment, namely concepts concerning the Rank-Nullity Theorem, isomorphisms between vector spaces, matrix representations of homomorphisms, etc.
This material comes primarily from Chapter III, pages 202-263. (Our upcoming exam will cover material from Chapters I, II, and III.)
The Actual Assignment
Problem 1. Suppose $h \in \mathcal{L}(V, W)$ is both one-to-one and onto (i.e. that $h$ is an isomorphism between vector spaces). Prove that the inverse function $h^{-1}:W\to V$ is a linear map.
Note: the definition of an inverse function is more descriptive than formula-based. One defines
$h^{-1}(\vec{w}) = \vec{v} \, \iff \, \vec{w} = h(\vec{v})$.
Problem 2. (a) Let $\text{id}_{\mathbb{R}^n}$ denote the identity map $\text{id}_{\mathbb{R}^n}:\mathbb{R}^n \to \mathbb{R}^n$, which was proven to be a homomorphism in your previous assignment. Compute the matrix representation of this map with respect to the standard bases $\mathcal{E}_n$ (for both the domain and co-domain space). That is, compute
$\text{Rep}_{\mathcal{E}_n, \mathcal{E}_n} \, \left(\text{id}_{\mathbb{R}^n}\right) =$
The matrix you obtain is called the $n\times n$ identity matrix, and is one you’ve likely seen in previous classes. It is often denoted by $I_n$.
(b) Compute a different matrix representation of the same identity map for $\mathbb{R}^2$, only this time use the basis $\mathcal{B} = \{\,(1,1)^T, (1,0)^T \}$ and the basis $\mathcal{D} = \{\,(0,2)^T, (1, 5)^T\,\}$.
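As a numerical sanity check for Problem 2(b) (illustrative NumPy, not part of the assignment): column $j$ of $\text{Rep}_{\mathcal{B},\mathcal{D}}(\text{id})$ is the $\mathcal{D}$-coordinate vector of the $j$-th $\mathcal{B}$ vector, i.e. the solution of $D\,c = b_j$.

```python
import numpy as np

# Basis vectors as columns: B = {(1,1)^T, (1,0)^T}, D = {(0,2)^T, (1,5)^T}
B = np.array([[1.0, 1.0],
              [1.0, 0.0]])
D = np.array([[0.0, 1.0],
              [2.0, 5.0]])

# Solve D @ rep = B for all columns at once
rep = np.linalg.solve(D, B)
print(rep)  # the matrix [[-2, -2.5], [1, 1]]

# Sanity check: D times the coordinate columns reproduces the B vectors
assert np.allclose(D @ rep, B)
```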
Problem 3. On page 213 our textbook discusses how, conversely, every $m \times n$ matrix can be thought of as representing a homomorphism from an $n$-dimensional space to an $m$-dimensional space.
In Example 2.3, we find the $2\times 2$ matrix
$H = \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right)$.
As part of this example, the book notes that two functions $h_1$ and $h_2$ can be represented by the single matrix $H$ via the following equations:
$\text{Rep}_{\mathcal{B}_1,\mathcal{D}_1} \left(h_1\right) = H$
$\text{Rep}_{\mathcal{B}_2,\mathcal{D}_2}\left(h_2\right) = H$.
(a) Explain in your own words why these two functions are not equal, i.e. that $h_1 \neq h_2$ as functions $h_i:\mathbb{R}^2 \to \mathbb{R}^2$.
(b) Did the matrix $H$ have to represent homomorphisms in $\mathcal{L}(\mathbb{R}^2, \mathbb{R}^2)$ or could it have been used to represent homomorphisms between other vector spaces? If yes, explain why; if not, provide an example.
Problem 4. Read and explain Corollary 2.6 (on page 216) in your own words.
Problem 5. (a) What does it mean to say that a linear map is non-singular?
(b) Is it possible that the $2\times 2$ matrix
$H = \left(\begin{array}{cc} 1 & 2 \\ 2 & 5 \end{array}\right)$
represents a non-singular homomorphism?
(c ) Is it possible that the matrix
$H = \left(\begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \end{array}\right)$
represents a non-singular homomorphism?
Problem 6. Read the following proof and then write down the theorem or Lemma that it demonstrates.
Theorem: ______________________________________________
proof: Suppose $h \in \mathcal{L}(V, W)$ and that $g \in \mathcal{L}(W, U)$ where $V, W,$ and $U$ are all vector spaces. By definition, the function $g\circ h : V \to U$.
Let $\vec{v}_1$ and $\vec{v}_2$ be arbitrary vectors in $V$. Then
$\left(g\circ h\right)(\vec{v}_1+\vec{v}_2) = g\left(\,h(\vec{v}_1+\vec{v}_2)\,\right) = g\left(h(\vec{v}_1) + h(\vec{v}_2)\right) = g(h(\vec{v}_1)) + g(h(\vec{v}_2)) = (g\circ h)(\vec{v}_1) + (g\circ h)(\vec{v}_2)$
since both $g$ and $h$ are assumed to be linear. Similarly, let $\vec{v} \in V$ be arbitrary and let $c \in \mathbb{R}$ be arbitrary. Then
$\left(g\circ h\right)(c\cdot\vec{v}) = g\left(\,h(c\cdot\vec{v})\,\right) = g\left(c\cdot h(\vec{v})\right) = c\cdot g(h(\vec{v})) = c\cdot\left(g\circ h\right)(\vec{v}).$
This completes the proof. $\square$
Problem 7. Explain why matrix multiplication is defined the way it is (use words like “represent” and “composition” in your explanation).
Problem 8. Consider the matrix
$M = \left( \begin{array}{cc} 2 & 4 \\ 1 & 3 \end{array}\right)$.
(a) Confirm that the inverse of this matrix is given by
$M^{-1} = \left( \begin{array}{cc} 3/2 & -2 \\ -1/2 & 1 \end{array}\right)$.
(b) Consider the homomorphism $h:\mathbb{R}^2\to\mathbb{R}^2$ given by
$h(x, y) = (2x + 4y, x + 3y)^T$.
Find a formula for $h^{-1}$ (assuming it exists.)
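A quick numerical confirmation of part (a) and of the resulting formula for $h^{-1}$ (illustrative, not part of the assignment): since $\text{Rep}(h) = M$, the inverse map is represented by $M^{-1}$.

```python
import numpy as np

M = np.array([[2.0, 4.0],
              [1.0, 3.0]])
M_inv = np.array([[ 1.5, -2.0],
                  [-0.5,  1.0]])

# (a) confirm that M_inv really inverts M
assert np.allclose(M @ M_inv, np.eye(2))

# (b) the inverse map is h^{-1}(x, y) = (1.5x - 2y, -0.5x + y)
x, y = 7.0, 3.0
u, v = 2 * x + 4 * y, x + 3 * y          # apply h
assert np.allclose(M_inv @ np.array([u, v]), [x, y])  # h^{-1} undoes h
```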
Problem 9. Hi! | 2017-05-29 22:48:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 50, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9507466554641724, "perplexity": 550.7585807647741}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463613135.2/warc/CC-MAIN-20170529223110-20170530003110-00202.warc.gz"} |
https://physics.stackexchange.com/questions/672148/proof-that-non-abelian-berry-phase-vanishes-identically | ‘Proof’ that non-Abelian Berry phase vanishes identically
For a degenerate system with Hamiltonian $$H =H(\mathbf{R})$$ and eigenstates $$\left|n(\mathbf{R})\right\rangle$$ the non-Abelian Berry connection is $$A^{(mn)}_i=\mathrm{i}\left\langle m|\partial_in\right\rangle\tag{1}$$ and the non-Abelian Berry curvature is $$F_{ij} = \partial_i A_j-\partial_j A_i - \mathrm{i}\left[A_i, A_j\right]$$ in matrix notation or, including the state indices: $$F_{ij}^{(mn)} = \partial_i A_j^{(mn)}-\partial_j A_i^{(mn)}-\mathrm{i}\sum_k\left( A_i^{(mk)}A_j^{(kn)}-A_j^{(mk)}A_i^{(kn)}\right).\tag{2}$$ Substituting (1) into (2) gives \begin{align} F_{ij}^{(mn)} &= \mathrm{i}\partial_i\left\langle m|\partial_jn\right\rangle- \mathrm{i}\partial_j\left\langle m|\partial_in\right\rangle -\mathrm{i}\sum_k\left(\mathrm{i}^2\left\langle m|\partial_i k\right\rangle\left\langle k|\partial_j n\right\rangle-\mathrm{i}^2\left\langle m|\partial_j k\right\rangle\left\langle k|\partial_i n\right\rangle\right)\\ &=\mathrm{i}\left\langle \partial_i m|\partial_jn\right\rangle+\mathrm{i}\left\langle m|\partial_i\partial_jn\right\rangle -\mathrm{i}\left\langle \partial_j m|\partial_in\right\rangle-\mathrm{i}\left\langle m|\partial_j\partial_in\right\rangle +\mathrm{i}\langle m|\left(\sum_k|\partial_i k\rangle\langle k|\right)|\partial_j n\rangle-\mathrm{i}\langle m|\left(\sum_k|\partial_j k\rangle\langle k|\right)|\partial_i n\rangle\\ &=\mathrm{i}\left\langle \partial_i m|\partial_jn\right\rangle -\mathrm{i}\left\langle \partial_j m|\partial_in\right\rangle +\mathrm{i}\langle m|\left(-\sum_k| k\rangle\langle \partial_ik|\right)|\partial_j n\rangle-\mathrm{i}\langle m|\left(-\sum_k|k\rangle\langle \partial_jk|\right)|\partial_i n\rangle\\ &=\mathrm{i}\left\langle \partial_i m|\partial_jn\right\rangle -\mathrm{i}\left\langle \partial_j m|\partial_in\right\rangle -\mathrm{i}\sum_k \underbrace{\langle m| k\rangle}_{\delta_{mk}}(\langle \partial_ik|\partial_j n\rangle-\langle \partial_jk|\partial_i n\rangle)\\ &=\mathrm{i}\left\langle \partial_i 
m|\partial_jn\right\rangle -\mathrm{i}\left\langle \partial_j m|\partial_in\right\rangle- \mathrm{i}\left\langle \partial_i m|\partial_jn\right\rangle+\mathrm{i}\left\langle \partial_j m|\partial_in\right\rangle\\ &=0. \end{align} To go from the second to the third line I used $$0=\partial_i(1) = \partial_i\left(\sum_k |k\rangle\langle k| \right) = \sum_k |\partial_i k\rangle\langle k|+\sum_k |k\rangle\langle\partial_i k|.$$ Clearly I have done something wrong as the Berry curvature is not zero in all cases. Please could someone point out my error?
• I am not 100% sure about the resolution, but I am skeptical about the identity you use in the last line. Let's say the parameter of the curve you are following is $\lambda$. The state is in some degenerate subspace of the spectrum as it travels along a closed path. Now I agree you can choose an orthonormal basis $|k\rangle$ at some fixed $\lambda$ obeying the resolution of the identity. But are you sure that as you vary $\lambda$, these states remain an orthonormal basis? You need $\sum_k |k(\lambda) \rangle \langle k(\lambda) | =1$ to be true for all $\lambda$ for your identity to hold. Oct 24 at 4:13
• To be clear, it's not that the evolution will take you out of the subspace. You can always choose an orthonormal basis for every $\lambda$. My question is, if you take a set of states for a given $\lambda$, and then evolve those states adiabatically as you vary $\lambda$, will they remain an orthonormal basis? Oct 24 at 4:16
Your calculation is correct. I believe the problem is that the non-Abelian Berry curvature is useful only when it is defined on a sub-Hilbert space, and that subspace is not the same at different parameters R. In that case the identity you used at the end no longer holds: the sum over the kept states gives the projector onto the subspace, not the identity.
• Do you have a source for any more details on this? Oct 26 at 9:34
• Sorry, I'm not aware of any good references for this. But inspired by your question, I find it more intriguing to write the non-Abelian Berry curvature in the following form: $$\sum_{m,n}\Omega_{ij}^{(mn)}|m \rangle \langle n|=\mathrm{i}\,\hat{P}\left(\partial_{i}\hat{P}\,\partial_{j}\hat{P}-\partial_{j}\hat{P}\,\partial_{i}\hat{P}\right)\hat{P},$$ where $\hat{P}$ is the projection operator onto the sub-Hilbert space where the Berry curvature is defined.
– Leon
Nov 6 at 17:36
To expand on Leon's answer, it may be helpful to think in analogy with the usual $$U(1)$$ gauge theory. We can think of the non-abelian Berry connection as an $$SU(n)$$ gauge field, where $$n$$ is the number of bands. Our gauge transformations are a change of basis by unitary matrix $$U(k)$$ and the connection and curvature transform as $$A' = U^\dagger A U + i U^\dagger\partial_k U\\ F' = U^\dagger F U$$ respectively.
In matrix notation we may write the Berry connection $$A = i M^\dagger \partial_k M$$ where $$M$$ is a matrix of the eigenvectors, $$M_{ij} = |j\rangle_i.$$ If $$M$$ is unitary, corresponding to the condition $$\sum_k |k\rangle\langle k| = 1$$, we may perform a change of basis which takes $$A'\to0$$ (explicitly we change basis by $$M$$), and therefore $$F$$ must be zero since it is a gauge-covariant quantity. That is if we keep all bands, every Berry connection is "pure gauge". However, if we consider a sub-space of the bands, then $$M$$ is instead a projector onto those bands $$M = \sum_{k=1}^{n < N } |k\rangle\langle k|$$ where $$N$$ is the total number of bands, and in general there is not guaranteed to be a change of basis which nullifies $$A$$. Thus, $$F$$ can be non-zero in this subspace.
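The claim that $F$ vanishes when all bands are kept, but not in a sub-space, can be checked numerically. The finite-difference sketch below is my own (the spin-1/2 model, step size, and parameter point are arbitrary choices): with both columns of the eigenvector unitary kept, the curvature vanishes to numerical precision, while the abelian curvature of the first column alone equals the familiar $-\tfrac{1}{2}\sin\theta$.

```python
import numpy as np

h = 1e-4  # finite-difference step

def Mf(th, ph):
    """Unitary whose columns are the two spin-1/2 eigenstates at angles (th, ph)."""
    c, s = np.cos(th / 2), np.sin(th / 2)
    return np.array([[c, -s * np.exp(-1j * ph)],
                     [s * np.exp(1j * ph), c]])

def dM(i, th, ph):
    d = (h, 0) if i == 0 else (0, h)
    return (Mf(th + d[0], ph + d[1]) - Mf(th - d[0], ph - d[1])) / (2 * h)

def A(i, th, ph):
    """Non-abelian Berry connection A_i = i M^dagger d_i M."""
    return 1j * Mf(th, ph).conj().T @ dM(i, th, ph)

def dA(i, j, th, ph):
    d = (h, 0) if i == 0 else (0, h)
    return (A(j, th + d[0], ph + d[1]) - A(j, th - d[0], ph - d[1])) / (2 * h)

th, ph = 1.0, 0.5
A0, A1 = A(0, th, ph), A(1, th, ph)
F_full = dA(0, 1, th, ph) - dA(1, 0, th, ph) - 1j * (A0 @ A1 - A1 @ A0)
print(np.abs(F_full).max())  # ≈ 0: with all bands kept, A is pure gauge

# Abelian curvature of the first column (one band) alone:
def a(i, th, ph):
    psi, dpsi = Mf(th, ph)[:, 0], dM(i, th, ph)[:, 0]
    return (1j * psi.conj() @ dpsi).real

def da(i, j, th, ph):
    d = (h, 0) if i == 0 else (0, h)
    return (a(j, th + d[0], ph + d[1]) - a(j, th - d[0], ph - d[1])) / (2 * h)

f_band = da(0, 1, th, ph) - da(1, 0, th, ph)
print(f_band)  # ≈ -0.5*sin(1.0): non-zero in the sub-space
```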
http://clay6.com/qa/21339/the-distance-of-the-line-overrightarrow-r-2-hat-i-2-hat-j-3-hat-k-lambda-ha | Browse Questions
# The distance of the line $\overrightarrow r=2\hat i-2\hat j+3\hat k+\lambda(\hat i-\hat j+4\hat k)$ from the plane $x+5y+z=5$ is ?
$\begin{array}{1 1} (a)\:\:\large\frac{10}{3\sqrt 3}\:\:\:\qquad\:\:(b)\:\:\frac{10}{9}\:\:\:\qquad\:\:(c)\:\:\frac{10}{3}\:\:\:\qquad\:\:(d)\:\:\frac{3}{10} \end{array}$
Toolbox:
• Distance between any line (parallel to the plane) and the plane is $\perp$ distance of any point on the line from the plane.
• $\perp$ distance of any point $(x_1,y_1,z_1)$ from the plane $ax+by+cz+d=0$ is $\bigg|\large\frac{ax_1+by_1+cz_1+d}{\sqrt {a^2+b^2+c^2}}\bigg|$.
The direction of the line is $\hat i-\hat j+4\hat k$ and the normal of the plane is $\hat i+5\hat j+\hat k$; their dot product is $1-5+4=0$, so the line is parallel to the plane. A point on the given line $\overrightarrow r=(2\hat i-2\hat j+3\hat k)+\lambda(\hat i-\hat j+4\hat k)$ is $A(2,-2,3)$
$\perp$ distance of $A$ from the plane $x+5y+z-5=0$ is $\bigg|\large\frac{2-10+3-5}{\sqrt {1+25+1}}\bigg|$
$=\large\frac{10}{3\sqrt 3}$ | 2017-02-20 13:18:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7218619585037231, "perplexity": 147.1889192718451}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170562.59/warc/CC-MAIN-20170219104610-00600-ip-10-171-10-108.ec2.internal.warc.gz"} |
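A quick numerical check of the answer (an illustrative NumPy sketch, not part of the original solution):

```python
import numpy as np

n = np.array([1.0, 5.0, 1.0])      # normal of x + 5y + z - 5 = 0
d = -5.0
A = np.array([2.0, -2.0, 3.0])     # point on the line
u = np.array([1.0, -1.0, 4.0])     # direction of the line

assert np.isclose(n @ u, 0.0)      # the line is parallel to the plane
dist = abs(n @ A + d) / np.linalg.norm(n)
print(np.isclose(dist, 10 / (3 * np.sqrt(3))))  # → True
```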
https://dmtcs.episciences.org/4551 | ## Borchert, Adam and Rampersad, Narad - Permutation complexity of images of Sturmian words by marked morphisms
dmtcs:4551 - Discrete Mathematics & Theoretical Computer Science, June 4, 2018, Vol. 20 no. 1
We show that the permutation complexity of the image of a Sturmian word by a binary marked morphism is $n+k$ for some constant $k$ and all lengths $n$ sufficiently large.
Source : oai:arXiv.org:1710.08820
Volume: Vol. 20 no. 1
Section: Combinatorics
Published on: June 4, 2018
Submitted on: November 2, 2017
Keywords: Mathematics - Combinatorics,Computer Science - Formal Languages and Automata Theory,68R15 | 2018-06-21 23:44:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36802372336387634, "perplexity": 2930.4317215264327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864303.32/warc/CC-MAIN-20180621231116-20180622011116-00364.warc.gz"} |
https://lab-training.com/2017/12/19/laboratory-distillation-process-will-meet-requirement/ | Which Laboratory Distillation Process will meet my requirement?
Distillation is a common laboratory practice used to isolate and purify liquids on the basis of their volatilities characterized by boiling point differences. Distillation, however, results only in partial isolation of liquids present in mixtures.
It is important to be clear about the concept of boiling point, as well as the boiling points of the constituents of a liquid mixture. The boiling point of a liquid is the temperature at which its vapour pressure equals the atmospheric pressure. For a mixture, the partial pressures of all the components contribute to the overall vapour pressure, so the liquid mixture has a single boiling point rather than several. The proportion of each component in the vapour depends on its volatility: the vapour contains all the components of the mixture, but the proportion of the more volatile component is highest.
The common laboratory distillation processes are:
• Simple Distillation
• Fractional Distillation
• Steam distillation
• Vacuum Distillation
• Azeotropic Distillation
The different types of distillations are briefly discussed in the article.
Simple Distillation
Simple distillations are useful in resolving components of liquid mixtures having boiling points differing by $$25-30^\circ C$$. The vapour generated is led through a condenser to a collection vessel where it is liquefied. The liquefied component will not be 100% pure but will have a higher proportion of the more volatile component. The distillation should be stopped when the temperature of the vapour begins to rise above the boiling point of the required component. For liquids having closer boiling points the collected liquid can be subjected to one or more successive stages of distillation.
Fractional Distillation
Fractional distillation is essentially the same as simple distillation and is useful when the boiling points of the constituent liquids lie within a narrower range. The distillation process is repetitive but takes place inside the fractionating column. It saves time over repeated simple distillations and is more efficient, because the same re-distillation is effected repeatedly inside the fractionating column.
The rising vapour recondenses on the rings, plates or packings inside the fractionating column and is revaporized by the hot vapour rising from below. Each vaporization-condensation cycle enriches the vapour in the more volatile component.
Steam Distillation
Steam distillation is preferred over fractional distillation for the separation of heat-sensitive compounds, which can degrade on the uncontrolled heating of the solid supports inside the fractionating column. Steam serves the same purpose in a more controlled manner. However, the collected distillate separates into two layers, an oily layer and a water layer, which require further steps to isolate the pure compound.
Vacuum Distillation
Compounds having very high boiling points require high temperatures for distillation to be effective. The stability of the compound can be preserved if the pressure inside the apparatus is lowered instead of raising the temperature: lowering the pressure lowers the boiling point of the higher-boiling component. Vacuum distillation is generally carried out by making use of a rotary evaporator.
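As a rough illustration of why lowering the pressure works, the Clausius-Clapeyron relation estimates the new boiling point at reduced pressure (this sketch assumes a constant enthalpy of vaporisation, which is only approximately true; the numbers below are for water):

```python
import math

def bp_at_pressure(T1, P1, P2, dH_vap):
    """Clausius-Clapeyron estimate of the boiling point T2 (K) at pressure P2,
    given boiling point T1 (K) at P1 and an assumed-constant enthalpy of
    vaporisation dH_vap in J/mol. Only the pressure ratio P2/P1 matters."""
    R = 8.314  # gas constant, J/(mol*K)
    return 1.0 / (1.0 / T1 - R * math.log(P2 / P1) / dH_vap)

# Water boils at 373 K under 1 atm; dH_vap is roughly 40.7 kJ/mol
T2 = bp_at_pressure(373.0, 1.0, 0.1, 40_700.0)
print(T2 - 273.15)  # roughly 44 C: a tenth of an atmosphere cuts ~56 C off the boiling point
```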
Azeotropic Distillation
Interactions at the molecular level in liquid mixtures can alter the boiling point of the mixture: it boils at a temperature which is lower or higher than the boiling points of the constituent pure compounds. On reaching the boiling point, a vapour of constant composition results, which distils over without separation of the original components. A common example is ethanol and water, which form an azeotrope of 96.5% ethanol boiling at $$78.1^\circ C$$. The azeotropic composition can be altered if further purification is required: a drying agent, such as potassium carbonate, can be added to remove the residual water and distil the ethanol to higher purity levels.
The same laboratory scale preparations form the basis of large scale industrial distillation processes which provide separations of tons of liquid fractions on daily basis. Such industrial separations are common in petroleum, petrochemicals and organic chemical industries.
https://www.linstitute.net/archives/733971 | # Edexcel A Level Chemistry:复习笔记6.1.4 Conventional Cell Representation
### Conventional Cell Representation
#### Conventional Representation of Cells
• As it is cumbersome and time-consuming to draw out every electrochemical cell in full, a system of notation is used which describes the cell in full, but does not require it to be drawn.
• An electrochemical cell can be represented in a shorthand way by a cell diagram (sometimes called cell representations or cell notations)
The conventional representation of voltaic cells
• By convention, the half cell with the more negative potential is written on the left of the salt bridge, so Eθcell = Eθright – Eθleft
• In this case, Eθcell = +0.34 – (–0.76) = +1.10 V.
• The left cell is being oxidized while the right is being reduced
• If there is more than one species in solution, and the species are on different sides of the half-equation, the different species are separated by a comma
• This method of representing electrochemical cells is known as the conventional representation of a cell, and it is widely used
• If both species in a half reaction are aqueous then an inert platinum electrode is needed which is recorded on the outside of the half cell diagram
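The convention can be captured in a tiny calculation (an illustrative sketch; the potentials are the standard values for the zinc and copper half cells of the Daniell cell):

```python
# Standard electrode potentials in volts (vs the standard hydrogen electrode)
E_zn = -0.76   # Zn2+(aq) + 2e- <=> Zn(s)
E_cu = +0.34   # Cu2+(aq) + 2e- <=> Cu(s)

# The more negative half cell is written on the left (it is oxidised):
E_left, E_right = min(E_zn, E_cu), max(E_zn, E_cu)
E_cell = E_right - E_left
print(round(E_cell, 2))  # → 1.1 (volts)
```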
#### Some Examples
• For the iron(II) and iron(III) half cell reaction a platinum electrode is needed as an electron carrier
• The half equation is
Fe3+(aq) + e- ⇌ Fe2+(aq)
• So the cell convention as a left hand electrode would be
Pt | Fe2+(aq), Fe3+(aq)
• Notice the order must be Fe(II) then Fe(III) as the left side is an oxidation reaction, so Fe(II) is oxidised to Fe(III) by the loss of an electron
• The platinum electrode is separated by the phase boundary (vertical solid line), but the iron(II) and iron(III) are separated by a comma since they are in the same phase
• Non-metals will also require a platinum electrode
• If chlorine is used as an electrode the reduction reaction is
Cl2(g) + 2e- ⇌ 2Cl-(aq)
• The conventional representation of the half reaction would be
Cl2 (g), 2Cl- (aq) | Pt
• Notice that the half cell reaction is balanced; however, it would be also correct to write it as
Cl2 (g), Cl- (aq) | Pt
• This is because conventional cell diagrams are not quantitative- they are just representations of the materials and redox processes going on
• Most chemists tend to show them balanced anyway
• Combining these two half cells together gives
Pt | Fe2+(aq), Fe3+(aq) ∥ Cl2 (g), 2Cl- (aq) | Pt
• As you can see the overall cell diagram is not quantitative as the left side is a one electron transfer and the right side is a two electron transfer | 2022-09-26 03:28:49 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8042291402816772, "perplexity": 1486.065170225709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00490.warc.gz"} |
https://hsm.stackexchange.com/questions/8167/how-did-newton-and-leibniz-interpret-the-integral/8168 | # How did Newton and Leibniz interpret the integral?
How did Newton and Leibniz think about the integral? Did they only see it as an anti-derivative or did they also think of it as the area under a curve?
• They thought of it both ways. In fact this is their main discovery: that anti-derivative is related to the area. – Alexandre Eremenko Jan 15 '19 at 19:25
## 2 Answers
It was known even before Newton and Leibniz that areas under curves can be found by inverting the "computation of derivatives" (drawing tangents). In explicit geometric form this "fundamental theorem of calculus" was derived by Newton's teacher Barrow, see Barrow's Fundamental Theorem by Wagner.
Newton and Leibniz developed explicit symbolic methods for computing derivatives (isolated from the general process of drawing tangents), and anti-derivatives. To the extent that they conceptualized integral as such, it was identified with the anti-derivatives (what we call the indefinite integral), but, of course, it was used as a tool for computing areas, among other things. Euler still thought of it this way in 18th century. The idea of the definite integral does not appear until Cauchy in the 19th, see Kallio's History of the Definite Integral.
Newton proves the upper and lower sums converge when the function (actually curve) is positive and monotonic (decreasing) in Lemmas II, III in the Principia. Newton does not formulate the process in terms of integration per se, nor in terms of functions. It is a solely geometrical process. The proof is equally valid if the function is negative or increasing; and it is easily extended to the case where the function is piecewise monotonic (i.e., a finite number of monotonic pieces). Newton does not make these trivial extensions.
According to Kallio's description, which @Conifold references, Fermat's work resembles Newton's, although Fermat does not seem to bound the area with inscribed and circumscribed rectangles but just uses (something like a limit of) one sum of rectangles. Fermat also works algebraically, whereas Newton works geometrically in the Principia. Also, it seems Newton did not develop the limit of sums further as integration; at least, according to Boyer, as cited by Kallio, later work was in terms of anti-derivatives, as @Conifold says.
https://askdev.io/questions/3045/on-increasing-quaternion-matrices | # On increasing quaternion matrices
Both matrix multiplication and quaternion multiplication are non-commutative; hence the use of terms like "premultiplication" and "postmultiplication". After encountering the concept of "quaternion matrices", I am a bit puzzled about how one might multiply two of these things, since there are at least four ways to do it.
Some searching turned up this paper, but since I have no access to it, I have no way toward enlightenment except to ask this question here.
If there are indeed these four ways to multiply quaternion matrices, how does one decide which one to use in a given situation, and what shorthand could be used to talk about a particular version of the multiplication?
2019-05-06 21:43:35
I suppose I should expand my comment into an answer. Given two matrices $a_{ij}$ and $b_{ij}$ with entries in any (associative) ring $R$, the natural definition of the product has entries
$\displaystyle c_{ij} = \sum_k a_{ik} b_{kj}.$
This multiplication is associative, and it also agrees with the multiplication one gets from any finite-dimensional matrix representation of $R$ by replacing each entry with the corresponding matrix.
I do not see any particular reason to consider a different notion of multiplication. Reversing some of the individual products seems silly to me, and multiplying everything in the opposite order gives you essentially the same multiplication.
This definition does not agree with the definition in my first comment; multiplication by one of the above matrices does not define an $R$-module homomorphism when $R$ is noncommutative.
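For concreteness, here is a small Python sketch (mine, not from the original answer) of this definition, with quaternion entries stored as $(w, x, y, z)$ tuples and the Hamilton product kept in the stated left-to-right order $a_{ik} b_{kj}$:

```python
def qmul(a, b):
    # Hamilton product of quaternions a = (w1,x1,y1,z1), b = (w2,x2,y2,z2);
    # this product is noncommutative, so argument order matters.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qadd(a, b):
    return tuple(p + q for p, q in zip(a, b))

def qmatmul(A, B):
    # c_ij = sum_k a_ik * b_kj, with the left/right order kept fixed
    n, m, p = len(A), len(B), len(B[0])
    C = [[None] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            acc = (0.0, 0.0, 0.0, 0.0)
            for k in range(m):
                acc = qadd(acc, qmul(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

# The order inside a_ik * b_kj matters: i*j = k, but j*i = -k
i_q = (0, 1, 0, 0)
j_q = (0, 0, 1, 0)
print(qmul(i_q, j_q))  # (0, 0, 0, 1)  = k
print(qmul(j_q, i_q))  # (0, 0, 0, -1) = -k
```

The last two lines show why fixing the order of the factors is the whole content of the convention: flipping them changes the sign of the product.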
2019-05-10 08:53:14
This gives quite an intuitive idea of what is going on:
http://plus.maths.org/content/curious-quaternions
2019-05-09 10:58:30
... however I am only allowed to post one link per answer, so here is the follow-up http://plus.maths.org/content/ubiquitous-octonions
2019-05-09 10:58:11
https://planetmath.org/2groupoid | # 2-groupoid
###### Definition 0.1.
A 2-groupoid is a 2-category whose morphisms are all invertible, that is, one in which each $1$-arrow (morphism) is invertible with respect to the composition of morphisms.
###### Remark 0.1.
Note, however, that $\omega$-groupoid has a meaning distinct from that of $\omega$-category.
An important reason for studying $2$-categories, and especially $2$-groupoids, is to use them as coefficient objects for non-Abelian cohomology theories. Thus, some double groupoids defined over Hausdorff spaces that are non-Abelian (or non-commutative) are relevant to non-Abelian Algebraic Topology (NAAT) and NAQAT (or NA-QAT).
One needs to distinguish between a 2-groupoid and a double-groupoid as the two concepts are very different. Interestingly, some double groupoids defined over Hausdorff spaces that are non-Abelian (or non-commutative) have true two-dimensional geometric representations with special properties that allow generalizations of important theorems in algebraic topology and higher dimensional algebra, such as the generalized van Kampen theorem with significant consequences that cannot be obtained through Abelian means.
Furthermore, whereas the definition of an $n$-groupoid is a straightforward generalization of a 2-groupoid, the notion of a multiple groupoid is not at all an obvious generalization or extension of the concept of double groupoid.
Title: 2-groupoid
Canonical name: 2groupoid
Date of creation: 2013-03-22 19:21:09
Last modified on: 2013-03-22 19:21:09
Owner: bci1 (20947)
Last modified by: bci1 (20947)
Numerical id: 17
Author: bci1 (20947)
Entry type: Definition
Classification: msc 55Q35, msc 55Q05, msc 20L05, msc 18D05, msc 18-00
Synonym: 2-category with invertible morphisms
Defines: 2-groupoid, HDA, higher dimensional algebra, (m-1) arrows
https://www.zbmath.org/?q=ai%3Aye.zhuan.1+se%3A00002177 | # zbMATH — the first resource for mathematics
Global regularity for the 2D micropolar equations with fractional dissipation. (English) Zbl 1397.35204
Summary: Micropolar equations, modeling micropolar fluid flows, consist of coupled equations obeyed by the evolution of the velocity $$u$$ and that of the microrotation $$w$$. This paper focuses on the two-dimensional micropolar equations with the fractional dissipation $$(-\Delta)^{\alpha} u$$ and $$(-\Delta)^{\beta}w$$, where $$0<\alpha, \beta<1$$. The goal here is the global regularity of the fractional micropolar equations with minimal fractional dissipation. Recent efforts have resolved the two borderline cases $$\alpha = 1$$, $$\beta = 0$$ and $$\alpha = 0$$, $$\beta = 1$$. However, the situation for the general critical case $$\alpha+\beta = 1$$ with $$0<\alpha<1$$ is far more complex and the global regularity appears to be out of reach. When the dissipation is split among the equations, the dissipation is no longer as efficient as in the borderline cases and different ranges of $$\alpha$$ and $$\beta$$ require different estimates and tools. We aim at the subcritical case $$\alpha+\beta>1$$ and divide $$\alpha\in (0,1)$$ into five sub-intervals to seek the best estimates so that we can impose the minimal requirements on $$\alpha$$ and $$\beta$$. The proof of the global regularity relies on the introduction of combined quantities, sharp lower bounds for the fractional dissipation and delicate upper bounds for the nonlinearity and associated commutators.
##### MSC:
35Q35 PDEs in connection with fluid mechanics 35B65 Smoothness and regularity of solutions to PDEs 76A10 Viscoelastic fluids 76B03 Existence, uniqueness, and regularity theory for incompressible inviscid fluids 35B45 A priori estimates in context of PDEs 35R11 Fractional partial differential equations 76U05 General theory of rotating fluids
http://quant.stackexchange.com/tags/equities/new | # Tag Info
## New answers tagged equities
1
To answer your question, consider the following example using actual prices for the SPY ETF on 7/31/15: "hopey.netfonds.no" By looking at the last 19 trades that occurred in the very last second, you will see a notable movement in prices. If you go to Google/Yahoo Finance, the closing price for the ETF is 210.50 (largest trade at the close?) but the very ...
3
Most literature focuses on comparing fund returns using a model alpha. A good overview is: Carhart (1997) and Berk and van Binsbergen (2015). Basically you regress the fund returns on the most commonly used factors (market return, HML, SMB, liquidity and momentum factors) and compare alphas after fees.
3
The round-trip latency from point A to a matching engine at point B can be thought of being comprised of two components: $RTT_{total,A \rightarrow B} = RTT_{network\_transit,A \rightarrow B} + MPL_{matching\_engine,B}$ Where $RTT$ is the round-trip time and $MPL$ is the message processing latency (how long it takes to receive a message and produce an ...
0
I'm no expert in this topic, but I'm not sure people will be willing to share this kind of data openly, given a lot of HFT shops use such "trade secrets" to gain a competitive edge. Incidentally, I've been reading the book "Flash Boys" and there are some numbers related to your query in there. For instance, when you submit a trade from downtown Manhattan, it ...
0
Assuming you are not doing HFT (i.e., on a seconds scale), you could measure it by placing a limit order and then monitoring its appearance in the Level 2 market depth quotes, during a quiet market, with the limit price away from the spread and not crowded.
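A rough sketch of that measurement loop (every function name here is hypothetical; there is no standard API for this, so you would wire in your own broker's order-entry call and your own L2 feed handler):

```python
import time

def measure_quote_latency(send_limit_order, wait_for_own_order_in_book):
    # send_limit_order: callable that submits a passive limit order (away from
    # the spread) and returns its order id -- broker-specific, supplied by you.
    # wait_for_own_order_in_book: callable that blocks until that order id
    # appears in your Level 2 market depth feed -- also supplied by you.
    t0 = time.monotonic()
    order_id = send_limit_order()
    wait_for_own_order_in_book(order_id)
    return time.monotonic() - t0  # round-trip time in seconds
```

Using a monotonic clock rather than wall-clock time avoids jumps from NTP adjustments during the measurement.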
1
If you have a friend studying at almost any university you can get access to WRDS. Inside WRDS just go to Compustat which has all the info you need for dates since 1950.
1
There are two mainly (good) free sources available online: wolphramalpha.com Quandl They report the mainly market and fundamental data, so you will not find any particular fundamental accounting ratio. In the case you need particular ratio or data, you should get some better financial data provider, as, for instance, Bloomberg or Thompson Reuters.
https://tongfamily.com/2020/04/03/using-asana-and-trello-for-a-startup/ | Well if you are working in a startup, it feel like things start to get out of control, there are a bunch of decent free choices. I've used Trello for years and it is a good way to have Kanban cards and is a nice overview of the projects. Also it is free and sharing is super easy. The main tips are:
1. You should use the organizations feature to organize things and keep it simple: just have a monthly scrum board with the simplest possible layout, with a backlog, what's in progress, what's blocked, and what's done or dropped.
2. Here are the big tricks. Don't put anything into in progress that you aren't working on right away. You basically don't want to flood your mind with in progress stuff. And you do want to sort it so the most important thing goes first.
3. Use the members feature to assign items to people and the due date so you know when you have an issue. A huge list without any priorities or due dates is pretty much a mess.
4. The thing doesn't scale in terms of views, so it is most useful for the big overview.
5. I personally like to grade how much time things take to do. This is not full scrum, but it is good for personal time management. My personal metric is half-hour chunks, that is, 1 point means it would take me about 30 minutes to do. And instead of big estimates, I stole something from somewhere, which is to use a Fibonacci-style series to schedule times: a ? if you don't know, then 0.5, 1, 2, 3, 5, 8, etc. to figure out how many hours something will take.
6. Scrum for Trello is an awesome extension. There is a Mac app for Trello (which is really just Electron underneath, I think), but the extension shows the so-called burndown and lets you manage it. It tells you how much time you've used versus what you estimated. The simple convention is to put the (estimate) in parentheses and the [actuals] in brackets. The Chrome extension works great, but Safari doesn't seem to work.
7. So do your work in Chrome because of the above problem. But once you invest in it, you can see the number of hours that you have booked into in progress, and that should help you be more efficient.
Asana is a little different, it doesn't have the card view, so you can track things way more deeply. I haven't used it nearly as much, but like Trello, it has a wonderful modern user interface in a browser. It has the same sharing and is free to use for smaller projects.
It is great, at least for me, when you are down deep in writing code or a project like that because it has so many layers.
1. There is a big collection of chrome extensions for Asana as well. But I like the idea of Asana Global Task View so you can see all of your tasks.
2. Storypoint, like Scrum for Trello, lets you add points to tasks so you can estimate what things will take
3. Global View. See all your tasks aggregated across all Asana projects. Man, I would switch from Trello just for this.
https://labs.tib.eu/arxiv/?author=Jun-Wang%20Lu | • ### A holographic p-wave superfluid(1410.5243)
Dec. 12, 2014 hep-th
In the probe limit, we numerically construct a holographic p-wave superfluid model in the 4D and 5D AdS black holes coupled to a Maxwell-complex vector field. We find that, for the condensate with the fixed superfluid velocity, the results are similar to the s-wave cases in both 4D and 5D spacetimes. In particular, "The Cave of Winds" and the phase transition always being the second order take place in the 5D case. Moreover, we find the second-first order translating point $\frac{S_y}{\mu}$ increases with the mass squared. Furthermore, for the supercurrent with the fixed temperature, the results agree with the GL prediction near the critical temperature. In addition, this complex vector superfluid model is still a generalization of the SU(2) superfluid model, and also provides a holographic realization of the $He_3$ superfluid system.
• ### Lifshitz effects on holographic $p$-wave superfluid(1412.3689)
Dec. 11, 2014 hep-th
In the probe limit, we numerically build a holographic $p$-wave superfluid model in the four-dimensional Lifshitz black hole coupled to a Maxwell-complex vector field. We observe the rich phase structure and find that the Lifshitz dynamical exponent $z$ contributes evidently to the effective mass of the matter field and dimension of the gravitational background. Concretely, we obtain the Cave of Winds appeared only in the five-dimensional anti-de Sitter~(AdS) spacetime, and the increasing $z$ hinders not only the condensate but also the appearance of the first-order phase transition. Furthermore, our results agree with the Ginzburg-Landau results near the critical temperature. In addition, the previous AdS superfluid model is generalized to the Lifshitz spacetime.
• ### Lifshitz Scaling Effects on Holographic Superconductors(1311.2699)
Aug. 18, 2014 hep-th, hep-ph, gr-qc
Via numerical and analytical methods, the effects of the Lifshitz dynamical exponent $z$ on holographic superconductors are studied in some detail, including $s$ wave and $p$ wave models. Working in the probe limit, we find that the behaviors of holographic models indeed depend on concrete value of $z$. We obtain the condensation and conductivity in both Lifshitz black hole and soliton backgrounds with general $z$. For both $s$ wave and $p$ wave models in the black hole backgrounds, as $z$ increases, the phase transition becomes more difficult and the growth of conductivity is suppressed. For the Lifshitz soliton backgrounds, when $z$ increases ($z=1,~2,~3$), the critical chemical potential decreases in the $s$ wave cases but increases in the $p$ wave cases. For $p$ wave models in both Lifshitz black hole and soliton backgrounds, the anisotropy between the AC conductivity in different spatial directions is suppressed when $z$ increases. The analytical results uphold the numerical results.
• ### Magnetic-field effects on $p$-wave phase transition in Gauss-Bonnet gravity(1405.2499)
July 8, 2014 hep-th
In the probe limit, we study the holographic $p$-wave phase transition in the Gauss-Bonnet gravity via numerical and analytical methods. Concretely, we study the influences of the external magnetic field on the Maxwell complex vector model in the five-dimensional Gauss-Bonnet-AdS black hole and soliton backgrounds, respectively. For the two backgrounds, the results show that the magnetic field enhances the superconductor phase transition in the case of the lowest Landau level, while the increasing Gauss-Bonnet parameter always hinders the vector condensate. Moreover, the Maxwell complex vector model is a generalization of the SU(2) Yang-Mills model all the time. In addition, the analytical results backup the numerical results. Furthermore, this model might provide a holographic realization for the QCD vacuum instability.
• ### Lifshitz Effects on Vector Condensate Induced by a Magnetic Field(1403.5649)
May 19, 2014 hep-th
By numerical and analytical methods, we study in detail the effects of the Lifshitz dynamical exponent $z$ on the vector condensate induced by an applied magnetic field in the probe limit. Concretely, in the presence of the magnetic field, we obtain the Landau level independent of $z$, and also find the critical value by coupling a Maxwell complex vector field and SU(2) field into a (3+1)-dimensional Lifshitz black hole, respectively. The research results show that for both two models with the lowest Landau level, the increasing $z$ improves the response of the critical temperature to the applied magnetic field even without the charge density, and the analytical results uphold the numerical results. In addition, we find even in the Lifshitz black hole, the Maxwell complex vector model is still a generalization of the SU(2) Yang-Mills model. Furthermore, we construct the square vortex lattice and discuss the implications of these results.
• ### Five-dimensional generalized $f(R)$ gravity with curvature-matter coupling(1404.1681)
April 7, 2014 gr-qc
The generalized $f(R)$ gravity with curvature-matter coupling in five-dimensional (5D) spacetime can be established by assuming a hypersurface-orthogonal spacelike Killing vector field of 5D spacetime, and it can be reduced to the 4D formalism of the FRW universe. This theory is quite general and gives the corresponding results for Einstein gravity and for $f(R)$ gravity with both no coupling and non-minimal coupling in 5D spacetime as special cases; that is, we give some new results besides the previous ones of Ref.\cite{60}. Furthermore, in order to get some insight into the effects of this theory on the 4D spacetime, by considering a specific type of models with $f_{1}(R)=f_{2}(R)=\alpha R^{m}$ and $B(L_{m})=L_{m}=-\rho$, we not only discuss the constraints on the model parameters $m$, $n$, but also illustrate the evolutionary trajectories of the scale factor $a(t)$, the deceleration parameter $q(t)$ and the scalar fields $\epsilon(t)$, $\phi(t)$ in the reduced 4D spacetime. The research results show that this type of $f(R)$ gravity model could explain the current accelerated expansion of our universe without introducing dark energy.
https://www.physicsforums.com/threads/partitions-for-riemann-sum.729897/

# Homework Help: Partitions for Riemann sum
1. Dec 24, 2013
### NATURE.M
So my textbook asks to show $\int^{3}_{1} x^{2}dx = \frac{26}{3}$.
They let the partition P = {$x_{0},.....,x_{n}$}, and define the upper Riemann sum as U(P) = $\sum_{i=1}^{n} x_{i}^{2}Δx_{i}$ and the lower sum as
L(P) = $\sum_{i=1}^{n} x_{i-1}^{2}Δx_{i}$
I understand this part, but the next part is where I'm confused.
For each index i, 1$\leq$i$\leq$n,
$3x^{2}_{i-1}\leq x^{2}_{i-1} + x_{i-1}x_{i}+x^{2}_{i}\leq3x^{2}_{i}$
It's probably something I'm overlooking, but where does the middle term come from, and the 3??
2. Dec 25, 2013
### Office_Shredder
Staff Emeritus
The inequality comes from the fact that $x_{i-1} \leq x_i$ for all i, and therefore
$$x_{i-1}^2 \leq x_{i-1}^2$$
and
$$x_{i-1}^2 \leq x_{i-1}x_i$$
and
$$x_{i-1}^2 \leq x_{i}^2$$
so adding these all together gives
$$3 x_{i-1}^2 \leq x_{i-1}^2 + x_{i-1} x_i + x_{i}^2$$
3. Dec 25, 2013
### Newu
I think they want to express this, because we have
$$3x_{i-1}^{2}\leq x_{i-1}^{2} + x_{i-1} x_{i} +x_{i}^{2}\leq 3 x_{i}^{2},$$
so, $$3x_{i-1}^{2}(x_{i}-x_{i-1})\leq (x_{i-1}^{2} + x_{i-1} x_{i} +x_{i}^{2}) (x_{i}-x_{i-1}) \leq 3 x_{i}^{2} (x_{i}-x_{i-1}) ,$$
namely,$$3 x_{i-1}^{2}\Delta{x_{i}}\leq ( x_{i}^{3} - x_{i-1}^{3} )\leq 3x_{i}^{2}\Delta{x_{i}}.$$
Then, we can get
$$\sum_{i=1}^{n} x_{i-1}^{2} \Delta x_{i} \leq \frac{1}{3}( x_{n}^{3} - x_{0}^{3} ) \leq \sum_{i=1}^{n} x_{i}^{2} \Delta x_{i} ,$$
that is to say,
$$L(P)\leq \frac{26}{3} \leq U(P).$$
Because $f(x)=x^{2}$ is Riemann-integrable on $[1,3]$, let $I=\int _{1}^{3}f(x)dx\, , \lambda = \max \limits_{1\leq i \leq n}(\Delta{x_{i}})\rightarrow 0$, so
$$\lim_{\lambda\rightarrow 0}{U(P)}=L=I=l= \lim_{\lambda \rightarrow 0}{L(P)}.$$
Since $L(P)\leq \frac{26}{3}\leq U(P)$ holds for every partition, both $L$ and $l$ equal $\frac{26}{3}$, so $I=\frac{26}{3}$.
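To make the squeeze concrete, here is a short numerical check (a sketch added for illustration, not from the original thread): for the increasing function $f(x)=x^2$ on $[1,3]$, the lower and upper sums over a uniform partition bracket $26/3$ and close in on it as $n$ grows.

```python
def riemann_sums(f, a, b, n):
    """Lower and upper Riemann sums of an increasing f on [a, b]
    over a uniform partition with n subintervals."""
    dx = (b - a) / n
    xs = [a + i * dx for i in range(n + 1)]
    lower = sum(f(xs[i - 1]) * dx for i in range(1, n + 1))  # left endpoints
    upper = sum(f(xs[i]) * dx for i in range(1, n + 1))      # right endpoints
    return lower, upper

lower, upper = riemann_sums(lambda x: x * x, 1.0, 3.0, 1000)
# For increasing f the sums telescope: U(P) - L(P) = (f(b) - f(a)) * dx.
```

For $n=1000$ the gap is $U(P)-L(P) = 8 \cdot 0.002 = 0.016$, so both sums agree with $26/3\approx 8.6667$ to about two decimal places.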
Last edited: Dec 25, 2013
https://tex.stackexchange.com/questions/208695/remove-page-number-on-first-level-in-index

# Remove page number on first level in index
I am using imakeidx to create an index for my recipe book.
I would like to have the index show the following level-wise:
Appetizers
Recipe 1, pagenumber
Recipe 2, pagenumber
What I want, is the index to show the main category without pagenumber, but the actual recipes with pagenumbers. This is a test code I have so far:
\documentclass[12pt, letterpaper]{book}
\usepackage{imakeidx}
\makeindex[name=cat, title=Category, columns=1]
\makeindex[name=type, title=Type of Dish]
\begin{document}
\index[type]{Casual} %Want this in bold and no page number
\index[type]{Quick} %Want this in bold and no page number
\index[type]{Formal} %Want this in bold and no page number
Appetizers \index[cat]{Appetizers} %Want this in bold and no page number
\newpage
Cauliflower soup \index[cat]{Appetizers!Cauliflower Soup} \index[type]{Casual!Cauliflower Soup} %Want this to show as usual with page number
\newpage
Drinks \index[cat]{Drinks} %Want this in bold and no page number
\newpage
Lemon Drop \index[cat]{Drinks!Lemon Drop} %Want this to show as usual with page number
\printindex[cat]
\printindex[type]
\end{document}
\documentclass[12pt, letterpaper]{book}
\usepackage{imakeidx}
\makeindex[name=cat, title=Category, columns=1]
\makeindex[name=type, title=Type of Dish]
\newcommand\foo[1]{}
\newcommand\textbfz[2]{\textbf{#1}}
\begin{document}
\index[type]{Casual@\textbfz{Casual}|foo} %Want this in bold and no page number
\index[type]{Quick@\textbfz{Quick}|foo} %Want this in bold and no page number
\index[type]{Formal@\textbfz{Formal}|foo} %Want this in bold and no page number
Appetizers \index[cat]{Apetizers@\textbfz{Appetizers}|foo} %Want this in bold and no page number
\newpage
Cauliflower soup \index[cat]{Apetizers@\textbfz{Appetizers}!Cauliflower Soup}
\index[type]{Casual@\textbfz{Casual}!Cauliflower Soup} %Want this to show as usual with page number
\newpage
Drinks \index[cat]{Drinks@\textbfz{Drinks}|foo} %Want this in bold and no page number
\newpage
Lemon Drop \index[cat]{Drinks@\textbfz{Drinks}!Lemon Drop} %Want this to show as usual with page number
\printindex[cat]
\printindex[type]
\end{document}
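A note on why this trick works (this explanation is mine, not part of the original answer): in an `\index` entry, `key@formatted` separates the sort key from the printed text, and `|cmd` makes makeindex typeset the page number as `\cmd{page}`. So an encap command that discards its argument suppresses the page number entirely. A minimal standalone sketch (`\gobble` is an illustrative name):

```latex
\documentclass{article}
\usepackage{imakeidx}
\makeindex
\newcommand\gobble[1]{}% receives the page number from makeindex, discards it
\begin{document}
Some text.
% Sort under "Drinks", print it in bold, suppress the page number:
\index{Drinks@\textbf{Drinks}|gobble}
% The subentry keeps its page number as usual:
\index{Drinks@\textbf{Drinks}!Lemon Drop}
\printindex
\end{document}
```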
• your spelling of "appetizers" is somewhat peculiar, not to say inconsistent. more to the point, what is the point of the period at the end of the category entries? (not needed, i think.) Oct 23, 2014 at 20:49
• @barbarabeeton spellink? oh never looked at that, was just looking at the fonts:-) the comma yes not really needed could replace @\textbf by @\zz defined by \newcommand\zz[2]{\textbf{#1}} then #2 would gobble the comma. Oct 23, 2014 at 20:53
• @barbarabeeton better? Oct 23, 2014 at 21:07
• Just one question, why is column=1 needed? Oct 23, 2014 at 22:09
• @Ahana I didn't even see it, it was just in the original file. Oct 23, 2014 at 22:14
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=cheb&paperid=811&option_lang=eng
Chebyshevskii Sb., 2019, Volume 20, Issue 3, Pages 272–281 (Mi cheb811)
On linear independence of the values of some hypergeometric functions over the imaginary quadratic field
P. L. Ivankov
Bauman Moscow state technical University (Moscow)
Abstract: The main difficulty one has to deal with while investigating the arithmetic nature of the values of generalized hypergeometric functions with irrational parameters consists in the fact that the least common denominator of the first several coefficients of the corresponding power series increases too fast with the growth of their number. This circumstance makes it impossible to apply Siegel's method, well known in the theory of transcendental numbers, to the above-mentioned investigation. The application of this method implies usage of the pigeon-hole principle for the construction of a functional linear approximating form. This construction is the first step in a long and complicated reasoning that leads ultimately to the required arithmetic result. Attempts to apply the pigeon-hole principle in the case of functions with irrational parameters encounter insurmountable obstacles because of the aforementioned fast growth of the least common denominator of the coefficients of the corresponding Taylor series. Owing to this difficulty one usually applies an effective construction of the linear approximating form (or a system of such forms in the case of simultaneous approximations) for functions with irrational parameters. The effectively constructed form contains polynomials with algebraic coefficients, and it is necessary for further reasoning to obtain a satisfactory upper estimate of the modulus of the least common denominator of these coefficients. In some cases the known estimates of this type need to be improved. This improvement is carried out by means of the theory of divisibility in quadratic fields. Some facts concerning the distribution of prime numbers in arithmetic progressions are also used.
In the present paper we consider one version of an effective construction of simultaneous approximations for a hypergeometric function of general type and its derivatives. The least common denominator of the coefficients of the polynomials included in these approximations is then estimated by means of the improved variant of the corresponding lemma. All this makes it possible to obtain a new result concerning the arithmetic nature of the values of the aforesaid function at a nonzero point of small modulus from some imaginary quadratic field.
Keywords: hypergeometric function, effective construction, linear independence, imaginary quadratic field.
DOI: https://doi.org/10.22405/2226-8383-2018-20-3-272-281
Full text: PDF file (604 kB)
UDC: 511.361
Accepted: 12.11.2019
Citation: P. L. Ivankov, “On linear independence of the values of some hypergeometric functions over the imaginary quadratic field”, Chebyshevskii Sb., 20:3 (2019), 272–281
Citation in format AMSBIB
\Bibitem{Iva19} \by P.~L.~Ivankov \paper On linear independence of the values of some hypergeometric functions over the imaginary quadratic field \jour Chebyshevskii Sb. \yr 2019 \vol 20 \issue 3 \pages 272--281 \mathnet{http://mi.mathnet.ru/cheb811} \crossref{https://doi.org/10.22405/2226-8383-2018-20-3-272-281}
https://www.physicsforums.com/threads/calculate-neutron-dose-rate-from-a-reactor-to-an-object.876284/

# Calculate neutron dose rate from a reactor to an object
So, I'd like to calculate the dose rate to an object, say 1 g of quartz (SiO2), placed into a research reactor neutron flux. The average kinetic energy of research reactor neutrons is 2 MeV, but individual neutron energies vary dramatically. Say the thermal neutron flux is 1E13 n/cm2/sec and the fast neutron flux is 1E12 n/cm2/sec.

I'm confused as to how to properly take the different ranges of energies into account, or whether to use the average energy. I see a lot of information for calculating neutron dose to humans but not much for calculating neutron dose to anything else.

Any helpful info or formulas I'm missing would be appreciated. I don't expect you to look up all the cross-sections or anything, just any helpful info would be great. Thanks!
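One standard approach, sketched here as an illustration (my addition, not from the thread): for a thin sample, the absorbed dose rate is approximately the energy-group sum of fluence rate times the material's fluence-to-kerma coefficient, $\dot D \approx \sum_g \varphi_g k_g$. The coefficients below are placeholder numbers for illustration only; real values for SiO2 at each energy must come from tabulated kerma factors, and collapsing the spectrum to two groups is a crude simplification of the continuous spectrum.

```python
# Group-wise dose-rate estimate: dose_rate = sum over groups of flux * kerma.
# WARNING: the kerma coefficients below are HYPOTHETICAL placeholders,
# not real SiO2 data; substitute tabulated fluence-to-kerma factors.

def dose_rate_gy_per_s(fluxes, kerma_factors):
    """fluxes in n/cm^2/s, kerma_factors in Gy*cm^2; returns Gy/s."""
    return sum(phi * k for phi, k in zip(fluxes, kerma_factors))

fluxes = [1e13, 1e12]        # thermal, fast (n/cm^2/s), from the question
kerma = [1.0e-13, 3.0e-11]   # placeholder Gy*cm^2 values, illustration only

rate = dose_rate_gy_per_s(fluxes, kerma)
```

With a finer energy-group structure the same sum applies group by group, which is how the different ranges of energies are taken into account instead of using a single average energy.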
http://mathoverflow.net/questions/82424/orbit-stratification-of-semi-infinite-flag-manifold

# Orbit stratification of semi infinite flag manifold?
Denote semi infinite flag manifold by $Fl_{\infty/2}=G((t))/N_-((t))H[[t]]$, denote $B_-((t))=N_-((t))H[[t]]$
In the book of Frenkel and Ben-Zvi, "Vertex Algebras and Algebraic Curves", they take the group $\hat{N_+}$, the Lie group corresponding to the unipotent radical of the affine Kac-Moody algebra $\hat{g}$ with respect to the standard Cartan decomposition of $\hat{g}$. Then they consider the orbits of the action of $\hat{N_+}$ on $Fl_{\infty/2}$; the $\hat{N_+}$-orbit of the unit coset is isomorphic to $N_+[[t]]$, which is a subspace of the big cell ($N_+((t))$) of $Fl_{\infty/2}$. My question is:
What are the other $\hat{N_+}$-orbit of $Fl_{\infty/2}$? Is there any stratification of $Fl_{\infty/2}$ into these orbits?
Roman Fedorov told me that it may make more sense to consider the $B_-((t))$ action on $G((t))/\hat{N_+}$. Then it seems similar to what Gaitsgory considered in his paper "Twisted Whittaker model...". He considered the action of $N_-((t))$ on the affine Grassmannian $G((t))/G[[t]]$, and he claimed that the orbits are essentially infinite dimensional. But he did not write down the explicit decomposition.
So my question is how to write down this decomposition, and how to write down the orbit decomposition of $\hat{N_+}$ acting on $Fl_{\infty/2}$?
The affine Grassmannian is a very different animal from the semi-infinite flag manifold, since it is ind-finite type. The $N_-((t))$ orbits in $Gr$ are studied in more depth in section 7 of arxiv.org/abs/alg-geom/9703022 and in Mirkovic-Vilonen. – S. Carnahan Dec 2 '11 at 4:36
http://bookie.mozdev.org/source/browse/aphrodite/www/faq.html?annotate=1.14;hideattic=0;sortby=rev

### Annotation of aphrodite/www/faq.html, revision 1.14
1.1 david 1: <!-- MAIN CONTENT -->
1.13 john 2:
1.14 ! john 3: <img src="http://aphrodite.mozdev.org/images/aphrodite.jpg" width="273" height="57" alt="Aphrodite Logo">
1.13 john 4:
1.1 david 5: <p><b>FAQ</b>
6:
1.14 ! john 7: <p><b>Q:</b> When will the next release of Aphrodite be ready?
1.1 david 8:
1.14 ! john 9: <p><b>A:</b> When it's ready ;o). Aphrodite 0.6 is targeted for Mozilla 1.0. It won't be released until
! 10: Mozilla 1.0 is ready and all major bugs are resolved.
1.1 david 11:
1.4 david 12: <p><hr>
1.1 david 13:
1.14 ! john 14: <p><b>Q:</b> Why won't Aphrodite run on the Mac?
1.1 david 15:
1.14 ! john 16: <p><b>A:</b> The reason it won't work on the mac is because the main menu is in a sub iframe.
! 17: Macs didn't like this. We are planning on switching to Mozilla's skin system in Aphrodite 0.7.
! 18: This should fix the problem.
1.6 petejc 19:
1.7 petejc 20: <p><hr>
21:
1.6 petejc 22: <p><b>Q:</b> Why won't Aphrodite .05 launch on Windows?
23:
24: <p>
25: Here are some of the issues i've come across on windows.
26:
1.8 petejc 27: <p>
1.9 petejc 28: <b>A:</b> installed-chrome.txt has unix ^J characters where windows wants ^M.
1.6 petejc 29: <br>Edit this file in notepad and delete the characters you see and replace them w/ a carriage return.
30: Aphrodite worked fine for me after i did this.
31: <p>
1.10 petejc 32: <b>A:</b> Using a shortcut and Aphrodite icon to start up Aphrodite.
1.6 petejc 33: <br>After you install Aphrodite, quit mozilla. Go to the bin directory where your mozilla nightly is installed and create a shortcut to mozilla and put in on your desktop.
34: <br>Right click on the properties of the shortcut and select choose icon. Go to the chrome\aphrodite\WIN folder and select the ico for your icon(thanks John Dobbins). For your Target path you will need to enter.
35: <br>
1.12 petejc 36: "c:\some\path\to\mozilla.exe" -console -chrome chrome://aphrodite/content/install.xul
1.6 petejc 37: <br>After you finish the Aphrodite install Aphrodite will launch for you.
38: <br>After this, edit this Target again and remove "install.xul". You no longer need it.
39: Then Aphrodite should run fine from now on.
1.11 petejc 40: <p>
41: <b>A:</b> Or if you are comfortable using a command prompt, you can launch Aphrodite like this:
1.6 petejc 42: <br>mozilla.exe -console -chrome chrome://aphrodite/content/install.xul
43: <br>Then
44: <br>mozilla.exe -console -chrome chrome://aphrodite/content/
45:
46:
FreeBSD-CVSweb <freebsd-cvsweb@FreeBSD.org>
http://community.wolfram.com/groups/-/m/t/1032663

Code puzzles: turning docs into educational games
Teaching programming and assessing learning progress is often a very custom task. I wanted to create a completely automated, practically infinite stream of random puzzles that guide a learner towards improving programming skills. I think the major problem is content creation. To test whether the learner knows a programming concept, an exercise needs to be wisely designed. And it is better to have a randomized set of such exercises to genuinely test the knowledge and rule out guessing, cheating, and so on. Creating such educational materials is often very tedious, time-consuming, and manual, exactly like creating good documentation. I will explain one simple idea of using the docs to make an educational game. This is just a barebone prototype, so you can clearly follow the inner workings (try it out & share: https://wolfr.am/bughunter ). Please comment with feedback on how we can develop this idea further.
Introduction: efficient use of resources
The docs are a deep well of carefully curated information and should be explored beyond their regular usage. The painstaking, time-consuming manual effort of creating good programming documentation should be used to its fullest potential. An automated game play would be a novel take on docs. We can use existing code examples in the docs to randomly pull pieces of code and make programming exercises automatically. Being able to read code and find bugs is, in my experience, one of the most enlightening practices. The goal of the game linked above is to find a defect in the input code (a bug) and fix it. Hence, the "bug hunter". There are just 2 possible outcomes of a single game cycle, and after each you can "try again":
Core game code: making puzzles
Wolfram Language (WL) documentation is one of the best I've seen. It has pages and pages of examples, starting from simple ones and going through all the details of the usage. Moreover, the docs are written in WL itself; furthermore, WL can access the docs and even has internal self-knowledge of its structure via WolframLanguageData. For instance, this is how you can show a relationship community graph for symbols related to GatherBy:
WolframLanguageData["GatherBy", "RelationshipCommunityGraph"]
We can use WolframLanguageData to access docs examples and then drop some parts of the code. The puzzle is then for the learner to find what is missing. For the sake of clarity in designing a small working prototype, let's limit the tested WL functions and corresponding docs' pages to some small number. So out of ~5000 (and we just released a new addition):
WolframLanguageData[] // Length
4838
built-in symbols, I just take 30
functions = {"Append", "Apply", "Array", "Cases", "Delete", "DeleteCases", "Drop", "Except",
"Flatten", "FlattenAt", "Fold", "Inner", "Insert", "Join", "ListConvolve", "Map", "MapThread",
"Nest", "Outer", "Partition", "Prepend", "ReplacePart", "Reverse", "RotateLeft", "RotateRight",
functions // Length
30
that are listed on a very old but neat animated page of some essential core-language collection. I will also add some "sugar syntax" to the pool of potentially removable parts of code:
sugar = {"@@", "@", "/@", "@@@", "#", "^", "&"};
So, for instance, out of the following example in docs we could remove a small part to make a puzzle:
Here is an example of "sugar syntax" removal, which for novice programmers would be harder to solve:
The next step is to define a function that can check whether a string is a built-in symbol (a function, out of all ~5000) or one of the sugar-syntax tokens we defined above:
ClearAll[ExampleHeads];
ExampleHeads[e_] := Select[
  Cases[e, _String, Infinity],
  (NameQ["System`" <> #] || MemberQ[sugar, #]) && # =!= "Input" &
]
The next function essentially makes a single quiz question. First it randomly picks a function from the list of 30 symbols we defined. Then it goes to the doc page of that symbol, to the section called "Basic Examples", finds a random example, and removes a random part of it:
ranquiz[] := Module[
  {ranfun, ranexa},
  ranfun = RandomChoice[functions];
  ranexa = RandomChoice[WolframLanguageData[ranfun, "DocumentationBasicExamples"]][[-2 ;; -1]];
  {
    ranexa[[2]],
    ranfun
  }
]
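The core puzzle-making idea is language-agnostic. As a rough illustration (my sketch, not part of the original WL code, and much cruder than the docs-driven version): tokenize a snippet, blank out one token drawn from the interesting set, and grade by checking that the typed answer restores the original snippet.

```python
import random

def make_puzzle(code, candidates, rng=random):
    """Blank out one whitespace-separated token of `code` that belongs to
    `candidates` (function names or sugar syntax); return (shown, answer)."""
    tokens = code.split()
    hits = [i for i, tok in enumerate(tokens) if tok in candidates]
    i = rng.choice(hits)
    answer = tokens[i]
    shown = tokens[:i] + ["____"] + tokens[i + 1:]
    return " ".join(shown), answer

def grade(answer, typed):
    """A submission is correct when it is exactly the hidden token."""
    return typed.strip() == answer

shown, answer = make_puzzle("Fold [ Plus , 0 , Range [ 10 ] ]",
                            {"Fold", "Plus", "Range"})
```

Substituting the graded answer back into the blank reproduces the original snippet, which is the property the real game relies on.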
Now we will define a few simple variables and tools.
Image variables
I keep marveling how convenient it is that Mathematica front end can make images to be part of code. This makes notebooks a great IDE:
Databin for tracking stats
It is important to have statistics for your learning game: to understand how to improve it and where the education process should go. Wolfram Datadrop is an amazing tool for these purposes.
We define the databin as
bin = CreateDatabin[<|"Name" -> "BugHunter"|>]
Deploy game to the web
To make an actual application usable by everyone with internet access, I will use the Wolfram Development Platform and Wolfram Cloud. First I define a function that builds the "result of the game" web page. It checks whether the answer is right or wrong and returns a differently designed page accordingly.
quiz[answer_String,check_String,fun_String]:=
(
Grid[{
{If[answer === check,
Grid[{{Style["Right! You got the bug!",40,Darker@Red,FontFamily->"Chalkduster"]},{First[imgs]}}],
Grid[{{Style["Wrong! The bug got you!",40,Darker@Red,FontFamily->"Chalkduster"]},{Last[imgs]}}]
]},
{Row[
"|",
"|",
Spacer[10]
]},
{Style["===================================================="]},
{hyperlink["An Elementary Introduction to the Wolfram Language","https://www.wolfram.com/language/elementary-introduction"]},
{logo}
}]
)
This function is used inside a CloudDeploy[...FormFunction[...]...] construct to actually deploy the application. FormFunction builds a query form, a web user interface that poses the question and collects the user's answer. Note that Delayed is used as a wrapper around FormFunction so that the random choices are re-evaluated properly on each visit.
CloudDeploy[Delayed[
quizloc=ranquiz[];
FormFunction[
{{"code",None} -> "String",
{"x",None}-><|
"Input"->StringRiffle[quizloc[[3;;4]],","],
"Interpreter"->DelimitedSequence["String"],
"Control"->Function[Annotation[InputField[##],{"class"->"sr-only"},"HTMLAttrs"]]|>},
quiz[#code,#x[[1]],#x[[2]]]&,
AppearanceRules-> <|
"Title" -> Grid[{{title}},Alignment->Center],
"MetaTitle"->"BUG HUNTER",
"Description"-> Grid[{
{Style["Type the missing part of input code",15, Darker@Red,FontFamily->"Ayuthaya"]},
{Rasterize@Grid[{
{"In[1]:=",quizloc[[1]]},
{"Out[1]=",quizloc[[2]]}},Alignment->Left]}
}]
|>]],
"bughunter",
Permissions->"Public"
]
The result of the deployment is a cloud object at a URL:
CloudObject[https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter]
with the short version:
URLShorten["https://www.wolframcloud.com/objects/user-3c5d3268-040e-45d5-8ac1-25476e7870da/bughunter", "bughunter"]
https://wolfr.am/bughunter
And we are done! You can go at the above URL and play.
Further thoughts
Here are some key points and further thoughts.
• Automation of content: NO new manual resource development, use existing code bases.
• Automation of testing: NO manual labor of grading.
• Quality of testing: NO multiple choice, NO guessing.
• Quality of grading: almost 100% exact detection of mistakes and correct solutions.
• Fight cheating: the clearly identifiable question type "find the missing code part" helps to ban help from friendly forums (such as this one).
• Almost infinite variability of examples if whole docs system is engaged.
• High range from very easy to very hard examples (exclusion of multiple functions and syntax can make this really hard).
Improvements:
• Flexible scoring system based on function usage frequencies.
• Optional placeholder as hint where the code is missing.
• Using network of related functions (see above) to move smoothly through the topical domains.
• Using functions frequency to feed easier or harder exercises based on test progress.
1 month ago
11 Replies
Reserved for analytics
1 month ago
Frederick Wu 1 Vote The idea is fresh and new. However, I am afraid, if the kids would really love to play it. The illustrated examples appears all low level errors, which could be detected and assisted by WL grammar color system easily. In future, the kids are very very very smart. That means they would become boring very quickly either. If we want those kind of projects get to work, we should design it really playful. Maybe, we should ask and test with kids.
1 month ago
@Frederick Wu thanks for taking a look and the comments.

"However, I am afraid, if the kids would really love to play it."

I was actually thinking about targeting adults who are a bit above the beginner level. A few folks I tested with were adults and they enjoyed playing it. Some said that the bugs were too scary though ;-)

"The illustrated examples appears all low level errors, which could be detected and assisted by WL grammar color system easily."

If I understood what you mean by "grammar color system" correctly, then even the examples I showed in the post go undetected by it: there are no colors in the front end highlighting these. Moreover, a few testers suggested I use an optional indicator to show where exactly the code is missing, as a hint to help the solution. I also think the cases that do trigger the "grammar color system" are quite good, because learners need a good habit and understanding of how code highlighting works to catch errors during a real programming workflow.

"In future, the kids are very very very smart. That means they would become boring very quickly either. If we want those kind of projects get to work, we should design it really playful. Maybe, we should ask and test with kids."

Yes, if you mean young kids, they would probably need another type of game, or at least some better game-play dressing for this idea. I'd love to hear your ideas on how to make this work. If you come up with anything, please share.
1 month ago
Manjunath Babu 1 Vote Hello @Vitaliy Kaurov, I enjoyed playing your Bug Hunter web app. The code and the thought process behind this application are brilliant. I did not know the Legacy Animations documentation pages existed. It's a very nice and intuitive way of presenting it to users. I got a similar idea at Wolfram Summer School 2016 and built a simpler prototype. Link: http://community.wolfram.com/groups/-/m/t/886715 . It's called Infinite Coding Problems Generator in Wolfram Programming Language. I used templates and loaded a CSV file that contains questions from the EIWL book. Your web app is so much fun, and it's giving me new ideas for improving my prototype. These educational applications have tremendous potential and spike the learning curve in students. I'd like to see your app grow big like http://challenges.wolfram.com. Thank you. Try my web app here: https://wolfr.am/e0t5Zn50
1 month ago
This is some great work @Manjunath Babu! Could you explain to me briefly here, how did you handle the correctness check? Was it a verbatim check of the code? In Wolfram Language, different versions of code often yield the correct result; this is why we call it a multi-paradigm language. For example:

data = RandomReal[1, {100, 2}];
ListPlot[data]
Graphics[Point[data]]

Were you taking that into account?
1 month ago
Manjunath Babu 1 Vote Hello @Vitaliy Kaurov, I haven't taken random functions into account yet. The JSON template looks something like this:

[
  {
    "Number": "11.2",
    "Question": "Make a single string of the whole alphabet, in upper case.",
    "Answer": "Function[StringJoin[Alphabet[\"Alphabet\"]]]",
    "Template": "Make a single string of the all Alphabet alphabets, using Function.",
    "Data": {"Function": ["ToUpperCase", "ToLowerCase"], "Alphabet": ["French", "Russian", "Italian", "English", "German"]}
  },
  {
    "Number": "1.3",
    "Question": "Multiply the whole numbers from 1 to 5.",
    "Answer": "Times @@ Range[1, Number]",
    "Template": "Multiply the whole numbers from 1 to Number.",
    "Data": {"Number": "RandomInteger[{5,10}]"}
  },
  {
    "Number": "4.1",
    "Question": "Make a bar chart of Number.",
    "Answer": "Function[Number]",
    "Template": "Make a Function of Number.",
    "Data": {"Function": ["BarChart3D", "BarChart", "PieChart", "PieChart3D", "ListLinePlot"], "Number": "RandomInteger[50, 4]"}
  }
]

To check for correctness, I execute the original solution expression and execute the user-provided answer expression. If these two match, then it's correct. Since Wolfram Language is a symbolic language, it just compares both evaluated expressions at the lowest level. However, the problem with this approach is that the prototype doesn't work with random functions, since a random function will store the final answer in different values at the lowest level.
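Manjunath's template-plus-evaluation grading can be sketched generically (my Python illustration, using eval as a stand-in for WL's symbolic evaluation): fill a template's placeholders with random choices, then grade by comparing evaluated results rather than source strings, so equivalent but differently written answers still pass.

```python
import math
import random

def instantiate(template, data, rng=random):
    """Replace each placeholder key in `template` with a random choice
    from its list in `data`; return the question text and the choices."""
    chosen = {key: rng.choice(values) for key, values in data.items()}
    text = template
    for key, value in chosen.items():
        text = text.replace(key, str(value))
    return text, chosen

def grade(reference_expr, user_expr, env):
    """Correct iff the evaluated results match (not the raw strings)."""
    return eval(reference_expr, dict(env)) == eval(user_expr, dict(env))

question, chosen = instantiate(
    "Multiply the whole numbers from 1 to Number.",
    {"Number": [5, 6, 7]},
)
reference = f"math.prod(range(1, {chosen['Number']} + 1))"
```

Here grading "1*2*3*4*5" against the reference for Number = 5 succeeds even though the strings differ; it is exactly this evaluated comparison that breaks once the answer involves a random function.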
1 month ago
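The substitute-parameters, evaluate-both, compare-results approach described in the post above can be sketched in Python; the template fields and names below are illustrative stand-ins, not the actual app's schema:

```python
import math
import random

# Miniature analogue of the template-based checker described above.
# A stored reference expression is evaluated alongside the user's answer
# and the two results are compared -- like comparing two evaluated
# Wolfram Language expressions at the lowest level.
TEMPLATE = {
    "question": "Multiply the whole numbers from 1 to {n}.",
    "reference": "math.prod(range(1, n + 1))",   # the stored correct answer
}

def make_problem(rng=random):
    n = rng.randint(5, 10)        # the randomised "Data" parameter
    return n, TEMPLATE["question"].format(n=n)

def check(user_expr, n):
    env = {"math": math, "n": n}
    return eval(user_expr, env) == eval(TEMPLATE["reference"], env)

n, question = make_problem()
print(question)
print(check("math.factorial(n)", n))  # a different but equivalent solution passes
```

Note how an equivalent-but-different solution still passes, which addresses the multi-paradigm point raised above, while the randomness limitation discussed in the post remains: a reference answer that itself calls a random function would evaluate to a different value on each run.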
I think there could also be a problem with sorting functions. If the goal is to return a list of elements independent of their order, some approaches may return them in a different order. Then SameQ will not match.
1 month ago
Arno Bosse 1 Vote This is great! I'm an adult beginner myself, with no formal training in programming. What I continue to find the most challenging aspect of learning the WL is the syntax. You have to get up to speed with the syntax, and quickly, or much of the docs, and most of the great examples shared here won't be comprehensible. This is also why, as much as I enjoyed working through the exercises in SW's new EIWL textbook, I feel that it still places too much emphasis on covering functions and not enough (and not soon enough) on syntax. I've tried out BugHunter a few times and haven't encountered an example yet that removed the 'sugar'. So I agree that allowing a user to pick different areas or themes for testing (for example, 'quiz me on syntax', 'quiz me on functions related to network analysis' etc.) would be a good choice for an initial enhancement. Thank you for making this! | 2017-05-01 00:36:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20638933777809143, "perplexity": 2748.222266661568}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": 
"s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917126237.56/warc/CC-MAIN-20170423031206-00430-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://blog.allegro.tech/2020/10/sureness.html | Imagine that you were charged with finishing a task started by your colleague. He or she implemented it just before leaving for a vacation. Now it is your job to finish and release it.
Example: To make this example more practical, let’s say that the task is to count reactions that are stored in pageReactionsRepository. This is your colleague’s solution, implemented in Kotlin:
suspend fun getReactionsForPage(pageKey: String, userUuid: String?): PageReactionsResponse =
pageReactionsRepository.getPageReactions(pageKey)
.let { reactions ->
PageReactionsResponse(
pageKey = pageKey,
reactionsCount = emptyReactionsMap + reactions.groupingBy { it.reaction }.eachCount(),
userChoice = userUuid?.let { reactions.find { it.userUuid == userUuid } }?.reaction
)
}
Is this solution correct? Think about it for a minute. After analysing the above code you might conclude that this solution seems correct, but are you sure about it? An even better question would be: what is your degree of sureness that this solution is correct? There is one thought experiment that helps us establish that: how much money would you bet on this function being correct?
### Releasing is always a bet
Think of a release as a bet. You need to put money on correctness, and if you are wrong, you lose it all. It won’t be a fair bet, though. If you are right, you win nothing. If you are wrong, you need to pay. A third alternative is to spend more of your precious time and investigate.
So now, would you bet 10 cents on this solution? Yeah, why not. Would you bet 10 dollars? Some readers probably would, but I wouldn’t. From my observations, most experienced developers wouldn’t either, as we’ve seen enough good-looking code with a surprising outcome (we sometimes collect such snippets as puzzlers). Would you bet 1000 dollars on this solution working 100% correctly? I think no sane person would.
This game design is no coincidence. It simulates how your work looks from your employer’s perspective. Assuming there are no other verification mechanisms, this is how it would work: on the one hand, you can spend more time (which is valuable); on the other, a bug in production can be costly. Sure, in most cases, there are some testers on the way. But depending too heavily on testers is dangerous, and the fact that we missed a bug is already a risk. We will talk about this later.
### The stake is different in different products and functionalities
Most mistakes can be fixed. The highest costs are:
• Users’ discontent that can negatively influence their behavior and your company image.
This cost is much smaller for small companies or start-ups. It is also much smaller for internal products (for our employees). This is why small companies often care less about proper testing. Big products known publicly are under much higher pressure. At Allegro some mistakes might cost the company thousands or even millions of dollars. This is why we take testing seriously.
### We build sureness via different forms of checking
So let’s assume this is a 100 dollars bet. What would you do to make sure it works fine? This is where we start the process of checking.
Note: Checking and testing are not the same, but the two terms are often confused. People often call testing what should be called checking (https://www.infoq.com/news/2009/12/testing-or-checking/). I will not play language nazi here, and will stick with the term “software testing” as a subcategory of checking.
The first intuitive step is to check it out manually. The steps are:
1. Use this functionality and see how it behaves.
2. If it seems correct, think about possible problems and check them out. So we try to emulate a situation that might not be handled well, and we see what happens. If anything catches our attention, we stop and try to fix it. If not, we feel more confident that our functionality works fine.
This is what many programmers do, but it works well only in the short term. The problem is that it does not scale well.
### We can still lose the bet in the future — we need automatic testing
When I was still a student, a friend of mine who worked in a big corporation complained that other people wrote components without leaving any tests. “This is extremely immature,” he said. “Later, I need to touch it to introduce some change, and I have no idea whether I broke something or not.” This system couldn’t be tested manually. It was (and is) too big, and I am not sure there is a single person who understands it all. Even if it could be, manual testing would take a preposterous amount of time for each small change.
This is the answer to the question of why hiring many manual testers is not a good long-term solution (even though it is practiced by many companies in Poland). It does not scale well. The bigger your system, the more manual testers you need just to check for regressions.
We need automatic testing — testing we can run again and again and have it check all main functionalities.
It is not free: it needs to be written, and it needs to be maintained (when we change logic in our system, we need to update tests as well). However, in the end, if your project is big and changes over time, it will surely pay off.
### Lack of tests leads to anxious developers and terrible code
Having properly tested code is very important for us developers. Without it, we feel anxious when operating on it, and for a good reason: a small change might break who knows what. This has further consequences. When developers are worried about touching the code, and the project is too big for them to comprehend and test manually, they will make the smallest changes possible. They will not refactor the legacy code. As a consequence:
• Components look worse and worse over time.
• We will soon have outdated system design, not reflecting how our system works now.
• Code will be less and less understandable by developers.
### Testing takes time and skills
On the other hand, writing tests has costs too:
• Proper testing takes a lot of time.
• Testing requires adjustments in architecture, which is both positive (it is often cleaner) and negative (it is often more complicated).
• Testing well is hard and needs to be learned. It is easy to write tests that test little and constrain our code a lot; it is hard to do it the other way around.
It is also said that if we think about tests before implementation, we design both tests and components differently. Probably in a better way, but who knows for sure.
Testing is not easy, but it is worth learning. It takes time, but it will pay off later. We need to learn to write proper tests in an appropriate amount, so that they serve us well.
### We need to stay in touch with tests
Another trick teams use when developers do not want to write tests is hiring a tester to write the automatic tests. It works… to a certain degree. They might be very helpful by writing end-to-end tests for you, but those are not enough, as we will see later. We should also not entirely give up on testing ourselves.
As creators, we need to know our product and how to use it well. If we don’t, we lose contact with it. This should never happen for many reasons (decision-making, system design, operating on the project). Manual testing is one of the best ways to stay in touch with the system.
### Do not send unchecked functionality to a tester
One terrible mistake I see some developers make is sending unchecked code to testers without even running it. It is nearly certain it will come back with some mistakes. The time the developer saved by not checking will be wasted on getting back to the task and starting it again. They have also wasted another person’s time and increased the risk of a problem being released to production. A tester should be the second person to check something out, not the only one. This should never happen.
### We need to unit test components ourselves
A tester can write end-to-end tests for the system. Having it well tested, we can have some degree of sureness our system behaves correctly. It does not mean that its components work correctly, though.
A few times, I have seen a funny situation where mistakes in two components canceled each other out, and the system as a whole worked fine. After fixing one of them, unexpected behavior revealed itself. Another common problem is that a component is not implemented correctly, but the way we use it does not reveal that. However, if we use this component somewhere else, we might have a problem. This causes a cascade of problems, and refactoring then might be much harder.
Another thing is that when we test components, we know better what extreme situations might break them.
Finally, unit tests are generally much faster than other kinds of tests, and they are an important tool we should use when we develop a solution to check out our code’s correctness. | 2021-05-14 20:20:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3591371774673462, "perplexity": 922.0168543075629}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991207.44/warc/CC-MAIN-20210514183414-20210514213414-00545.warc.gz"} |
https://www.physicsforums.com/threads/chebyshevs-theorem.381287/ | # Chebyshev's theorem
1. Feb 24, 2010
### cyod22
1. Do bonds reduce the overall risk of an investment portfolio? Let x be a random variable representing annual % return for the Vanguard Total Stock Index (all stocks). Let y be a random variable representing annual return for the Vanguard Balanced Index (60% stock and 40% bond). For the past several years we have the following data.
x: 11 0 36 21 31 23 24 -11 -11 -21
y: 10 -2 29 14 22 18 14 -2 -3 -10
a.) Compute Σx, Σx², Σy, and Σy²
b.) use results in part (a) to compute the sample mean, variance, and standard deviation for x and for y.
c.) Compute a 75 % Chebyshev interval about the mean for x values and also for y values. Use interval to compare funds.
I was able to do parts (a) and (b) but have no idea what they want for (c). I do have the answer, but I am not sure how they got it.
2. Relevant equations
3. The attempt at a solution
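Part (c) can be sketched in Python: Chebyshev's bound 1 − 1/k² = 0.75 gives k = 2, so the 75% interval is the sample mean ± 2 sample standard deviations (a quick check, assuming the data above):

```python
from statistics import mean, stdev

# Chebyshev guarantees at least 1 - 1/k^2 of the data lies within k sample
# standard deviations of the mean; 1 - 1/k^2 = 0.75 gives k = 2.
x = [11, 0, 36, 21, 31, 23, 24, -11, -11, -21]
y = [10, -2, 29, 14, 22, 18, 14, -2, -3, -10]

def chebyshev_interval(data, k=2):
    m, s = mean(data), stdev(data)     # sample mean and sample std. deviation
    return m - k * s, m + k * s

print(chebyshev_interval(x))   # roughly (-29.4, 50.0)
print(chebyshev_interval(y))   # roughly (-16.4, 34.4)
```

The balanced-fund interval for y is noticeably narrower than the all-stock interval for x, which is the comparison the exercise is after.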
2. Feb 25, 2010
### vela
Staff Emeritus
What is a Chebyshev interval?
3. Feb 26, 2010
First (and simplest): how many standard deviations around the mean does Chebyshev's theorem say you must go to include at least 75% of the data values? (Remember, Chebyshev's theorem says the percentage of values between $$\bar x \pm ks$$ is at least
$$1 - {1}/{k^2}$$. | 2017-10-20 15:11:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5755048990249634, "perplexity": 1066.7954769035432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824225.41/warc/CC-MAIN-20171020135519-20171020155519-00237.warc.gz"} |
http://chemicalstatistician.wordpress.com/2013/03/24/inference-in-simple-linear-regression-with-logarithmic-transformation-an-illustration-with-exponential-decay-of-ddt-in-trout/ | # Estimating the Decay Rate and the Half-Life of DDT in Trout – Applying Simple Linear Regression with Logarithmic Transformation
This blog post uses a function and a script written in R that were displayed in an earlier blog post.
#### Introduction
This is the second of a series of blog posts about simple linear regression; the first was written recently on some conceptual nuances and subtleties about this model. In this blog post, I will use simple linear regression to analyze a data set with a logarithmic transformation and discuss how to make inferences on the regression coefficients and the means of the target on the original scale. The data document the decay of dichlorodiphenyltrichloroethane (DDT) in trout in Lake Michigan; I found it on Page 49 in the book “Elements of Environmental Chemistry” by Ronald A. Hites. Future posts will also be written on the chemical aspects of this topic, including the environmental chemistry of DDT and exponential decay in chemistry and, in particular, radiochemistry.
Dichlorodiphenyltrichloroethane (DDT)
Source: Wikimedia Commons
A serious student of statistics or a statistician re-learning the fundamentals like myself should always try to understand the math and the statistics behind a software’s built-in function rather than treating it like a black box. This is especially worthwhile for a basic yet powerful tool like simple linear regression. Thus, instead of simply using the lm() function in R, I will reproduce the calculations done by lm() with my own function and script (posted earlier on my blog) to obtain inferential statistics on the regression coefficients. However, I will not write or explain the math behind the calculations; they are shown in my own function with very self-evident variable names, in case you are interested. The calculations are arguably the most straightforward aspects of linear regression, and you can easily find the derivations and formulas on the web, in introductory or applied statistics textbooks, and in regression textbooks.
#### Linearizing a Non-Linear Relationship
As I mentioned previously in my last blog post about some conceptual foundations of simple linear regression, the “linear” term in the name refers to the linearity between the target and the regression coefficients ($\beta_0$ and $\beta_1$). The relationship between the target (Y) and the predictor (x) may not be linear to begin with, and some transformation may be required to linearize the predictor-target relationship. The exponential decay of an environmental pollutant is an example of such a model that can be linearized for simple linear regression to be used.
The exponential decay of a chemical’s concentration can be mathematically described as follows:
$C(t) = C_0e^{-\lambda t}$.
- $t$ is the time
- $C_0$ is the initial concentration of the chemical
- $C(t)$ is the concentration of the chemical at time $t$
- $\lambda$ is the rate of decay
This is especially common for radioactive decay. It can be shown mathematically using differential equations that a chemical will decay exponentially if the following conditions are satisfied:
1) There is no influx of the chemical into the system in question.
2) The rate of decay (or outflux) of the chemical is proportional to its amount in the system.
If $C(t)$ is the target variable and $t$ is the predictor variable, then it is clear that the above equation is not linear between $C(t)$ and $t$. However, let’s evaluate the natural logarithm of each side.
$ln[C(t)] = ln[C_0] - \lambda t$
If $ln[C(t)]$ is the target variable and $t$ is the predictor variable, then this is a linear relationship between the target and the predictor. $ln[C_0]$ is the y-intercept, and $\lambda$ is the rate of exponential change. The negative sign in front of $\lambda$ specifies that $ln[C(t)]$ decreases, or decays, with respect to $t$.
#### Linearizing the Exponential Decay Model
Here are the data that I will use to illustrate simple linear regression.
Time (Year) DDT Concentration
[1,] 1970 19.19
[2,] 1971 13.00
[3,] 1972 11.31
[4,] 1973 9.96
[5,] 1974 8.42
[6,] 1975 7.50
[7,] 1976 5.65
[8,] 1977 6.34
[9,] 1978 4.58
[10,] 1979 6.91
[11,] 1980 4.74
[12,] 1981 3.22
[13,] 1982 2.74
[14,] 1984 2.22
[15,] 1986 1.10
[16,] 1988 1.44
[17,] 1990 1.39
[18,] 1992 1.16
[19,] 1995 0.98
[20,] 1998 0.85
Here is the scatter plot of these raw, untransformed data.
As you can see, the relationship is not very linear. Here is the scatter plot with the natural logarithmic transformation.
As expected, the relationship between the target and the predictor post-transformation looks much more linear.
#### My Own R Function for Simple Linear Regression
In an earlier blog post, I wrote my own function and script to conduct simple linear regression. First, I used setwd() to change my working directory to the folder containing this function, which was stored in the same folder as the script. Then, I used the source() function to call my function, simple.linear.regression(), so that it is loaded onto the workspace. Finally, I ran simple.linear.regression() with my 2 vectors of the predictor and the target as the 2 arguments. You can see that I tried to replicate the 4 main columns of the output from the lm() function.
> setwd('INSERT YOUR DIRECTORY PATH HERE')
> source('simple linear regression.R')
> DDT.regression.Eric = simple.linear.regression(year, ln.DDT)
> DDT.regression.Eric
Estimated Coefficient Standard Error t-statistic p-value
beta0 225.9138462 14.308314047 15.78899 5.450466e-12
beta1 -0.1133633 0.007222532 -15.69579 6.021962e-12
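As a cross-check (my own addition, not part of the original analysis), the same estimates can be reproduced in plain Python instead of the author's R function, using the ordinary least-squares formulas for the slope:

```python
import math

# Least-squares fit of ln(DDT concentration) on year; data copied from the
# table above.
year = [1970, 1971, 1972, 1973, 1974, 1975, 1976, 1977, 1978, 1979,
        1980, 1981, 1982, 1984, 1986, 1988, 1990, 1992, 1995, 1998]
ddt = [19.19, 13.00, 11.31, 9.96, 8.42, 7.50, 5.65, 6.34, 4.58, 6.91,
       4.74, 3.22, 2.74, 2.22, 1.10, 1.44, 1.39, 1.16, 0.98, 0.85]
ln_ddt = [math.log(c) for c in ddt]

n = len(year)
xbar = sum(year) / n
ybar = sum(ln_ddt) / n
sxy = sum((x - y_) * 1 for x, y_ in [(0, 0)]) if False else \
      sum((x - xbar) * (v - ybar) for x, v in zip(year, ln_ddt))
sxx = sum((x - xbar) ** 2 for x in year)

slope = sxy / sxx                      # estimate of -lambda, ~ -0.1134
decay_rate = -slope
half_life = math.log(2) / decay_rate   # ~ 6.11 years
print(decay_rate, half_life)
```

The slope agrees with the beta1 estimate printed above (−0.1133633), and the implied half-life matches the 6.114388-year value derived later in the post.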
#### Estimating the Decay Rate and the Half-Life of a Chemical from Simple Linear Regression
Recall that simple linear regression was fitted for the model
$ln[C(t)] = ln[C_0] - \lambda t$.
From the results above, the estimate of the decay rate, $\lambda$, is 0.1133633. (The negative sign in front of the estimate indicates that this is a decay rather than a growth.) The p-value is 6.021962e-12, so there is overwhelmingly strong evidence that this estimate is statistically significant.
A valuable quantity for chemists to gauge the length of time that a pollutant will stay in its environment is its half-life. I see it often calculated from a single datum in textbook examples and exercises; I think that it is much better to collect many data on the concentration of the pollutant as it decays and estimate the half-life from simple linear regression to reduce noise in the estimation. To estimate the half-life, set $C(t)$ to be half of the initial concentration, $C_0$, and solve for time.
$0.5 C_0 = C_0 e^{-0.1133633 t}$
$0.5 = e^{-0.1133633 t}$
$\frac{ln(0.5)}{-0.1133633} = t_{1/2}$
$6.114388 \ \text{Years} = t_{1/2} = \text{Half-life}$ | 2014-08-30 20:12:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 28, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8235262632369995, "perplexity": 2485.1745802928913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835699.86/warc/CC-MAIN-20140820021355-00209-ip-10-180-136-8.ec2.internal.warc.gz"} |
http://mathhelpforum.com/calculus/43580-mean-value-theorem.html | # Math Help - Mean Value Theorem
1. ## Mean Value Theorem
The problem I am working on is f(x) = x + (54/x); [6, 9].
First I took the first derivative and found that it was f '(x) = 1 - (54/x^2).
I then found f(6) = 15 and f(9) = 15.
If I plug those into the formula (f(b) - f(a))/(b - a), I get 0/3 = 0.
This just doesn't make sense!! Could someone help me?
2. What doesn't make sense?
$f'(c) = \frac{f(b)- f(a)}{b-a} \: \: c \in (a,b)$
Now, you found that f'(c) = 0, i.e. there exists some point on your interval such that the derivative is equal to 0 there. What exactly does it mean for your function to have its derivative equal to 0 at c?
Hint: Rolle's Theorem, a special case of the mean value theorem.
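Concretely, following the hint: since f(6) = f(9), the mean-value quotient is 0, and solving f′(c) = 1 − 54/c² = 0 gives c = √54 ≈ 7.35, which lies inside (6, 9). A quick numeric check in Python (my own illustration):

```python
import math

# f(x) = x + 54/x on [6, 9]; f(6) = f(9) = 15, so Rolle's theorem guarantees
# a c in (6, 9) with f'(c) = 1 - 54/c^2 = 0.
c = math.sqrt(54)          # solving 1 - 54/c^2 = 0 for c > 0
fprime = 1 - 54 / c**2
print(c, fprime)           # c ~ 7.348, and the derivative vanishes there
```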
3. Originally Posted by kelleannmoore@yahoo.com
my problem I am working on is f(x)= x+(54/x); [6,9]
First I took the first derivative and found that it was f ' (x)= 1-(54/x^2)
I then found f(6) = 15 and f(9) = 15
If I plug those into the formula (f(b) - f(a))/(b - a), I get 0/3 = 0.
This just doesn't make sense!! Could someone help me?
The Mean Value Theorem states that if $f(x)$ is continuous on some interval $D=[a,b]$ and differentiable on $(a,b)$ then
$\exists{c}\in{D}\backepsilon~f'(c)=\frac{f(b)-f(a)}{b-a}$
So we see that we have a special case of the Mean Value Theorem called Rolle's Theorem, which states that if our function meets the same conditions as before and $f(a)=f(b)$
Then $\exists{c}\in{D}\backepsilon~f'(c)=0$ | 2014-12-27 06:03:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7324884533882141, "perplexity": 427.2279647530835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447550545.65/warc/CC-MAIN-20141224185910-00014-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://finalgebra.com/algebra/time-value-of-money/ | Home » Algebra » Time Value of Money
# Time Value of Money
The time value of money accounts for changes in the financial value of money over time. In order to understand this concept, it may help to first consider the analogy of the more commonly known phenomenon of inflation. When the rate of inflation is high, money loses real purchasing power quickly. Similarly, the higher the interest rates in financial markets, the less valuable future money is in terms of today's money.
Inflation is a change in the ratio of the amount of money chasing goods and services. Where money growth exceeds that of productivity, the real value of money drops. In contrast, the time value of money is exclusively concerned with the reproduction of money through interest within the financial system.
Figure 1 shows an example bank account earning 4% interest. While its nominal value keeps growing along the green line, its present value, shown in black, is flat-lining. This is because the conversion of future or past money balances to present-time value reverses the effects of compound interest.
## Present and Future Value
Reversing compound interest effects by discounting back to present value is the essence of the concept of time value of money. Present and future values of current account balances follow from compound interest calculation. Assuming constant interest rates with discrete compounding, the relevant formulas are:
\begin{gather*}
FV(t)=PV \cdot (1+\frac{1}{m}\cdot r_i)^{t \cdot m}\\
\\
PV=FV(t) \cdot (1+\frac{1}{m}\cdot r_d)^{-t \cdot m}\\
\end{gather*}
\begin{alignat*}{2}
FV(t):~&& &\mathrm{future ~ value}\\
&& &\mathrm{at ~ time} \space t\\
PV:~&& &\mathrm{present ~ value}\\
&& &(\mathrm{at ~ time} \space 0)\\
m: ~&& & \mathrm{compounding}\\
&& & \mathrm{frequency}\\
r_i: ~&& & \mathrm{interest \space rate}\\
r_d: ~&& & \mathrm{discount \space rate}\\
\end{alignat*}
While the future value FV is an exponential growth, the present value PV is its inverse, an exponential decline[1]. Therefore, when the interest rate r_i is the same as the discount rate r_d, the present value of an interest-earning balance remains constant through time.
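The two formulas can be illustrated with a short Python sketch (my own addition): compounding forward and discounting back at the same rate returns exactly the initial balance, which is why the present value in Figure 1 stays flat.

```python
def future_value(pv, r, t, m=1):
    # FV(t) = PV * (1 + r/m)^(t*m), discrete compounding m times per year
    return pv * (1 + r / m) ** (t * m)

def present_value(fv, r, t, m=1):
    # PV = FV(t) * (1 + r/m)^(-t*m), the inverse operation
    return fv * (1 + r / m) ** (-t * m)

balance = future_value(100.0, 0.04, 10)            # 100 at 4% for 10 years ~ 148.02
print(balance, present_value(balance, 0.04, 10))   # discounting back returns 100.0
```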
For instance, the pricing of annuities is an application of discounting to present value. Though most textbook examples assume the equality of the investment interest r_i and the discount interest r_d, the two almost always differ in retail banking. Just compare offers on high-interest savings accounts and mortgages to see this.
## Time Value versus Real Value
Since the time value of money is a matter of interest rates within the financial system, its relation to the real economy is often neglected. However, it is real purchasing power that should matter most to retail banking clients.
As of writing this article in November 2022, the US and the EU are experiencing a period of financial repression[2]. During normal economic times the real interest rate, which is defined as the difference between nominal interest and inflation, is positive. Without real interest, incentives for lending and saving are lacking. Therefore, financial markets should naturally adjust interest rates to top inflation. And politics are repressive if they don’t allow this to happen.
Nevertheless, Figure 2 depicts the situation that citizens throughout the world currently find themselves in. Even though the financial system considers a savings account earning 4% interest to have a stable time value, its real value keeps dropping.
Moreover, the financial view of time value is not without contradictions in itself. Arbitrage-free pricing rules enforce a depreciation of future money balances when interest rates are rising. But rising interest rates limit the speed of money creation and thus counter inflation. Consequently, adjustments between the real and financial economy can temporarily diverge from long-term trends.
## References
[1] Present Value, Wikipedia.org
[2] The Price of Time: The Real Story of Interest, Edward Chancellor, 2022, Allen Lane
Published: November 26, 2022
Updated: November 29, 2022
Financial Algebra
Financial Algebra | 2023-03-29 15:20:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5319469571113586, "perplexity": 2667.6109043867796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00255.warc.gz"} |
https://indico.cern.ch/event/769736/timetable/?view=standard | The Modern Physics of Compact Stars and Relativistic Gravity 2019
Yerevan, Armenia
Department of Physics, Alex Manoogian str. 1, Yerevan, Armenia
Description
This conference is the 5th in a series which aims to bring together people working in the astrophysics of compact stars, the physics of dense matter, gravitation and cosmology, observations of pulsars and binary neutron stars, and related fields. It is dedicated to the 100th Anniversary of the establishment of Yerevan State University.
The previous conferences were held in 2008, 2013, 2015 and 2017.
Participants
• aakash narayan
• Abolfazl Chaman Motlagh
• Alexander Ayriyan
• Andrey Chugunov
• Aneta Magdalena Wojnar
• Anna Kotanjyan
• Ara Avetissian
• Aram Saharian
• Armen Sedrakian
• Arpine Piloyan
• Arthur Suvorov
• Arus Harutyunyan
• Arvin Ravanpak
• Ashot Gevorkyan
• Aurel Bulgac
• Bao-An Li
• Bernd-Jochen Schaefer
• Daniel Baghdasaryan
• David Blaschke
• David Edwin Alvarez Castillo
• David Sedrakian
• David Simonyan
• Diana Alvear Terrero
• Dipanjan Dey
• Dmitry Groshev
• Fatemeh Kayanikhoo
• Gevorg Hajyan
• Giuseppe Pagliara
• Gohar Harutunyan
• Grigor Alaverdyan
• Habib Yousefi Dezdarani
• Hayk Sargsyan
• Hemza Azri
• Hovhannisyan Martik
• Hovik Grigorian
• James Lattimer
• Karen Shahabasyan
• Kirill Bronnikov
• Kutay Arınç Çokluk
• Laleh Zahra Namvar
• Levon Pogosian
• Mahboubeh Shahrbaf M.
• Mekhak Hayrapetyan
• Mohsen Bigdeli
• Morteza Taghilo
• Narine Gevorgyan
• Nurgissa Myrzakulov
• Oleksii Ivanytskyi
• Peter Senger
• Roland Avagyan
• Salvatore Capozziello
• Sergey Odintsov
• Seyedeh Sedigheh Hashemi
• Shaswata Chowdhury
• Stefano Bellucci
• Sunil Kumar Tripathy
• Suvankar Paul
• Tigran Petrosyan
• Toru Kojo
• Toshitaka Tatsumi
• Udita Shukla
• Valeriy Obukhov
• Vardan Manukyan
• Yuri Vartanyan
• Zahra Sharifi
• Tuesday, 17 September
• 08:30 09:30
Registration
• 09:30 09:50
• 09:50 10:25
Some Quantum Effects in de Sitter Spacetime with Compact Dimensions 35m
We investigate the effects of background curvature, nontrivial topology and a planar boundary on the properties of the vacuum state for a charged scalar field. The background geometry is locally dS with an arbitrary number of toroidally compact dimensions. The planar boundary is perpendicular to one of the infinite dimensions, and on it the charged scalar field obeys a Robin boundary condition. Along the compact dimensions general quasiperiodicity conditions are imposed and, in addition, the presence of a constant gauge field is assumed. The latter induces an Aharonov-Bohm-type effect on the vacuum expectation values (VEVs) of physical observables. The periodicity conditions imposed on the fields along the compact dimensions modify the spectrum of the normal modes and, as a result, the expectation values of physical observables are changed. As important local characteristics of the vacuum state we consider the VEVs of the field squared, the energy-momentum tensor and the current density.
Speaker: Dr Anna Kotanjyan (Yerevan State University)
• 10:25 11:00
Searching for new physics with future CMB experiments 35m
Large scale B-mode patterns in CMB polarization, if detected, would constitute a “smoking gun” signature of primordial gravitational waves generated during an inflationary phase in the early universe. In this talk, I will discuss other sources of B-modes, such as primordial magnetic fields, axion-like fields and cosmic strings, and prospects of isolating their distinguishing features with future CMB measurements.
Speaker: Prof. Levon Pogosian (Simon Fraser University)
• 11:00 11:35
Vacuum polarization by cosmic strings 35m
We investigate the polarization of the vacuum for scalar, fermionic and electromagnetic fields induced by cosmic strings. Locally Minkowski, de Sitter and anti-de Sitter background geometries are considered. As local characteristics of the vacuum, the expectation values of the field squared and of the energy-momentum tensor are considered. The contributions induced by the nontrivial topology of a cosmic string are explicitly extracted. The asymptotic behavior of the vacuum expectation values is discussed near the string and at large distances. For the de Sitter and anti-de Sitter geometries the influence of the gravitational field on the vacuum characteristics is essential at proper distances from the string larger than the curvature radius of the background spacetime.
Speaker: Prof. Aram Saharian (Department of Physics, Yerevan State University)
• 11:35 11:55
Coffee break 20m
• 11:55 12:30
Vacuum currents in braneworlds with compact dimensions 35m
We investigate the vacuum expectation value (VEV) of the current density for charged quantum fields in the background of a locally AdS spacetime with an arbitrary number of toroidally compact dimensions and in the presence of a constant gauge field. Along the compact dimensions the field operator obeys quasiperiodicity conditions with arbitrary phases. The VEVs for the charge density and the components of the current density along the uncompact dimensions vanish. The components along the compact dimensions are decomposed into brane-free and brane-induced contributions. The behavior of the vacuum currents in various asymptotic regions of the parameters is investigated. Applications are given to braneworld models of the Randall-Sundrum type with compact dimensions. In the special case of three-dimensional spacetime, the corresponding results are applied to the investigation of edge effects on the ground state current density induced in curved graphene tubes by an enclosed magnetic flux.
Speaker: Stefano Bellucci (Istituto Nazionale Fisica Nucleare (IT))
• 12:30 13:05
Potentially observable cylindrical wormholes without exotic matter in general relativity 35m
All known solutions in GR describing rotating cylindrical wormholes lack asymptotic flatness in the radial directions and thus cannot describe wormhole entrances as local objects in our Universe. To overcome this difficulty, wormhole solutions are joined to flat asymptotic regions at some cylindrical surfaces on both sides of the throat. The whole configuration thus consists of three regions, the internal one containing a wormhole throat, and two flat external ones. It remains to find such solutions where the matter content of the internal region and both junction surfaces respect the weak energy condition. Two examples of such configurations have been found, in one of which the internal matter is represented by a stiff perfect fluid and another one with a special kind of anisotropic fluid. In both examples, the resulting configurations do not contain closed timelike curves.
Speaker: Prof. Kirill Bronnikov (VNIIMS, Moscow)
• 13:05 14:30
Lunch Break 1h 25m
• 14:30 15:00
Transport properties in magnetized compact stars 30m
Nowadays strong magnetic fields have been observed or are expected in compact stars and in relativistic heavy-ion collisions. In particular, magnetars may have a huge magnetic field of $O(10^{15}\,{\rm G})$ at the surface. We consider here the transport properties of Dirac particles in the presence of a strong magnetic field. As a phenomenological implication, the heat conductivity is interesting and important in the context of the thermal evolution of magnetars: the heat conductivity is, in general, a tensor in coordinate space, $\kappa_{ij}$ ($i,j=x,y,z$); the off-diagonal components represent the thermal Hall conductivity.
First, we discuss the electron contribution in the crust of magnetars, since conducting electrons are responsible for the main mechanism of thermal transport. The diagonal components give the thermal currents proportional to the gradient of the temperature. They come from dissipative effects in electron propagation and have a classical analogy in the Drude-Zener formula. On the other hand, the off-diagonal components consist of two parts, $\kappa_{ij} = \kappa_{ij}^{\rm I} + \kappa_{ij}^{\rm II}$ ($i\neq j$), where the first term represents the dissipative contribution, similar to the diagonal components, and has been studied by many authors [1]. However, there is little study of the second term, which is a genuine quantum effect and gives a non-dissipative contribution. It comes from the field-dependent level density and has no classical analogy [2]: the Landau levels become essential in a strong magnetic field and the density of states (DOS) is a field-dependent quantity, while the DOS is not field dependent in the classical limit. The term $\kappa_{ij}^{\rm II}$ has sometimes been missed in the literature. We elucidate its contribution by way of the Kubo formula and estimate its importance.
Next, we discuss the anomalous thermal Hall effect in quark matter, which may develop in the core of compact stars. Recently we have shown the possibility of the anomalous Hall effect in dense QCD matter by the use of the Kubo formula [3], where an inhomogeneous chiral phase (DCDW phase) is realized [4]. The important consequence is that the Hall conductivity $\kappa_{ij}$ becomes nonvanishing even in the absence of a magnetic field. It has a geometrical origin and modifies the Maxwell equations as in Weyl semimetals [5]: the energy spectrum exhibits asymmetry with respect to zero energy and produces a kind of "magnetization" in the DCDW phase, and the Hall current flows in the direction perpendicular to the magnetization. Since the thermal conductivity is closely related to the conductivity, we can expect the anomalous thermal Hall effect there as well [6]. It then gives another contribution to the thermal conductivity, independent of the magnetic field. We discuss the interplay of these terms in the non-dissipative contribution $\kappa_{ij}^{\rm II}$.
Finally, we briefly discuss some implications of the non-dissipative thermal Hall conductivity $\kappa_{ij}^{\rm II}$ in the context of the thermal evolution of magnetars.
[1] A.Y. Potekhin, J.A. Pons, D. Page, Space Sci.Rev. 191 (2015) 239.
[2] P. Streda, Solid State Phys. 15 (1982) L717.
[3] T.Tatsumi, R. Yoshiike, K. Kashiwa, PLB 785(2018) 46.
[4] E. Nakano and T. Tatsumi, PRD 71 (2005) 114006.
[5] N.P. Armitage, E.J. Mele, A. Vishwanath, Rev.Mod.Phys. 90, 015001
[6] L. Smrcka and P. Streda, J.Phys. C10 (1977) 2153.
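As background context, not part of the abstract: the non-dissipative term $\kappa_{ij}^{\rm II}$ is of the Streda type. A schematic summary, with sign conventions varying in the literature and the final relation assuming a Wiedemann-Franz-type link valid for degenerate electrons:

```latex
% Streda-type non-dissipative Hall term (electrical counterpart),
% evaluated at fixed chemical potential \mu; sign conventions vary.
\sigma_{xy}^{\mathrm{II}} = e\,c \left(\frac{\partial n}{\partial B}\right)_{\mu}
% For degenerate electrons the thermal Hall conductivity follows via a
% Wiedemann-Franz-type relation with the Sommerfeld value of the
% Lorenz number:
\kappa_{ij} \simeq L_{0}\, T\, \sigma_{ij}, \qquad
L_{0} = \frac{\pi^{2} k_{B}^{2}}{3 e^{2}}
```

The field dependence of the density of states $n(\mu, B)$ through the Landau levels is what makes $\sigma_{xy}^{\mathrm{II}}$, and hence $\kappa_{ij}^{\rm II}$, a genuinely quantum, non-dissipative contribution.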
Speaker: Toshitaka Tatsumi (Kyoto U.)
• 15:00 15:30
Low-mass stellar objects in modified gravity 30m
I will demonstrate how the minimum main sequence mass of low-mass stars is affected by Palatini gravity: it turns out that such objects, whose internal structure is known better than that of compact stars, can be used to test modified theories of gravity.
Speaker: Dr Aneta Magdalena Wojnar (Espirito Santo University)
• 16:00 18:00
Excursion to Garni and Geghard
• Wednesday, 18 September
• 09:00 09:40
The universe acceleration in modified gravity: an overview 40m
A general introduction to the cosmology of modified gravity is given. It is shown that different forms of modified gravity are possible, many of them consistent with Solar system tests and cosmological bounds. Special attention is paid to F(R) gravity. It is shown that such a theory may naturally describe early-time inflation together with late-time acceleration (the dark energy epoch). Realistic versions of F(R) gravity are proposed. The inflationary indices are shown to be consistent with the Planck experiment. New ghost-free versions of modified gravity are introduced and their cosmological evolution is studied. It is shown that they may naturally give the unification of inflation with dark energy, while the scalar field which appears there plays the role of dark matter.
Speaker: Prof. Sergey Odintsov (ICREA and ICE-CSIC, Barcelona)
• 09:40 10:20
Time-Dependent Density Functional Theory for Fermionic Superfluids: from Cold Atomic Gases, to Nuclei and Neutron Stars Crust 40m
In cold atoms and in the crust of neutron stars the pairing gap can reach values comparable with the Fermi energy. While in nuclei the neutron gap is smaller, it is still of the order of a few percent of the Fermi energy. The pairing mechanism in these systems is due to short range attractive interactions between fermions, and the size of the Cooper pair is either comparable to the inter-particle separation or it can be as big as a nucleus, which is still relatively small in size. Such a strong pairing gap is the result of the superposition of a very large number of particle-particle configurations, which contribute to the formation of the Cooper pairs. These systems have been shown to be the host of a large number of remarkable phenomena, in which the large magnitude of the pairing gap plays an essential role: quantum shock waves, quantum turbulence, the Anderson-Higgs mode, vortex rings, domain walls, soliton vortices, vortex pinning in the neutron star crust, unexpected dynamics of fragmented condensates and the role of pairing correlations in collisions of heavy ions, the Larkin-Ovchinnikov phase as an example of a Fermi supersolid, the role pairing correlations play in controlling the dynamics of fissioning nuclei, and self-bound superfluid fermion droplets of extremely low densities.
Speaker: Prof. Aurel Bulgac (University of Washington)
• 10:20 10:40
Coffee break 20m
• 10:40 11:20
High-Density Nuclear Symmetry Energies from Observations of Neutron Stars and Gravitational Waves 40m
The high-density behavior of the nuclear symmetry energy is the most uncertain part of the Equation of State (EOS) of dense neutron-rich nucleonic matter [1]. It has significant ramifications for understanding the properties of nuclear reactions induced by rare isotopes, neutron stars and gravitational waves from various sources. Using a new technique of inverting numerically the Tolman-Oppenheimer-Volkoff (TOV) equation and Bayesian inferences, we show that a firmly restricted EOS parameter space is established using observational constraints on the radius, maximum mass, tidal deformability and causality condition of neutron stars [2,3,4]. The constraining band obtained for the pressure as a function of energy (baryon) density is in good agreement with that extracted recently by the LIGO and Virgo Collaborations from their improved analyses of the tidal deformability of the neutron stars involved in the GW170817 event. Rather robust upper and lower boundaries on nuclear symmetry energies are extracted from the observational constraints up to about twice the saturation density of nuclear matter. Moreover, by studying variations of the causality surface where the speed of sound equals that of light at the central densities of the most massive neutron stars within the restricted EOS parameter space, the absolute maximum mass of neutron stars is found to be about 2.40 Msun. Implications of these findings for the recently reported mass of 2.17 Msun of PSR J0740+6620 [5,6], calculations of the EOS of dense neutron-rich matter, heavy-ion reactions in terrestrial laboratories, as well as the frequencies and damping times of oscillating modes of neutron stars are discussed [7].
References
[1] B.A. Li, Nuclear Physics News 27, 7 (2017).
[2] N.B. Zhang, B.A. Li and J. Xu, The Astrophysical J. 859, 90 (2018).
[3] N.B. Zhang and B.A. Li, J. Phys. G: Nucl. Part. Phys. 46, 014002 (2019).
[4] N.B. Zhang and B.A. Li, EPJA 55, 39 (2019).
[5] H. T. Cromartie et al., arXiv:1904.06759
[6] N.B. Zhang and B.A. Li, arXiv:1904.10998, The Astrophysical J. (2019) in press.
[7] D.H. Wen, B.A. Li, H.Y. Chen and N.B. Zhang, Phys. Rev. C 99, 045806 (2019)
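The Bayesian TOV inversion mentioned above rests on the forward problem of integrating the TOV equation. Below is a minimal sketch, not from the talk, of a forward TOV integration for an illustrative polytropic EOS in geometrized units ($G=c=1$); the EOS parameters and central pressure are illustrative choices, not values used by the authors:

```python
import math

# Sketch (illustrative, not from the talk): forward integration of the
# Tolman-Oppenheimer-Volkoff (TOV) equation for a polytropic EOS
# P = K * rho**Gamma, in geometrized units G = c = 1.

def tov_rhs(r, P, m, K=100.0, Gamma=2.0):
    """dP/dr and dm/dr of the TOV equations."""
    if P <= 0.0:
        return 0.0, 0.0
    rho = (P / K) ** (1.0 / Gamma)       # rest-mass density from the EOS
    eps = rho + P / (Gamma - 1.0)        # total energy density (polytrope)
    dPdr = -(eps + P) * (m + 4.0 * math.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * math.pi * r**2 * eps
    return dPdr, dmdr

def integrate_star(P_c, dr=1e-3):
    """Euler-integrate outward from the centre until the pressure
    vanishes; returns (radius R, gravitational mass M)."""
    r, P, m = dr, P_c, 0.0
    while P > 1e-12 * P_c:
        dPdr, dmdr = tov_rhs(r, P, m)
        P += dPdr * dr
        m += dmdr * dr
        r += dr
    return r, m

# Central pressure chosen to give a neutron-star-like configuration
# for this toy EOS (illustrative value).
R, M = integrate_star(P_c=1.6e-4)
print(R, M)  # radius and mass in geometrized units
```

A real inversion would wrap such a solver in a sampler over EOS parameters and compare the resulting mass-radius and tidal-deformability curves with the observational constraints.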
Speaker: Prof. Bao-An Li (Department of Physics and Astronomy, Texas A&M University-Commerce, Commerce, TX 75429, USA)
• 11:20 12:00
Cosmic matter in the laboratory- The Compressed Baryonic Matter experiment at FAIR 40m
The Compressed Baryonic Matter (CBM) experiment is one of the major scientific pillars of the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt. In collisions between heavy nuclei at FAIR energies, it is expected that the matter in the reaction zone is compressed to more than five times saturation density, corresponding to the density in the core of a massive neutron star. This offers the unique opportunity to study in the laboratory the high-density equation of state (EOS) of nuclear matter, and to search for new phases of QCD matter at large baryon chemical potentials. Promising experimental observables sensitive to the EOS and to possible phase transitions will be discussed, together with the expected performance of the CBM experiment and the status of the FAIR project.
Speaker: Prof. Peter Senger (GSI, MEPhI)
• 12:00 13:30
Lunch break 1h 30m
• 13:30 14:00
Merger of compact stars in the two-families scenario 30m
I will discuss the phenomenological implications of the two-families scenario for the merger of compact stars. After reviewing the main properties of this scenario, which is based on the coexistence of hadronic stars (HSs) and quark stars (QSs), I will present results of population synthesis analyses for the estimates of the rate of events associated with the merger of two HSs, two QSs, or a HS and a QS. I will then move to the results obtained by numerical simulations of HS-HS mergers concerning the threshold mass for prompt collapse, the postmerger GW signal and the mass dynamically ejected. Finally, after discussing the interpretation of GW170817 as due to the merger of a HS-QS system, I will argue that the specific signature of our scenario is the observation of cases of prompt collapse even for systems with a mass smaller than 2.74 M_sun (i.e. the mass of the source of GW170817).
Speaker: Giuseppe Pagliara
• 13:30 14:00
Scalar Field and Quintessence in Gauge Symmetry Group $SU(2)\otimes U(1)$ 30m
We consider the formation of structured and massless particles with spin 1 (vector bosons), using a system of Yang-Mills-like stochastic equations for the symmetry group $SU(2)\otimes U(1)$ without taking into account the nonlinear term characterizing self-action. We prove that, in the first phase of relaxation, as a result of multi-scale random fluctuations of quantum fields, massless particles with spin 1, further referred to as \emph{hions}, are generated in the form of statistically stable quantized structures, which are localized on 2D topological manifolds. We also study the wave state and the geometrical structure of the \emph{hion} when it is a free particle and, accordingly, when it interacts with a random environment and becomes a quasi-particle with a finite lifetime. In the second phase of relaxation, the vector boson makes spontaneous transitions to other massless and massive states. The problem of the entanglement of two \emph{hions} with opposite spin projections $+1$ and $-1$ and the formation of a scalar zero-spin boson is also thoroughly studied. We analyze the properties of the scalar field (dark energy-quintessence) and show that it corresponds to a Bose-Einstein (BE) condensate. The scalar boson decay problems, as well as a number of features characterizing the stability of the BE condensate, are also discussed. We then report on the structure of empty space-time in the context of new properties of the quantum vacuum, implying the existence of a natural quantum computer with complicated logic, which manifests in the form of dark energy. The possibilities of space-time engineering are also discussed.
Speaker: Prof. Ashot Gevorkyan (Institute for Informatics and Automation Problems NAS of RA/Institute Chemical Physics NAS of RA)
• 14:00 14:30
Cosmological constant induced by a bulk scalar in braneworlds with compact dimensions 30m
We investigate the vacuum expectation value (VEV) of the surface energy-momentum tensor for a charged scalar field in a higher dimensional locally anti-de Sitter spacetime with two parallel branes and with a compact dimension (generalized Randall-Sundrum model). The presence of a constant background gauge field is assumed. The latter gives rise to an Aharonov-Bohm-type effect on the characteristics of the scalar vacuum. The problem is reduced to the investigation of the VEV of the field squared on the branes. It is shown that the VEV can be decomposed into three contributions representing the VEV in the brane-free geometry, the VEV in a single brane geometry, and the contribution due to the second brane. The latter is investigated, and it is shown that it gives rise to a cosmological constant on the visible brane (our universe). The behavior of the cosmological constant is studied as a function of the locations of the branes, of the length of the compact dimension and of the magnetic flux enclosed by the compact dimension. In particular, it is shown that the cosmological constant is a periodic function of the magnetic flux with the period equal to the flux quantum. Depending on the parameters of the problem it can be either negative or positive.
Speaker: Hayk Sargsyan (Yerevan State University)
• 14:00 14:30
First-order phase transition from hypernuclear matter to deconfined quark matter obeying new constraints from compact star observations 30m
We reconsider the problem of the hyperon puzzle and its suggested solution by quark deconfinement within the two-phase approach to hybrid compact stars, with recently obtained hadronic and quark matter equations of state. For the hadronic phase we employ the hypernuclear equation of state from the lowest order constrained variational method, and the quark matter phase is described by a sufficiently stiff equation of state based on a color superconducting nonlocal Nambu-Jona-Lasinio model with constant (model A) and with density-dependent (model B) parameters. We provide for the first time a hybrid star EoS with an intermediate hypernuclear matter phase for which the maximum mass of the compact star reaches 2.2 solar masses.
Speaker: Mahboubeh Shahrbaf
• 14:30 15:00
Decrease in Mass of the Protoquark Stars During their Cooling 30m
The change in mass of protoquark stars during their cooling is studied. When a supernova explodes, its central part shrinks so quickly that the lepton charge does not have time to change through weak processes. Therefore, chemical equilibrium is established after the formation of the protoquark star with a temperature of $10^{12}$ K, when the star's matter is opaque to neutrinos. It is shown that in this state the thermal energy reserves of the hot quark matter are huge: up to 20-40 percent of the total energy. This state of the star does not last long, but it can play a crucial role in the future fate of the star. When it cools down, all this energy leaves the star. Therefore the mass of the cooled quark star will be less than the mass of the original protoquark star by 20-40 percent too.
The maximum masses of cold and hot quark stars differ slightly. Consequently, among the existing quark stars, the number of massive stars will be relatively smaller. This may also be the case for protoneutron stars.
Yerevan State University.
ghajyan@ysu.am
Speaker: Mr Gevorg Hajyan (Yerevan State University)
• 14:30 15:00
Electrodynamics of axion-active system: polarization and stratification of plasma in an axionic dyon magnetosphere. 30m
The state of a static spherically symmetric relativistic axionically active multi-component plasma in the gravitational, magnetic and electric fields of an axionic dyon is studied in the framework of the Einstein-Maxwell-Boltzmann-axion theory. We assume that the equations of axion electrodynamics, the covariant relativistic kinetic equations, and the equation for the axion field with a modified Higgs-type potential are nonlinearly coupled; the gravitational field in the dyon exterior is assumed to be fixed and of the Reissner-Nordstrom type. We introduce the extended Lorentz force, which acts on the particles in the axionically active plasma, and analyze the consequences of this generalization. The analysis of exact solutions, obtained in the framework of this model for relativistic Boltzmann electron-ion and electron-positron plasmas, as well as for a degenerate zero-temperature electron gas, shows that phenomena of polarization and stratification can appear in the plasma, drawing attention to the axionic analog of the known Pannekoek-Rosseland effect.
[1] Pannekoek, A.: 1922, Bull. Astron. Inst. Neth. 1, 107-118.
[2] Rosseland, S.: 1924, Monthly Notices Roy. Astron. Soc. 84, 720-728.
Speaker: Mr Dmitri Groshev (KFU)
• 15:00 15:30
Coffee break 30m
• 15:30 16:00
Electromagnetic vacuum densities around a cosmic string in de Sitter spacetime 30m
We evaluate the vacuum expectation values (VEVs) of the electric and magnetic fields squared and of the energy-momentum tensor for the electromagnetic field around a cosmic string on the background of (D+1)-dimensional locally de Sitter spacetime. It is assumed that the field is prepared in the Bunch-Davies vacuum state. The topological contributions in the VEVs are explicitly separated. It is shown that in spatial dimensions other than 3 the part of the vacuum energy-momentum tensor induced by the cosmic string, in addition to the diagonal components, has a nonzero off-diagonal component corresponding to the energy flux along the radial direction. The asymptotic behavior of the VEVs is discussed near the string and at proper distances larger than the curvature radius of the de Sitter spacetime.
Speaker: Dr Vardan Manukyan (Shirak State University)
• 15:30 16:00
Second look to the Polyakov Loop Nambu-Jona-Lasinio model at finite baryonic density 30m
We revisit the Polyakov-loop-coupled Nambu-Jona-Lasinio model, which maintains the Polyakov loop dynamics at zero temperature, the regime most interesting for astrophysical applications. For this purpose we re-examine the potential for the deconfinement order parameter at finite baryonic densities. Secondly, and most importantly, we explicitly demonstrate that a naive modification of this potential at any temperature is formally equivalent to assigning a baryonic charge to gluons. We develop a general formulation of the present model which is free of the discussed defect and is normalized to the asymptotics of the QCD equation of state given by $\mathcal{O}(\alpha_s^2)$ perturbative results. We also demonstrate that the incorporation of the Polyakov loop dynamics into the present model sizably stiffens the quark matter equation of state, supporting the existence of heavy compact stars with quark cores.
Speaker: Dr Oleksii Ivanytskyi (University of Salamanca)
• 16:00 16:30
More on Complexity in Finite Cut Off Geometry 30m
It has been recently proposed that the late-time behavior of holographic complexity in an uncharged black brane solution of Einstein-Hilbert theory with a boundary cut off is consistent with Lloyd's bound if we have a cut off behind the horizon. Interestingly, the value of this new cut off is fixed by the boundary cut off. In this paper, we extend this analysis to charged black holes. Concretely, we find the value of this new cut off for charged small black hole solutions of Einstein-Hilbert-Maxwell theory, in which the proposed bound on the complexification is saturated. We also explore this new cut off in Gauss-Bonnet-Maxwell theory.
Speaker: Seyedeh Sedigheh Hashemi
• 16:00 16:30
Third family of compact stars within a nonlocal chiral quark model equation of state 30m
A class of hybrid compact star equations of state is investigated that joins by a Maxwell construction a low-density phase of hadronic matter, modeled by a relativistic mean-field approach with excluded nucleon volume, with a high-density phase of color superconducting two-flavor quark matter, described within a nonlocal covariant chiral quark model. We find the conditions on the vector meson coupling in the quark model under which a stable branch of hybrid compact stars occurs in the cases with and without diquark condensation. We show that these hybrid stars do not form a third family disconnected from the second family of ordinary neutron stars unless additional (de)confining effects are introduced with a density-dependent bag pressure. A suitably chosen density dependence of the vector meson coupling assures that at the same time the 2 M⊙ maximum mass constraint is fulfilled on the hybrid star branch. A twofold interpolation method is realized which implements both the density dependence of a confining bag pressure at the onset of the hadron-to-quark matter transition and the stiffening of quark matter at higher densities by a density-dependent vector meson coupling. For three parametrizations of this class of hybrid equation of state the properties of the corresponding compact star sequences are presented, including mass twins of neutron and hybrid stars at 2.00, 1.39 and 1.20 M⊙, respectively. The sensitivity of the hybrid equation of state and the corresponding compact star sequences to variations of the interpolation parameters at the 10% level is investigated, and it is found that the feature of third family solutions for compact stars is robust against such a variation. This advanced description of hybrid star matter allows one to interpret GW170817 as a merger not only of two neutron stars but also of a neutron star with a hybrid star or of two hybrid stars.
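As context for the Maxwell construction named above (this is its standard definition, not a detail specific to this work): the two phases are matched at equal pressure and a common baryochemical potential,

```latex
% Maxwell construction between the hadronic (H) and quark (Q) phases:
% the deconfinement transition occurs at the critical baryochemical
% potential \mu_B^c where the pressures of the two phases coincide.
P_{\mathrm{H}}\!\left(\mu_B^{c}\right) = P_{\mathrm{Q}}\!\left(\mu_B^{c}\right)
```

Below $\mu_B^{c}$ the hadronic phase has the higher pressure and is realized; above it the quark phase takes over, producing the density jump characteristic of a first-order transition.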
Speaker: Dr David Edwin Alvarez Castillo (Joint Institute for Nuclear Research)
• 16:30 17:00
Investigations of the region of the extended radio source 3C315 30m
The 3C315 galaxy and its surroundings were examined. Counts of galaxies and quasars show a deficit of both in that domain. Only 4 of the 35 domains examined show such a lack of quasars and galaxies. This deficit of galaxies and quasars explains why there is empty space around the 3C315 galaxy.
Speaker: Dr Martik Hovhannisyan (BAO)
• 17:00 17:30
Dynamics of anisotropic dark energy universe embedded in one-directional magnetized fluid 30m
This work discovers a few extraordinary features of an anisotropic dark energy cosmological model in a two-fluid situation, with the usual dark energy and an electromagnetic fluid. We have assumed the dark energy pressure to be anisotropic in the spatial directions in terms of skewness parameters and have studied their behavior through the cosmic evolution. In order to yield a healthy mathematical formalism of the model, we have considered the scale factor as a hybrid scale factor: a combination of both power-law and volumetric (de Sitter) expansion laws, showing a transitional phase in between. The physical parameters are derived, analyzed and found to be in agreement with recent observational data. The evolution of the equation of state parameter obtained here presents a scenario which is consistent with three different stages of the evolutionary universe, namely the radiation dominated, matter dominated and dark energy dominated eras. This work also clearly compares the effect of the magnetized fluid with that of other cosmic fluids (discussed in our earlier works) along with the dark energy fluid. Moreover, we observed that the electromagnetic fluid strongly dominates the early phase of evolution over any other cosmic fluid, whereas the late cosmic epoch is completely filled and driven by the dark energy fluid. We also diagnosed the model through state-finder parameters and compared it with the $\Lambda$CDM model to convey its physical acceptability.
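The hybrid scale factor mentioned above is commonly written as a product of power-law and exponential (de Sitter) factors; this explicit form is an assumption here, since the abstract does not spell it out:

```latex
% Hybrid expansion law: power law times de Sitter (exponential) factor.
% It reduces to a pure power law for \beta \to 0 and to de Sitter
% expansion for \alpha \to 0, with a transitional phase in between.
a(t) = a_{0}\, t^{\alpha}\, e^{\beta t}, \qquad \alpha,\ \beta \ge 0
```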
• Thursday, 19 September
• 09:00 09:40
Delineating the properties of matter in cold, dense QCD 40m
The properties of dense QCD matter are delineated through the construction of equations of state which should be consistent with QCD calculations in the low and high density limits, nuclear laboratory experiments, and neutron star observations. These constraints, together with the causality condition on the sound velocity, are used to develop the picture of hadron-quark continuity, in which hadronic matter continuously transforms into quark matter (modulo small 1st-order phase transitions). The resultant unified equation of state at zero temperature and beta-equilibrium, which we call Quark-Hadron-Crossover (QHC18 and QHC19), is consistent with the measured properties of neutron stars and in addition gives us microscopic insights into the properties of dense QCD matter.
Speaker: Prof. Toru Kojo (Central China Normal University)
• 09:40 10:20
Implications of Binary Neutron Star and Black Hole-Neutron Star Mergers for Neutron Stars and Dense Matter 40m
Newly discovered binary neutron star and black hole-neutron star mergers via gravitational waves can offer interesting constraints on the properties of dense matter. There are also important implications for the structure and composition of neutron stars. In the case of black hole-neutron star mergers, it is shown how to infer information about the components from GCN announcements, long before the LIGO/VIRGO collaboration publishes their results. New information from X-ray observations of neutron stars, such as from the Neutron Star Interior Composition ExploreR (NICER), can be combined with the gravitational wave data to further constrain these properties.
Speaker: James Lattimer (Stony Brook University)
• 10:20 10:40
Coffee break 20m
• 10:40 11:20
Hydrostatic equilibrium and stellar structure in Extended Gravity 40m
We investigate the hydrostatic equilibrium of stellar structure by taking into account the modified Lane–Emden equation coming from Extended Theories of Gravity. Such an equation is obtained in the metric approach by considering the Newtonian limit of Extended Gravity, which gives rise to a modified Poisson equation, and then introducing a relation between pressure and density with polytropic index n. The modified equation results in an integro-differential equation, which, in the limit of General Relativity, becomes the standard Lane–Emden equation. We find the radial profiles of the gravitational potential by solving it for some values of n. The comparison of the solutions with those coming from General Relativity shows that they are compatible and physically relevant. A comparison with observational data of some peculiar objects is presented.
Speaker: Salvatore Capozziello (INFN - National Institute for Nuclear Physics)
• 11:20 12:00
The neutron star crust: Elasticity, breaking strength, durability and enhancement of the thermonuclear reaction rates 40m
In this talk I briefly review material properties of the neutron star crust and the plasma screening effects on nuclear reaction rates. I start with elastic properties. In particular, I demonstrate that for pure Coulomb crystals the elasticity tensor has an additional symmetry, which does not depend on the actual crystalline structure and composition. As a particular result of this symmetry, the effective (Voigt-averaged) shear modulus of polycrystalline matter can be derived from the lattice (Madelung) energy. This leads to a universal upper limit for the effective shear modulus of polycrystalline or disordered neutron star crust. In the second part of the talk, I discuss current constraints on the maximal elastic deformation of the neutron star crust, crust durability at maximal deformations, and the possibility of plastic motions. The final part of the talk is devoted to plasma screening enhancement of the nuclear reaction rates, focusing on the requirement of consistency with the detailed balance principle.
Speaker: Dr Andrey Chugunov (Ioffe Institute)
• 12:00 13:30
Lunch break 1h 30m
• 13:30 19:00
Excursion to Gyumri
• 13:30 19:00
Excursion to Sevan Lake
• Friday, 20 September
• 09:00 09:40
Young magnetars with fracturing crusts as fast radio burst repeaters 40m
Fast radio bursts (FRBs) are short (duration ~ ms) but intense (flux ~ Jy) flashes, generally believed to be of extragalactic origin due to their high dispersion measures, which appear in the GHz-band. Currently, there are two sources which are known to repeat, thereby suggesting that there may be at least a subclass of FRBs resulting from transient outbursts of a young, compact object. We discuss some of the statistics surrounding the repeating bursts, and explore what this might indicate about the progenitors. We consider the possibility that FRBs are instigated by crustal fractures in young (~ 100 yrs) magnetars, whose crust yields due to strong, and topologically complicated, magnetic stresses, which build up as the field evolves rapidly due to Hall drift and ambipolar diffusion.
Speaker: Dr Arthur Suvorov (University of Tübingen)
• 09:40 10:20
Quark-hadron pasta phases for two-phase approaches and the third family of compact stars 40m
The effect of pasta phases on the quark-hadron phase transition is investigated for a set of relativistic mean-field equations of state for both hadron and quark matter. The results of the full numerical solution with pasta phases are compared with those of an interpolating construction used in previous works, for which we demonstrate an adequate description of the numerical results. A one-to-one mapping of the free parameter of the construction to the physical surface tension of the quark-hadron interface is obtained, for which a fit formula is given. For each pair of quark and hadron matter models the critical value of the surface tension is determined, above which the phase transition becomes close to the Maxwell construction. This result agrees well with earlier theoretical estimates. The study is extended to neutron star matter in beta equilibrium with electrons and muons and is applied to investigate the effect of pasta phases on the structure of hybrid compact stars and the robustness of a possible third-family solution.
[1] K. Maslov et al., Phys. Rev. C 100, 025802 (2019)
Speaker: David Blaschke (University of Wroclaw)
• 10:20 10:40
Coffee break 20m
• 10:40 11:20
On the role of fluctuations in compact objects 40m
NA
Speaker: Bernd-Jochen Schaefer
• 11:20 12:00
Some aspects of the cooling of neutron stars 40m
Measurements of the low masses for the pulsar PSR J0737-3039B, for the companion of PSR J1756-2251 and for the companion of PSR J0453+1559 on the one hand, and of the high masses for the pulsars PSR J1614-2230 and PSR J0348+0432 on the other, demonstrate the existence of compact stars with masses in a broad range from 1.2 to 2 Msun. We show that for a realistic stellar matter EoS it is possible to explain the whole set of cooling data within the "nuclear medium cooling" scenario for compact stars by a variation of the star masses. We select appropriate proton gap profiles from those exploited in the literature and allow for a variation of the effective pion gap controlling the efficiency of the medium-modified Urca process. Using the set of existing observational temperature-age data for neutron stars, one can also extract their possible mass distribution from the cooling model, because for each observed compact object its mass can be predicted from the model. Such an analysis has been performed for a particular EoS (the DD2 model) and shows that indeed the interval of masses from 1.2 to 2 Msun should be equally populated.
Speaker: Dr Hovik Grigorian (JINR & AANL & YSU)
• 12:00 13:30
Lunch break 1h 30m
• 13:30 14:00
LOCV calculation of equation of state and binary neutron stars 30m
The binary system of neutron stars (NSs) has drawn a lot of astrophysicists' attention in past years. Discovery of the gravitational-wave (GW) signal from GW170817, the compact binary inspiral event, has given rise to multi-messenger astronomy, which indeed provides substantial data about the interior of dense matter. In addition, such events have the potential to elucidate information about the equations of state (EOSs) of NS matter.
Structural and tidal parameters of NSs in the observed binary neutron star merger are studied employing realistic equations of state. Note that we use the same EOS for each component of the merger in the low-spin-prior case. The value of the dimensionless tidal deformability $\Lambda$ is calculated as $216<\Lambda<314$ for the 1.4 $M_{\odot}$ configuration of a NS with the EOSs of the Argonne family of potentials, in addition to UV14 accompanied by TNI, applying the LOCV method [1]. Fixing the chirp mass at 1.188 $M_{\odot}$, the mass ratio of the components, q, is set following the recent results obtained with the PhenomPNRT waveform model: (0.73, 1) for the low-spin case. Our results for the weighted dimensionless tidal deformability $\widetilde{\Lambda}$ agree well with the recent constraints on its lower limit: $300_{-230}^{+420}$ [2]. Moreover, it is found that some EOSs with Argonne family potentials, such as AV6' and AV8', can be ruled out because their consequences lie far outside the credible intervals. We have also investigated the impact of a quark core and the van der Waals equation of state on the tidal deformability of neutron stars in a binary system.
[1] Z. Sharifi, M. Bigdeli, submitted to the Journal of Physics G: Nuclear and Particle Physics.
[2] B. P. Abbott et al., Phys. Rev. X. 9, 011001 (2019).
Speaker: Zahra Sharifi
• 14:00 14:30
Modeling anisotropic magnetized white dwarfs with γ metric 30m
The effect of magnetic fields in the equations of state (EoS) of compact objects is the splitting of the pressure into two components, one parallel and the other perpendicular to the magnetic field. This anisotropy suggests the necessity of using structure equations that account for the axial symmetry of the magnetized system. In this work, we consider an axially symmetric metric in spherical coordinates, the γ-metric, and construct a system of equations to describe the structure of spheroidal compact objects. In addition, we connect the geometrical parameter γ, linked to the spheroid's radii, with the source of the anisotropy. Thus, the model relates the shape of the compact object to the physics that determines the properties of the composing matter. To illustrate how our structure equations work, we obtain the mass-radius solutions for magnetized white dwarfs. Our results show that the main effect of the magnetic field anisotropy on white dwarf structure is to deform these objects. Since this effect is only relevant at low densities, it does not affect the maximum masses of magnetized white dwarfs, which remain under the Chandrasekhar limit.
Speaker: Ms Alvear Diana (ITF, Wroclaw University)
• 14:30 15:00
Astrophysical aspects of general relativistic mass twin stars 30m
An effective multi-polytrope equation of state (EoS) model is used to study the so-called "mass twins" scenario, where two compact stars have approximately the same mass but (significantly for observation) quite different radii. Stellar mass twin configurations are obtained if a strong first-order phase transition occurs in the interior of a compact star. In the mass-radius diagram of compact stars, this leads to a third branch of gravitationally stable stars with features that are very different from those of white dwarfs and neutron stars. Rotating hybrid star sequences are discussed in the slow-rotation approximation and in full general relativity, and conclusions are drawn [1] for an upper limit on the maximum mass of nonrotating compact stars that has recently been deduced from the observation of the merger event GW170817.
[1] D. Blaschke et al., arXiv:1906.02522 (2019)
Speaker: Noshad Khosravi (Alzahra University, Iran)
• 15:00 15:30
Coffee break 30m
• 15:30 16:00
Hybrid Star Properties within the Nambu - Jona-Lasinio (NJL) Model for Quark Matter and Relativistic Mean Field (RMF) Model for Hadronic Matter 30m
We study the properties of compact stars by taking into account the hadron-quark phase transition, as a result of which a quark matter core is formed in the central part of the star. In order to describe the quark matter, the local version of the three-flavor Nambu-Jona-Lasinio (NJL) model is used. The thermodynamic characteristics of the hadronic matter are calculated within the framework of the extended version of the relativistic mean field (RMF) model, in which the contribution of the scalar-isovector $\delta$-meson effective field is also taken into account. To determine the parameters of the phase transition, both the Maxwell and the Gibbs constructions are applied. It is shown that for the equation of state considered here, a narrow central density interval, $\rho_c \in (1.71, 1.73] \times 10^{15}\,\mathrm{g/cm^3}$, corresponds to stable neutron stars with a deconfined quark matter core. Our study showed that compact stars with masses of $2 M_\odot$ are compatible with the possible existence of deconfined quark matter in their cores.
Speaker: Dr Grigor Alaverdyan (Yerevan State University)
• 16:00 16:30
Vacuum fluxes from a brane in de Sitter spacetime with compact dimensions 30m
We investigate the vacuum expectation value of the energy flux density for a complex scalar field in de Sitter spacetime with an arbitrary number of toroidally compact spatial dimensions and in the presence of a brane. Quasiperiodicity conditions with arbitrary phases are imposed along the compact dimensions, and on the brane the field obeys the Robin boundary condition. Depending on the values of the parameters in the problem, the flux can be directed either away from or towards the brane. The behavior of the flux density in various asymptotic regions is investigated. It is shown that the energy flux density is an even periodic function of the magnetic fluxes enclosed by the compact dimensions, with the period equal to the flux quantum.
Speaker: Dr David Simonyan
• 16:30 17:00
Quantum effects for a spherical shell in the Milne universe 30m
The influence of a spherical boundary on the vacuum fluctuations of a massive scalar field is investigated in the background of a $\left( D+1\right)$-dimensional Milne universe, assuming that the field obeys the Robin boundary condition on the sphere. The normalized mode functions are derived for the regions inside and outside the sphere. For the interior region, the boundary-induced contribution is explicitly extracted from the Wightman function with the help of the generalized Abel-Plana summation formula. The vacuum expectation values (VEVs) of the field squared and of the energy-momentum tensor are investigated for the conformal vacuum. They are decomposed into boundary-free and boundary-induced contributions. For the latter, rapidly convergent integral representations are provided. In addition to the diagonal components, the vacuum energy-momentum tensor has an off-diagonal component that describes an energy flux along the radial direction.
Speaker: Mr Tigran Petrosyan (Yerevan State University)
• 19:00 22:00
Social dinner
• Saturday, 21 September
• 09:00 09:20
Special Session devoted to 80th anniversary of Prof. David M. Sedrakian
• 09:20 09:50
Time-dependent Ginzburg-Landau equations for interacting neutron-proton superfluid in neutron stars 30m
We discuss a time-dependent generalization of the stationary Ginzburg-Landau theory for interacting neutron superfluid and proton superconducting condensates and its modification in the presence of rotation.
Speaker: Prof. Karen Shahabasyan (Yerevan State University )
• 09:50 10:20
Bulk viscosity of baryonic matter with trapped neutrinos 30m
We study bulk viscosity arising from weak-current Urca processes in dense baryonic matter at and beyond nuclear saturation density. We consider the temperature regime where neutrinos are trapped and therefore have non-zero chemical potential. We model the nuclear matter in a relativistic density functional approach, taking into account the trapped neutrino component. We find that the resonant maximum of the bulk viscosity would occur at or below the neutrino trapping temperature, so in the neutrino-trapped regime the bulk viscosity decreases with temperature as $T^{-2}$, this decrease being interrupted by a drop to zero at a special temperature where the proton fraction becomes density-independent and the material scale-invariant. The bulk viscosity is larger for matter with lower lepton fraction, i.e., larger isospin asymmetry. We find that bulk viscosity in the neutrino-trapped regime is smaller by several orders of magnitude than in the neutrino-transparent regime, which implies that bulk viscosity in neutrino-trapped matter is probably not strong enough to affect the evolution of neutron star mergers.
Speaker: Dr Arus Harutyunyan (Byurakan Astrophysical Observatory, Yerevan State University)
• 10:20 10:40
Coffee break 20m
• 10:40 11:00
Special session devoted to 60th anniversary of Prof. David Blaschke
• 11:00 11:30
Bayesian Analysis for Extracting Properties of the Nuclear Equation of State 30m
We perform a Bayesian analysis for selecting the most probable equation of state under a set of constraints from compact star physics, which now include the tidal deformability from GW170817. We considered a two-parameter family of hybrid equations of state, which produces a third family of hybrid stars in the mass-radius diagram. We present the corresponding results for compact star properties such as mass, radius and tidal deformability, and use empirical data for them in the Bayesian analysis to obtain the probabilities for the model parameters within their considered ranges.
Speaker: Alexander Ayriyan (JINR & AANL)
• 11:30 12:00
The fourth family of compact stars 30m
I will discuss the features pertaining to the new family of compact stars that is denser than the hybrid stars and arises if multiple phase transitions take place in dense quark matter. The mass, radius, deformability and internal structure of these new objects will be discussed and confronted with the constraints from astrophysics.
Speaker: Armen Sedrakian (Frankfurt University)
• 12:30 14:30
Excursion to Wine Factory | 2020-04-08 06:50:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7088575959205627, "perplexity": 1754.709308462342}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371810617.95/warc/CC-MAIN-20200408041431-20200408071931-00428.warc.gz"} |
https://toph.co/p/generation-gap | # Generation Gap
Criterion 2020 Round 5
Limits 1s, 512 MB · Custom Checker
There are n people standing in a queue. The age of the i-th person is ai. The larger the age gap between two people, the more uncomfortable they feel with each other. We define the level of discomfort between persons i and j as |ai - aj|. The total discomfort of the queue is the sum of the levels of discomfort between all adjacent pairs.
Now, you have been given the responsibility of rearranging the queue. A normal person would seek to minimize the total discomfort to help the poor souls. But you are not a normal person, are you? (Besides, that would make the problem very easy). You have an evil heart, so you want to rearrange the queue so that the total discomfort of the queue is as large as possible. Now you need to find the maximum total discomfort you can achieve by rearranging the queue and the rearrangement that achieves it.
Formally, you are given n integers, a1, a2, . . . an. You have to find a permutation p of the integers 1, 2, . . . n such that the quantity $\sum_{i=1}^{n-1} |a_{p_i} - a_{p_{i+1}}|$ is maximized.
## Input
The first line of the input contains a single integer n (2 ≤ n ≤ 2 × 10^5).
The second line contains n space-separated integers a1, a2, . . . an (1 ≤ ai ≤ 10^9).
## Output
On the first line, print a single integer, the maximum total discomfort you can achieve.
On the second line, print n integers, the permutation that achieves the maximum total discomfort.
Note that, you need to print a permutation of the indices, not the actual values. Each index from 1 to n should appear exactly once in the permutation. If there are multiple possible permutations that achieve the maximum, you may print any of them.
## Samples
Input
3
2 5 9

Output
11
1 3 2

The total discomfort for the permutation 1, 3, 2 is |a1-a3| + |a3-a2| = 7 + 4 = 11. No other permutation achieves a larger total discomfort. 2, 3, 1 is also an acceptable answer as it achieves the same total discomfort.

Input
4
1 2 2 1

Output
3
4 3 1 2

Input
2
10 20

Output
10
1 2 | 2022-11-26 16:25:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49058130383491516, "perplexity": 680.6666980515521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708010.98/warc/CC-MAIN-20221126144448-20221126174448-00104.warc.gz"} |
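The samples can be sanity-checked with an exhaustive search. The sketch below (plain Python, feasible only for small n; it is not the intended O(n log n) solution) tries every ordering and reports the best total discomfort together with one optimal 1-based permutation:

```python
from itertools import permutations

def max_discomfort_bruteforce(a):
    """Try every ordering (only feasible for small n) and return the
    maximum total discomfort plus one optimal 1-based permutation."""
    n = len(a)
    best_val, best_perm = -1, None
    for perm in permutations(range(n)):
        total = sum(abs(a[perm[i]] - a[perm[i + 1]]) for i in range(n - 1))
        if total > best_val:
            best_val, best_perm = total, [p + 1 for p in perm]
    return best_val, best_perm

print(max_discomfort_bruteforce([2, 5, 9])[0])     # 11, matching sample 1
print(max_discomfort_bruteforce([1, 2, 2, 1])[0])  # 3, matching sample 2
```

For the full constraints (n up to 2 × 10^5) a constructive approach over the sorted array is needed; the brute force is only a checker for candidate solutions.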
http://www.techtud.com/multiple-choice-question/common-data-questions-48-49-channel-0 | ##### Common Data Questions: 48 & 49
The channel resistance of the N-channel JFET shown in the figure below is 600 $\Omega$ when the full channel thickness (tch) of 10 μm is available for conduction. The built-in voltage of the gate P+ N junction (Vbi) is -1 V. When the gate-to-source voltage (VGS) is 0 V, the channel is depleted by 1 μm on each side due to the built-in voltage, and hence the thickness available for conduction is only 8 μm.
Q.49 The channel resistance when VGS = 0 V is
(A) 480$\Omega$
(B) 600$\Omega$
(C) 750$\Omega$
(D) 1000$\Omega$
Hint:
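With all other geometry fixed, the channel resistance R = ρL/(W·t_ch) scales inversely with the conducting thickness, so the answer follows from a one-line ratio. A quick check in Python:

```python
# Channel resistance varies inversely with the conducting channel
# thickness t_ch when length, width and resistivity are fixed.
R_full = 600.0   # ohms at the full 10 um channel thickness
t_full = 10.0    # um
t_vgs0 = 8.0     # um left after 1 um depletion on each side at VGS = 0 V

R_vgs0 = R_full * t_full / t_vgs0
print(R_vgs0)  # 750.0 -> option (C)
```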
<div class="tex2jax"></div> | 2018-04-22 18:03:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7127366662025452, "perplexity": 13170.758495999375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945637.51/warc/CC-MAIN-20180422174026-20180422194026-00483.warc.gz"} |
https://www.ques10.com/p/22925/find-the-displacement-thickness-the-momentum-thi-1/ |
Find the displacement thickness, the momentum thickness and the energy thickness for the velocity distribution in the boundary layer given by $\frac{u}{U}=2(\frac{y}{\delta})-(\frac{y}{\delta})^2.$
Subject: Fluid Mechanics 2
Topic: Boundary layer theory
Difficulty: Medium
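With η = y/δ, the standard definitions reduce to δ*/δ = ∫₀¹(1 − u/U) dη, θ/δ = ∫₀¹(u/U)(1 − u/U) dη and δe/δ = ∫₀¹(u/U)(1 − (u/U)²) dη. A small numerical sketch in plain Python (composite Simpson's rule) confirms the textbook results δ* = δ/3, θ = 2δ/15 and δe = 22δ/105 for this profile:

```python
def simpson(f, a, b, n=1000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

u = lambda eta: 2 * eta - eta ** 2   # u/U for the given profile, eta = y/delta

disp = simpson(lambda e: 1 - u(e), 0.0, 1.0)                  # delta*/delta -> 1/3
mom = simpson(lambda e: u(e) * (1 - u(e)), 0.0, 1.0)          # theta/delta  -> 2/15
energy = simpson(lambda e: u(e) * (1 - u(e) ** 2), 0.0, 1.0)  # deltaE/delta -> 22/105
print(disp, mom, energy)
```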
Shivaraj Narishetti, civil engineering student
http://www.montequintoinforma.es/a572/inductor-core-materi_1622.html | # Inductor core material: the heart of an inductor
### Amidons method of rating ferrite inductors and
Jan 15, 2021 · Now let's do a permeability-based calculation of the magnetising admittance and then the power lost in the core, Pcore, at 83 V applied for the #43 core. $$P_{core}=V^2 G_{core}=83^2 \cdot 0.00232=16.0 \; W$$ 16 W is on the high side for this core in free air, more so if it is enclosed.
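The arithmetic in the quoted formula is easy to verify with the values given above:

```python
# P_core = V^2 * G_core, with the magnetising conductance from the text.
V = 83.0          # applied voltage, volts
G_core = 0.00232  # magnetising conductance, siemens

P_core = V ** 2 * G_core
print(round(P_core, 1))  # 16.0 (watts), as stated
```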
### Basic Inductor Design - Technical Articles
An inductor winding is made of a conductor material, which may be a single round wire or a multi-stranded conductor known as Litz wire; Litz wire has the main advantage of reduced skin effect. Inductor design characteristics are defined in terms of various parameters, which are discussed in this technical article.

### CN1862720A - Coil embedded metal magnetic powder core

This invention relates to a coil-embedded (push-in type) metal magnetic powder core patch inductor. The coil is pushed into the interior of the magnetic core. The preparation procedure is: A) the metal magnetic powder is insulation-coated and plasticized; the metal powder is iron powder, carbonyl iron powder, FeSiAl powder, FeNiMo powder or FeNi50 powder. B) Flat copper wire is used to make the coil, the two leading
### CORE MATERIAL Power Electronics
Core Material. SP2006 and SP2207 Series surface-mount toroidal inductors come with MPP core material. Available in inductance values up to 330 µH, they're suited for temperature applications up to 155°C, and offer better current saturation characteristics.

### Choosing Inductors for Energy Efficient Power

$$P_{core} = K \cdot f^x \cdot B^y$$

where Pcore = power loss in the core; K, x, y = core material constants; f = frequency; B = flux density. This equation shows that core loss depends on frequency (f) and flux density (B). Flux density depends on ripple current, so both are application-dependent variables. It also shows that the core loss is inductor-dependent, where the core material
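A minimal sketch of how a Steinmetz-style loss estimate of this form is used in practice; the constants K, x and y below are hypothetical placeholders, since real values must come from the core material's datasheet:

```python
# Hypothetical Steinmetz constants for illustration only -- take real
# values of K, x and y from the core material's datasheet.
K, x, y = 0.1, 1.4, 2.3
f = 100e3   # switching frequency, Hz
B = 0.05    # peak AC flux density, tesla

P_core = K * f ** x * B ** y   # core loss, in the units implied by K
# Doubling the flux density scales the loss by 2**y (~4.9x here),
# which is why ripple current dominates the core-loss budget.
```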
### Comparing magnetic cores for power inductors - Power
Apr 09, 2020 · The geometry often used for power inductors is the toroid, because its shape maximally constrains the magnetic field while providing a large area for windings. Both powder cores and ferrites are commonly available shaped as toroids, but tape-wound (also called strip-wound or cut-wound) cores can be used as toroidal transformers as well.

### Construction Inductor TN Elektro

Nov 10, 2019 · 7) The inductor-ferrite-core / Ferrite-core inductor. The ability of an inductor to resist any change in current is a measure of the self-inductance of the coil. For practical purposes this is usually just referred to as inductance, symbolized by the letter L. Inductance is measured in units of henries.
### Different Types of Inductors with Applications
Air-core inductors have a non-magnetic core such as plastic, ceramic or just air, as suggested by the name. An air-core inductor uses a non-magnetic material as the core to reduce the core losses, i.e. eddy-current and stray losses, especially when the operating frequency is very high. But the use of a non-magnetic core also decreases its inductance.

### Embedded Planar Power Inductor in an Organic Interposer

The planar power inductor using a composite core was fabricated and evaluated; it had a quasi-closed magnetic circuit consisting of a low-permeability composite core and an embedded 35-μm-thick, two
### Factors that Influence Electromagnetic Radiation of Power
The higher the thickness and permeability of the core material, the more effective the inductor will be at shielding the E-field. As an example, the E-field emissions of a Würth Elektronik shielded inductor were measured with a WE-LHMI (744 373 680 22). The transistor of the converter was operating at 400 kHz, producing the fundamental.

### Ferrite and Metal Composite Inductors

Basic structure of a resonant inductor: ferrite core / conductor / (gap spacer) / ferrite core. The resonant inductor is comprised of upper and lower cores of Mn-Zn ferrite, a center conductor and a non-magnetic gap spacer, enabling the lowest resistance and the lowest core loss by
### Ferrite and Powder Core Materials for Power Inductors -
Oct 09, 2014 · If the peak inductor current is less than 20 A, the ferrite core is likely to be the preferred core choice, whereas the powdered-iron core would be preferred at higher peak currents. This is an especially important characteristic in late-generation VRM/VRD power supplies with high transient current requirements, where inductor current is likely to

Core Materials for Power Inductors, Leonard Crane, Coilcraft Document 496-1, Revised 02/03/06. Power inductors come in all shapes and sizes and are made from a variety of core materials. For example, the Coilcraft SLC series uses a gapped ferrite core, whereas the MLC series uses a powder core.
### High frequency output inductor for inverter power supply
A low self-capacitance inductor is described for use as an output inductor in high-frequency inverter power supplies. A pair of channel-shaped ferrite core members is assembled with a gap of material approximating the permeability of air. The core members are arranged to

### How new inductor cores meet demands for smaller, quieter

Aug 20, 2019 · Power inductors are crucial devices for managing the flow of energy in switching converters, ensuring smooth power delivery and helping coordinate commutation. The inductance value is selected to store sufficient energy to keep current flowing for long enough to operate the circuit correctly while the main switch is off.
### Inductor Wikipedia Republished // WIKI 2
An inductor usually consists of a coil of conducting material, typically insulated copper wire, wrapped around a core either of plastic (to create an air-core inductor) or of a ferromagnetic (or ferrimagnetic) material; the latter is called an "iron core" inductor. Since power inductors require high induction levels, high permeability and low saturation points in the core materials are not ideal.

### Inductor - Definition, Types, Formula, Functions, Working

An inductor is a passive component that is used in most power electronic circuits to store energy in the form of magnetic energy when electricity is applied to it. One of the key properties of an inductor is that it impedes or opposes any change in the amount of current flowing through it.
### Inductor - Types of Inductor - Ferromagnetic Core Inductor
Core losses: A time-varying current in a ferromagnetic inductor, which causes a time-varying magnetic field in its core, causes energy losses in the core material that are dissipated as heat, due to two processes. Eddy currents: from Faraday's law of induction, the changing magnetic field can induce circulating loops of electric current in the core.

### Inductor Design Methods With Low-Permeability RF

Fig. 1 shows an inductor that has been designed and fabricated under the above conditions and replaced the original coreless resonant inductor $L_s$ in a 30 MHz Φ2 inverter [3], [23], [24]. The magnetic-core inductor provides a substantial volumetric advantage over that achievable with a
### Inductor Loss Calculation - PSIM Software
Inductor Loss Calculation in the Thermal Module. The Thermal Module provides the capability to calculate the winding losses, core losses, and temperature rise of inductors based on standard manufacturer cores and wires. This tutorial describes how an inductor is defined and how the calculation is performed.
### Inductor core material: the heart of an inductor
An inductor's magnetic core is made of specially formed material with soft magnetic properties. The combination of magnetic core and windings results in another property called inductance. Inductors - Oregon State University: an inductor made by winding a wire through several holes in the bead-shaped ferrite material. The ferrite iron alloy greatly increases the inductance without requiring more turns of wire on the form. Next to the bead is a shielded audio pot core inductor. It is surrounded by the ferrite material.
### Magnetics - Inductor Cores: Material and Shape Choices
• Introduction - Understanding the role of inductors in power electronics: Iron is the classic and most recognizable magnetic material, making it the perfect choice for use in inductors. As above, iron in inductors takes the form of an iron core. They are typically used for low frequency line filtering due to their relatively large inductances. Please explain the types of core materials for inductors: A metal magnetic material or ferrite is used for the core material of inductors for power circuits (Power Inductors). The metal magnetic material features small inductance changes with temperature and does not reach magnetic saturation as easily as ferrite, and the rated current Idc1 (current value based on inductance change) is significantly
### Powder Material for Inductor Cores
inductors as well as the different converter topologies the inductors are supposed to be operating in. 2.1 The inductor and important magnetic concepts: The derivation of the inductor is taken from the book Solid State Tesla Coil by Gary L. Johnson [2]. An inductor is an electrical component which stores energy in a magnetic field.
### Power Conversion & Line Filter Applications
with the same permeability and somewhat higher core losses. -26 Material: the most popular material. It is a cost-effective general purpose material that is useful in a wide variety of power conversion and line filter applications. -30 Material: the good linearity, low cost, and relatively low permeability of this material make it popular in large … Power inductors and peak-current handling capability (Dec 23, 2012): However, if the inductor was made with ferrite material, the inductance would drop abruptly once the peak current reached 1.4 A. In many cases the engineer would simply choose a larger component and de-rate it, such as an RL-9580-1-2.0-100M with an I
### Product Overview Power Inductor VLS Series
By optimizing the core shape of the conventional products, these inductors have superior material. The magnetic shield structure enables high-density mounting, high quality and high productivity. Power inductors are inductors used for power supply circuits such as DC-DC converters. They are also called power coils or power chokes. Question: Inductor (as inductance) for use in power electronic circuits (magnetic circuit to be used) is asked to be designed. It is foreseen to use 3F3 ferrite material in the design. The ambient temperature is 35 °C (degrees centigrade) and the maximum temperature is 70 °C (degrees centigrade). The inductor must operate with 10 A DC or 10 A AC current.
### The various applications of capacitors and inductors
Energy stored in an inductor is calculated in terms of current. Capacitors resist a change in voltage; inductors resist a change in current. There are three types of capacitors: ceramic, electrolytic and tantalum. The four major types of inductors are: coupled inductors, multi-layer inductors, ceramic core inductors and moulded inductors. Voltage Across Inductor - Chegg: Voltage through an inductor, when supplied with DC voltage, is technically zero, although there will be some voltage drop due to the resistance provided by the coil material. The inductive reactance only kicks in when the inductor is passed through the constantly changing power/signal.
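The two "resists a change" statements above can be made quantitative with the standard formulas: an inductor stores energy E = ½LI² and presents a reactance X_L = 2πfL to an AC signal, which is why the voltage drop at DC is essentially just the winding resistance. A quick sketch with illustrative component values (the 100 µH / 2 A / 50 kHz numbers are assumptions, not taken from the text):

```python
import math

# Illustrative values (not from the text): a 100 uH inductor carrying 2 A,
# driven at 50 kHz.
L = 100e-6   # inductance, henries
I = 2.0      # current, amperes
f = 50e3     # frequency, hertz

energy = 0.5 * L * I**2       # stored energy E = 1/2 * L * I^2, in joules
x_l = 2 * math.pi * f * L     # inductive reactance X_L = 2*pi*f*L, in ohms

print(f"stored energy: {energy * 1e3:.2f} mJ")   # 0.20 mJ
print(f"reactance at 50 kHz: {x_l:.1f} ohm")     # ~31.4 ohm
```

At DC (f = 0) the reactance vanishes, matching the "technically zero" voltage claim above.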
### What is Inductor? - Definition & Types - Circuit Globe
The inductors are classified into two types. 1. Air-Cored Inductor (wound on non-ferrite material): an inductor in which the core is either completely absent or made of a ceramic material is known as an air-cored inductor. The ceramic material has a very low thermal coefficient of expansion. Inductor Core Material: The Heart of an Inductor
• What's a magnetic core? Inductor Cores: Material and Shape Choices. Iron powder cores have higher core losses than MPP, High Flux, or Kool Mµ, but are generally less expensive. Iron powder is often the best choice for a power inductor when the highest efficiency and smallest size are not required, but cost is critical; or when the frequency is quite low; or when the
| 2021-11-28 02:24:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42104458808898926, "perplexity": 3184.1667005381355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358443.87/warc/CC-MAIN-20211128013650-20211128043650-00586.warc.gz"}
http://quant.stackexchange.com/questions/4220/integrating-log-normal?answertab=oldest | # Integrating log-normal
The usual log normal model in differential form is:
$dS = \mu S dt + \sigma S dX$
where $dX$ is the stochastic part, so
$\frac{dS}{S} = \mu dt + \sigma dX$ (1)
and we normally solve this by subbing in $Y=\log(S)$. What's to stop us just integrating (1) to get
$S = \exp\left(\mu t + \sigma\mathcal{N}(0,1)\right)$ ?
Why do we have to go through all the business of subbing in for $Y$ and using Ito's lemma?
An intuitive explanation can be found in Financial Calculus by Rennie and Baxter. – Bob Jansen Sep 29 '12 at 18:13
$$dS_t=\mu S_t dt + \sigma S_t dW_t$$
where $W_t$ is a standard Brownian motion (SBM).
You want to solve for $S_t$, so how would you proceed?
If you integrate both sides of the equation between 0 and $T$, you get:
$$S_T - S_0= \mu \int_0^T S_t dt + \sigma \int_0^T S_t dW_t$$
Okay, and then what? The fact that you have $S_t$ in both integrals is problematic.
The thing is, to solve for $S_t$ you in fact need a bit of trickery: the substitution and the application of Ito's lemma let you get rid of the $S_t$, as you get:
$$dY = d(\ln S_t)=\left(\mu - \frac{\sigma^2}{2} \right) dt +\sigma dW_t$$
The integration afterwards is straightforward.
So, you use this trick to get rid of the $S_t$ in the integrals and to be able to solve easily.
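A quick numerical sanity check of why the $-\sigma^2/2$ term matters: simulating the correct solution $S_T = S_0\exp\left((\mu-\sigma^2/2)T + \sigma\sqrt{T}Z\right)$ reproduces $\mathbb{E}[S_T]=S_0 e^{\mu T}$, while the naive $\exp(\mu T + \sigma\sqrt{T}Z)$ overshoots by a factor $e^{\sigma^2 T/2}$. (Sketch only; the parameter values below are illustrative.)

```python
import math
import random

def mean_terminal(s0, mu, sigma, T, n, drift_correction=True, seed=0):
    """Monte Carlo estimate of E[S_T] under GBM, with or without -sigma^2/2."""
    rng = random.Random(seed)
    drift = (mu - 0.5 * sigma**2) if drift_correction else mu
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        total += s0 * math.exp(drift * T + sigma * math.sqrt(T) * z)
    return total / n

s0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0
correct = mean_terminal(s0, mu, sigma, T, n=200_000)
naive = mean_terminal(s0, mu, sigma, T, n=200_000, drift_correction=False)
print(correct)  # close to s0 * exp(mu * T)
print(naive)    # inflated by roughly exp(sigma^2 * T / 2)
```

Because both runs use the same random draws, each naive sample is exactly the corresponding correct sample times $e^{\sigma^2 T/2}$, which makes the overshoot easy to see.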
https://www.math.purdue.edu/pow/discussion/2016/spring/13 | Spring 2016, problem 13
Consider the binomial coefficients $\binom{n}{k}=\frac{n!}{k!(n-k)!}$, where $k\in\{1,2,\ldots, n-1\}$. Determine all positive integers $n$ for which $\binom{n}{1},\binom{n}{2},\ldots ,\binom{n}{n-1}$ are all even numbers.
2 years ago
Hi, I'm Hubert from Paris ... Lucas's theorem shows that $n=2^q$;
using the Wikipedia article's notation with $p=2$, we want $N=\binom mn$ even for $1\leq n\leq m-1$,
• if $m$ is not a power of $2$, then $m_k=m_j=1$ for some $j\lt k$; with $n_j=1$ and $n_i=0$ for $i\neq j$, the given formula says $\binom m{2^j} \equiv \binom{m_j}{1} \equiv 1 \pmod 2$, so that coefficient is odd;
• now if $m=2^k$, then $m_j=0$ for $j\lt k$, and $n\lt m\Rightarrow n_k=0$; at least one of $n_0,n_1,\cdots,n_{k-1}$ is one, putting a $0=\binom 01$ in the given product, for each $n\in(0,m)$, making each $N$ even.
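A brute-force check of the claim (my own verification, not part of the proof above): testing which $n$ make all of $\binom n1,\dots,\binom n{n-1}$ even picks out exactly the powers of $2$.

```python
from math import comb

def all_inner_even(n):
    # True iff C(n,1), ..., C(n,n-1) are all even.
    return all(comb(n, k) % 2 == 0 for k in range(1, n))

hits = [n for n in range(2, 200) if all_inner_even(n)]
print(hits)  # [2, 4, 8, 16, 32, 64, 128]
```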
Note: this strange $\lt$ is the symbol "less than" — can someone fix that?
I need your help now ... :-( | 2018-12-16 21:01:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9559996128082275, "perplexity": 352.70302974273767}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827992.73/warc/CC-MAIN-20181216191351-20181216213351-00455.warc.gz"} |
https://indico.obspm.fr/event/1105/ | # Romain Gautier - Accurate rotation rate measurement with a cold atom gyroscope
March 18, 2021
Paris
Europe/Paris timezone
ABSTRACT
As soon as the concept of matter-wave duality arose from the early development of quantum mechanics, the possibility of creating atomic interferometers has been studied. Measurement of rotation rates through the Sagnac effect, well known in optics, became possible with atomic waves around 1990. Nowadays, cold-atom gyroscopes can reach high sensitivities competing with optical Sagnac interferometers, like fiber gyroscopes. Cold-atom inertial sensors feature promising applications in navigation, geoscience and for tests of fundamental physics.
In our experiment, we laser-cool cesium atoms to a temperature of 2.0 $\mu$K and launch
them vertically at a velocity of 5 m$\cdot$s$^{-1}$. Light pulse atom interferometry with counter propagating Raman transitions is used to create an interferometer with a Sagnac area of 11 cm$^2$. We then detect the internal state of the atoms at the end of the interferometer using fluorescence detection.
The SYRTE cold atom gyroscope represents the state of the art of atomic gyroscopes, with a long term stability$^1$ of 3$\cdot10^{-10}$ rad$\cdot$s$^{-1}$. The gyroscope has been used to test new methods to reach better sensitivity, like the possibility to work without dead time by interrogating three atomic clouds simultaneously$^2$, allowing us to reach a sampling rate of 3.75 Hz. To reach such stability, we need to understand and minimize the systematic effects, the main one coming from the coupling of an imperfect launch velocity and a misalignment between the two Raman beams used to perform the interferometer$^3$.
In this talk I will present our work on the evaluation of the scale factor of the gyroscope and how it allows us to test the validity of the Sagnac effect for matter waves. The phase shift induced by Earth rotation depends on the angle between the oriented Sagnac area of the interferometer and
the geographic north. By rotating our apparatus, we are able to vary this angle, and therefore modulate the phase shift. This allows us to perform a test of the Sagnac effect with a relative accuracy of 2$\cdot$10$^{-4}$, which represents an improvement of a factor 100 compared to previous matter
wave experiments.
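For scale, the standard matter-wave Sagnac phase is $\Delta\Phi = 2m\,\boldsymbol{\Omega}\cdot\mathbf{A}/\hbar$. Plugging in the cesium mass, Earth's rotation rate and the 11 cm$^2$ area quoted above gives a phase of a few hundred radians when the oriented area is aligned with the rotation axis. This is my own back-of-the-envelope sketch: the physical constants and the $\cos\theta$ projection (with $\theta$ the angle to the rotation vector) are standard values, not numbers from the talk.

```python
import math

# Matter-wave Sagnac phase: dPhi = 2 * m * (Omega . A) / hbar
# Standard constants (not taken from the abstract):
hbar = 1.054571817e-34                 # reduced Planck constant, J*s
m_cs = 132.905 * 1.66053906660e-27     # cesium-133 mass, kg
omega_earth = 7.292115e-5              # Earth rotation rate, rad/s
area = 11e-4                           # Sagnac area from the abstract: 11 cm^2, in m^2

def sagnac_phase(theta_deg):
    """Phase for an oriented area at angle theta to the rotation vector."""
    return 2 * m_cs * omega_earth * area * math.cos(math.radians(theta_deg)) / hbar

print(sagnac_phase(0.0))   # maximal phase: a few hundred radians
print(sagnac_phase(90.0))  # area orthogonal to the rotation axis: ~0
```

Rotating the apparatus modulates this $\cos\theta$ factor, which is exactly the handle the abstract describes for testing the Sagnac scale factor.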
1 “Interleaved Atom Interferometry for High Sensitivity Inertial Measurements” D. Savoie, M. Altorio, B. Fang, L.A. Sidorenkov, R. Geiger, A. Landragin, Science Advances, Vol. 4, no. 12, eaau7948 (2018)
2 “Continuous Cold-Atom Inertial Sensor with 1 nrad/sec Rotation Stability” I. Dutta, D. Savoie, B. Fang, B. Venon, C. L. Garrido Alzar, R. Geiger, and A. Landragin, Phys. Rev. Lett. 116, 183003 (2016)
3 “Accurate trajectory alignment in cold-atom interferometers with separated laser beams” M. Altorio, L. A. Sidorenkov, R. Gautier, D. Savoie, A. Landragin and R. Geiger, Phys. Rev. A 101, 033606 (2020)
Via videoconference | 2022-09-25 08:10:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3734441101551056, "perplexity": 3328.2457839354997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00555.warc.gz"}