https://tex.stackexchange.com/questions/145861/hbox-overfull-automatic-linebreaks-on-spaces | # Hbox overfull: automatic linebreaks on spaces
I have a problem: Overfull \hbox (13.62198pt too wide) in paragraph at lines 49--50. I have read similar questions, and all the solutions introduce a manual intervention directly into the text, e.g. using a line break, or a command from another package surrounding the problematic text. I would like instead to define some settings at the beginning of the document, e.g. "break at a space if you cannot decide", so as to avoid as much manual work as possible.
My problematic lines of the text are:
\begin{itemize}
\item The language of enquiry $\mathcal{L}$ is given by $\mathcal{C}_o=\{milk, curry, rice\}$,$\mathcal{R}_o=\{TastesHot, IsWhite, ContainsSpice, ContainsSugar\}$, $\mathcal{F}_o=\{\}$.
\item Let the observational language $\mathcal{L}_o$ be $\mathcal{C}_o=\{milk, curry, rice\}$, $\mathcal{R}_o=\{TastesHot, IsWhite\}$, $\mathcal{F}_o=\{\}$
\item Let the hypothesis language $\mathcal{L}_h$ be $\mathcal{C}_h=\{milk, curry, rice\}$, $\mathcal{R}_h=\{TastesHot, IsWhite, ContainsSpice\}$, $\mathcal{F}_h=\{\}$.
\item $\mathcal{L}_h$-sentences are $\forall x. TastesHot(x) \implies ContainsSpice(x)$, $\forall x. IsWhite(x) \lor TastesHot(x)$.
\end{itemize}
Settings in the beginning of the document are:
\documentclass[a4paper,12pt,twoside]{report}
\usepackage[left=2cm,right=2cm,top=2cm,bottom=3cm]{geometry}
\usepackage{amsthm}
\usepackage{amsmath}
And the template code:
\pagestyle{empty}
\setlength{\parskip}{2ex plus 0.5ex minus 0.2ex}
\setlength{\parindent}{0pt}
\makeatletter %to avoid error messages generated by "\@". Makes Latex treat "@" like a letter
\def\submitdate#1{\gdef\@submitdate{#1}}
\def\maketitle{
\begin{titlepage}{
\Large University of London \\
%\linebreak
Imperial College of Science, Technology and Medicine \\
%\linebreak
Department of Computing
\rm
\vskip 3in
\Large \bf \@title \par
}
\vskip 0.3in
\par
{\Large \@author}
\vskip 2.9in
\par
Submitted in partial fulfilment of the requirements for the MEng Degree
\linebreak
in Computing (Artificial Intelligence) of Imperial College London
\linebreak
\@submitdate
\vfil
\end{titlepage}
}
\def\titlepage{
\newpage
\centering
\normalsize
\vbox to \vsize\bgroup\vbox to 9in\bgroup
}
\def\endtitlepage{
\par
\kern 0pt
\egroup
\vss
\egroup
\cleardoublepage
}
\def\abstract{
\begin{center}{
\large\bf Abstract}
\end{center}
\small
%\def\baselinestretch{1.5}
\normalsize
}
\def\endabstract{
\par
}
\newenvironment{acknowledgements}{
\cleardoublepage
\begin{center}{
\large \bf Acknowledgements}
\end{center}
\small
\normalsize
}{\cleardoublepage}
\def\endacknowledgements{
\par
}
\newenvironment{dedication}{
\cleardoublepage
\begin{center}{
\large \bf Dedication}
\end{center}
\small
\normalsize
}{\cleardoublepage}
\def\enddedication{
\par
}
\def\preface{
\pagenumbering{roman}
\pagestyle{plain}
\doublespacing
}
\def\body{
\cleardoublepage
\tableofcontents
\pagestyle{plain}
\cleardoublepage
\listoftables
\pagestyle{plain}
\cleardoublepage
\listoffigures
\pagestyle{plain}
\cleardoublepage
\pagenumbering{arabic}
\doublespacing
}
\makeatother %to avoid error messages generated by "\@". Makes Latex treat "@" like a letter
Notice that these lines contain spaces, so I do not understand why LaTeX complains. I am new to LaTeX; perhaps a simple setting would solve my problem.
• In math TeX does not normally break after a comma (and spaces are ignored) however it is hard to give specific advice without an example. Please always include a complete small document that shows the problem, the line breaking is affected by the page size, fonts, and packages loaded, none of which we can tell from a fragment. Nov 19, 2013 at 19:11
• The excerpt I have provided is the thesis which uses a template. I am not certain what all the code of the template does, hence extracting the relevant sections would be probably harder than answering the question myself. Please, let me know if you need other information too apart from the one I am going to provide. Nov 19, 2013 at 19:21
• the strings "milk, curry, rice" etc. are within the math, and the way they are coded, they are (1) in the wrong font, and (2) won't break. they really should be text (so they will be set in the proper font, among other reasons), in which case breaking would not be such a problem. Nov 19, 2013 at 19:34
• @barbarabeeton I put the strings in the math mode as they are constants - semantically part of the mathematics language. In that context, "milk, curry, rice" are not words. Could I make linebreaks in math mode if there are spaces? Nov 19, 2013 at 19:38
• But I still wonder: even if Latex put every math expression on a line, no expression would be longer than a line and these expressions have the spaces between each other. Nov 19, 2013 at 19:45
you've already separated the different elements, providing spaces between the distinct equations comprising each language and separately coding these equations as math (even though the space between the first two is, probably inadvertently, omitted). unfortunately, these spaces don't fall in a place that is optimal for tex to break the line.
the ultimate goal is for what is presented to be understood.
there are two parts to this recommendation.
first, the words "milk, curry, rice" are, as you say, constants, and as such should be in a text font, preferably not italic in this context, even though they're part of the math expression. as coded in your original, they are typeset as strings of variables multiplied together. these could be coded as \mathrm{<word>}, but that doesn't help with line breaking. it also wouldn't leave spaces after the commas, although in this situation, whether spaces are visible there or not wouldn't be misunderstood by a reader.
another way to approach these is to recognize them as text, and input them as, for example,
$\mathcal{C}_o=\{\text{milk, curry, rice}\}$
but this doesn't help with line breaking either, since in this context, the only "allowable" break is after the equals sign.
so, second part of suggestion, take advantage of the fact that a reader isn't likely to misunderstand what is meant if a line is broken within that string of constants, and (temporarily) terminate the math after the opening brace, and reinstate it for the ending brace:
$\mathcal{C}_o=\{$milk, curry, rice$\}$
to illustrate, using a forced line break for the "all math" instance, compare these two lines:
here's the input that produced the image:
\begin{itemize}
\item The language of enquiry $\mathcal{L}$ is given by
$\mathcal{C}_o=\{milk, curry, rice\}$,\\
$\mathcal{R}_o=\{TastesHot, IsWhite, ContainsSpice, ContainsSugar\}$,
$\mathcal{F}_o=\{\}$.
\item The language of enquiry $\mathcal{L}$ is given by
$\mathcal{C}_o=\{$milk, curry, rice$\}$,
$\mathcal{R}_o=\{$TastesHot, IsWhite, ContainsSpice, ContainsSugar$\}$,
$\mathcal{F}_o=\{\}$.
\end{itemize}
(by the way, that's hardly a minimal example.)
• Thank you. In the end I decided to implement your first solution, the forced manual line break in math mode. Although I am not a LaTeX expert, the papers I have read so far render constants in italic, as semantically we think of them the same way as of variables and functions. I have never seen a variable or a function written in a text font. You also mentioned that TeX does not break the line because it does not consider a break at the space to be optimal. Is there any way I could regulate this "optimality"? I do not mind a line with little text, the next one with a math equation. Nov 20, 2013 at 7:47
• @dt1510 never use the math italic for multi-letter identifiers, it has wide sidebearings specifically to make it clear that adjacent letters do not form a word but rather are implied multiplication. If you want an italic identifier use \mathit{milk} Nov 20, 2013 at 8:32
• @dt1510 -- regarding "optimality" for line breaking, the mechanism is described in the texbook, and is rather too complicated to summarize in a comment. however, observe that very few places in in-line math are automatically allowed as breakable; an \mbox will prevent breaking; spaces and recognized hyphenation points in running text are the only really "reliable" break points, and only when one falls in a position close enough to the end of a line that other text spaces will not be unduly stretched or compressed to justify the line. Nov 23, 2013 at 15:07
https://www.physicsforums.com/threads/pressure-volume-and-a-fish.57559/ | # Pressure, Volume and a fish.
1. Dec 22, 2004
### Mo
I have attempted the first part of this question. I am hoping someone will be able to check whether my reasoning is OK. The second part, however, has me stumped :yuck: A push in the right direction would be quite nice --thank you!--
The Question
"A fish resting on the bottom of a lake releases a small air bubble from its mouth. The bubble increases in volume as it journeys to the surface through water known to be at a constant temperature. Explain why the volume of the bubble increases as it rises to the surface. The volume of the released bubble was 4 mm^3 but had increased to 20 mm^3 by the time it had reached the surface. Given that the atmospheric pressure acting on the surface of the lake is equivalent to an additional 10 m of water, calculate the depth of the lake at the point where the fish is resting. Explain all your working."
a) "The volume of the bubble increases as it rises to the surface; this can be because of:
The pressure being decreased
The temperature decreasing
Since both are related to each other, we can only assume that the temperature of the bubble decreased (to that of the sea-level constant), which led to a decrease in pressure and hence an increase in volume"
b) Have not got a clue.
I hope someone will be able to check my first answer, and maybe give me a little push in the right direction for part b. Thanks very much,
Regards,
Mo
2. Dec 22, 2004
### Gamma
It is given that the temperature of the water remains constant. So
$$P*V = Constant$$
As one moves up from the bottom of the lake, there is a drop of pressure.
P decreased means V should increase.
For the second part, if what is given as the additional pressure (10 m water) is the pressure difference between lake bottom and the surface, then the answer is very straight forward.
Gamma.
3. Dec 22, 2004
### Staff: Mentor
As Gamma explained, treat the temperature of the water as constant throughout, so:
$$P*V = Constant$$
For part 2, you need to figure out the pressure difference between the surface and bottom of the lake. Use the given bubble volumes and the pressure at the surface (= 10 m of water!) to solve for the pressure at the bottom. Set up a ratio like this: $P_1 V_1 = P_2 V_2$.
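In numbers, that recipe works out as follows; a quick sketch of mine (not part of the original thread), measuring pressure in metres of water:

```python
# Isothermal bubble, so Boyle's law applies: P1 * V1 = P2 * V2.
# Pressures are measured in metres of water; the atmosphere at the
# surface is worth 10 m of water (given in the problem).
V_bottom = 4.0    # mm^3, bubble volume at the lake bottom
V_surface = 20.0  # mm^3, bubble volume at the surface
P_surface = 10.0  # m of water, atmospheric pressure at the surface

P_bottom = P_surface * V_surface / V_bottom  # total pressure at the bottom
depth = P_bottom - P_surface                 # subtract the atmosphere
print(P_bottom, depth)  # 50.0 40.0
```

So the total pressure at the bottom is equivalent to 50 m of water, of which 10 m is the atmosphere, leaving a lake depth of 40 m.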
4. Dec 22, 2004
### HallsofIvy
Staff Emeritus
Fish are cold blooded. There would be no reason for the initial temperature of the bubble to be different from the temperature of the water, which we are told is a constant.
The pressure on the bubble is equal to the atmospheric pressure plus the weight of water above (which decreases as it rises) divided by the surface area of the bubble.
5. Dec 22, 2004
### Mo
Thank you for your help, all. It has helped me realise the answer (and the fact I gotta revise this stuff a lot more!)
Regards,
Mo
http://mathcentral.uregina.ca/QQ/database/QQ.09.13/h/musliu1.html | Math Central Quandaries & Queries
Question from musliu, a student: Two buses leave a bus station and travel in opposite directions from the same starting point. If the speed of one is twice the speed of the other and they are 240 km apart at the end of 1 h, what is the speed of the buses?
Hi Musliu,
The key to this problem is that rate is distance divided by time.
Suppose the slower bus travels at $s$ kilometres per hour. What is the speed of the faster bus? At what rate are the buses moving apart? At this rate they are 240 km apart in 1 hour. Solve for $s.$
Check your answer.
Penny
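Penny's hint can be checked numerically; a sketch of mine (not part of the original answer), taking the slower bus's speed as $s$ km/h:

```python
# The buses separate at s + 2s = 3s km/h; after 1 hour they are
# 240 km apart, so 3s = 240.
separation = 240.0  # km apart after 1 hour
s = separation / 3  # slower bus speed, km/h
assert s + 2 * s == separation
print(s, 2 * s)  # 80.0 160.0
```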
Math Central is supported by the University of Regina and the Imperial Oil Foundation.
https://math.stackexchange.com/questions/1567646/error-of-linear-approximations-example-problem | # % Error of Linear Approximations: Example Problem
I received the following question on my exam and got it right, although it was entirely a guess and I had absolutely no idea how to approach it. Any help with the logic or steps behind this would be greatly appreciated.
The side length of a square ice cube is claimed to be 1.0 inches, correct to within 0.001 inch. Use linear approximation to estimate the resulting error, measured in squared inches, in the surface area of the ice cube.
± 0.12
± 0.003
± 0.02
None of these
So, just to let you know, the correct answer is None of these. I drew a diagram thinking it would help (it didn't) and was pretty much stuck at that point. I know how Linear Approximation works and sort of understand percentage error. Does it just come down to the fact that (A)-(C) use the "±" sign? We were taught that percentage error is $100 \cdot \left| \frac{\text{approximate} - \text{exact}}{\text{exact}} \right|$, so I guess you can never have negatives?
Is there any algebraic way to solve this problem, though?
For a cube of side length $x$ the surface area is $6x^2$. Linear approximation says $f(x) \approx f(x_0) + f'(x_0)(x-x_0)$ for $x$ close to $x_0$. (This is the most important formula in differential calculus, in my opinion.) So in this case $6x^2 \approx 6x_0^2 + 12x_0(x-x_0)$. So if for instance $x=1.001,x_0=1$, the error in the area is approximately $12 \cdot 0.001=0.012$. If they were asking about percentages then this would be $100 \cdot \frac{0.012}{6} = 0.2\%$ but I do not think they are actually asking about percentages in this context.
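A quick numerical check of that estimate (my sketch, not part of the original answer):

```python
# Surface area of a cube of side x is 6*x^2; compare the linear
# estimate f'(x0)*dx = 12*x0*dx with the exact change in area.
def area(x):
    return 6 * x**2

x0, dx = 1.0, 0.001
linear_estimate = 12 * x0 * dx           # 0.012
exact_change = area(x0 + dx) - area(x0)  # 0.012006...
print(linear_estimate, exact_change)
```

The exact change, 0.012006 in², agrees with the linear estimate to within the quadratic term $6\,dx^2$.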
https://crypto.stackexchange.com/questions/52777/how-can-shors-algorithm-be-applied-to-ecc | # How can Shor's Algorithm be applied to ECC?
Shor's algorithm can be used to factorize a large (semi)prime $N$ by reducing the task to finding the period of the function $f(a)=x^a \bmod N$ for a random $x$.
This is done by creating an equal superposition over all pairs of $a_i$ and $f(a_i)=x^{a_i} \bmod N$, then measuring the value of $f$, causing the superposition to collapse into all $a_i$ for which $f(a_i)$ equals the measured value. Using "Fourier sampling" (I have not fully understood this part) we can then obtain the period of $f$, and with probability 0.5 this yields a non-trivial square root of unity modulo $N$, which leads to a prime factor.
(Please correct me if my understanding of the above is flawed.)
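For intuition, the classical post-processing step (period → factor) can be sketched on a toy modulus; the quantum speedup lies only in finding the period $r$, everything else is classical. (A sketch of mine; $N=15$, $x=7$ chosen for illustration.)

```python
import math

N, x = 15, 7
# Brute-force the period r of f(a) = x^a mod N (Shor finds this
# quantumly via Fourier sampling).
r = 1
while pow(x, r, N) != 1:
    r += 1
# If r is even, y = x^(r/2) is a square root of 1 mod N; when it is
# non-trivial (y != ±1 mod N), gcd(y ± 1, N) yields factors of N.
y = pow(x, r // 2, N)
factors = sorted({math.gcd(y - 1, N), math.gcd(y + 1, N)})
print(r, factors)  # 4 [3, 5]
```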
Now how can this algorithm be applied to Elliptic Curve schemes like ECDSA? I struggle to find an explanation for how the discrete log problem for groups over elliptic curves could be solved using Shor's.
EDIT: I would just as well appreciate a reference to other papers except Shor's, that explain the case of Shor's algorithm on DLPs.
• Do you understand how the discrete logarithm problem is solved over $\mathbb Z^*_p$ using Shor's algorithm? – SEJPM Nov 2 '17 at 20:20
• I read Shor's paper but did not understand the DLP part. From what I learned, if our $\mathbb{Z}_p^\ast$ DLP is find $r$ so that $g^r \equiv x$ mod $p$, we create superpositions over all $\ket{a}$ and $\ket{b}$ and calculate $g^ax^{-b}$ mod $p$. What is this good for to find the order of the group we are looking for? – indiscreteLogarithm Nov 2 '17 at 20:46
• You may find this reference arxiv.org/abs/quant-ph/0301141 , interesting (found via the always useful quantum algorithm zoo math.nist.gov/quantum/zoo) – Frédéric Grosshans Nov 3 '17 at 10:02
• @Frédéric Grosshans thanks for the reference, I read this paper but could you maybe give a few sentences describing what is done, like I did for integer factorization in the initial post? I can read the formulas in the paper, but I'm uncertain what they actually do. – indiscreteLogarithm Nov 4 '17 at 11:50
http://mathoverflow.net/questions/82801/pdf-of-correlated-random-variables-closed | ## pdf of correlated random variables [closed]
Hi,
assuming $X$ is an exponential random variable and $a$ a constant, how can I find the pdf of $\frac{X^2}{2X+a}$?
I believe the answer is in the FAQ: 2nd question, 1st bullet. – Ori Gurel-Gurevich Dec 6 2011 at 16:31
Perhaps try the site stats.stackexchange.com, or even math.stackexchange.com – David Roberts Dec 6 2011 at 22:47
https://brilliant.org/problems/i-cant-find-a-cool-name-for-this/ | # I can't find a cool name for this!
$(1+x)\big(1+x^2\big)\big(1+x^4\big) \ldots \big(1+x^{128}\big) = \displaystyle \sum_{r=0}^n x^r$
Given the above equation, what is $n?$
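Every exponent from $0$ up to $1+2+4+\cdots+128$ has a unique binary representation, so each power of $x$ appears exactly once in the expansion. A quick check by multiplying coefficient lists (my sketch, not part of the problem page):

```python
# Expand (1+x)(1+x^2)(1+x^4)...(1+x^128) as a list of coefficients.
poly = [1]
k = 1
while k <= 128:
    new = [0] * (len(poly) + k)
    for i, c in enumerate(poly):
        new[i] += c      # term picked up from the 1
        new[i + k] += c  # term picked up from x^k
    poly = new
    k *= 2

n = len(poly) - 1
print(n, all(c == 1 for c in poly))  # 255 True
```

The degree is $n = 255$ and every coefficient is 1, confirming the product equals $\sum_{r=0}^{255} x^r$.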
https://arxiver.moonhats.com/2017/12/06/cosmological-model-independent-test-of-two-point-diagnostic-by-the-observational-hubble-parameter-data-cea/ | # Cosmological model-independent test of two-point diagnostic by the observational Hubble parameter data [CEA]
Aiming to explore the nature of dark energy (DE), we use forty-three observational Hubble parameter data (OHD) in the redshift range $0 \leqslant z \leqslant 2.36$ to make a cosmological model-independent test of the two-point $Omh^2(z_{2};z_{1})$ diagnostic. In the $\Lambda$CDM model, with equation of state (EoS) $w=-1$, $Omh^2 \equiv \Omega_m h^2$ holds, where $\Omega_m$ is the matter density parameter and $h$ is the Hubble parameter at present. Since applying the OHD directly to the $Omh^2(z_{2};z_{1})$ diagnostic generates an obscure picture, which is bad for analysis, we utilize two methods, the weighted mean and median statistics, to bin the data. The binning methods turn out to be promising and are considered robust. By applying the $Omh^2(z_{2};z_{1})$ diagnostic to the binned data, we find that the values of $Omh^2$ fluctuate as the redshift intervals change, i.e., they are not constant, an effect especially significant in the weighted mean case. Therefore, we conclude that the $\Lambda$CDM model is in doubt and that dynamical DE models with an evolving EoS should be considered more likely candidates to interpret the accelerating expansion of the universe.
S. Cao, X. Duan, X. Meng, et. al.
Wed, 6 Dec 17
50/71
Comments: 14 pages, 7 figures. arXiv admin note: text overlap with arXiv:1507.02517
https://phys.libretexts.org/TextBooks_and_TextMaps/Classical_Mechanics/Book%3A_Classical_Mechanics_(Tatum)/15%3A_Special_Relativity/15.25%3A_Addition_of_Kinetic_Energies
# 15.25: Addition of Kinetic Energies
I want now to consider two particles moving at nonrelativistic speeds – by which I mean that the kinetic energy is given to a sufficient approximation by the expression $$\frac{1}{2}mu^{2}$$ and so that parallel velocities add linearly.
Consider the particles in figure XV.37, in which the velocities are shown relative to laboratory space.
Referred to laboratory space, the kinetic energy is $$\frac{1}{2}m_{1}u_{1}^{2}+\frac{1}{2}m_{2}u_{2}^{2}$$. However, the centre of mass is moving to the right with speed $$V=\frac{(m_{1}u_{1}-m_{2}u_{2})}{(m_{1}+m_{2})}$$, and, referred to centre of mass space, the kinetic energy is $$\frac{1}{2}m_{1}(u_{1}-V)^{2}+\frac{1}{2}m_{2}(u_{2}+V)^{2}$$. On the other hand, if we refer the situation to a frame in which $$m_{1}$$ is at rest, the kinetic energy is $$\frac{1}{2}m_{2}(u_{1}+u_{2})^{2}$$, and, if we refer the situation to a frame in which $$m_{2}$$ is at rest, the kinetic energy is $$\frac{1}{2}m_{1}(u_{1}+u_{2})^{2}$$.
All we are saying is that the kinetic energy depends on the frame to which speed are referred – and this is not something that crops up only for relativistic speeds.
Let us put some numbers in. Let us suppose, for example that
$$m_{1} = 3$$ kg, $$u_{1} = 4$$ m s-1
$$m_{2} = 2$$ kg, $$u_{2} = 3$$ m s-1
so that
$$V = 1.2$$m s-1.
In that case, the kinetic energy
referred to laboratory space is 33 J,
referred to centre of mass space is 29.4 J,
referred to $$m_{1}$$ is 49 J,
referred to $$m_{2}$$ is 73.5 J.
In this case the kinetic energy is least when referred to centre of mass space, and is greatest when referred to the lesser mass.
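The four values above can be verified directly; a numerical sketch of mine, taking the leftward-moving mass's speed as 3 m s-1, which reproduces all four quoted energies:

```python
# m1 moves right at u1, m2 moves left at u2; V is the speed of the
# centre of mass (to the right).
m1, u1 = 3.0, 4.0  # kg, m/s
m2, u2 = 2.0, 3.0  # kg, m/s

V = (m1 * u1 - m2 * u2) / (m1 + m2)                      # 1.2 m/s
ke_lab = 0.5 * m1 * u1**2 + 0.5 * m2 * u2**2             # 33.0 J
ke_cm = 0.5 * m1 * (u1 - V)**2 + 0.5 * m2 * (u2 + V)**2  # 29.4 J
ke_in_m1_frame = 0.5 * m2 * (u1 + u2)**2                 # 49.0 J
ke_in_m2_frame = 0.5 * m1 * (u1 + u2)**2                 # 73.5 J
print(V, ke_lab, ke_cm, ke_in_m1_frame, ke_in_m2_frame)
```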
Exercise. Is this always so, whatever the values of $$m_{1}$$, $$m_{2}$$, $$u_{1}$$ and $$u_{2}$$?
It may be worthwhile to look at the special case in which the two masses are equal ($$m$$) and the two speeds (whether in laboratory or centre of mass space) are equal ($$u$$).
In that case the kinetic energy in laboratory or centre of mass space is $$mu^{2}$$, while referred to either of the masses it is $$2mu^{2}$$.
We shall now look at the same problem for particles travelling at relativistic speeds, and we shall see that the kinetic energy referred to a frame in which one of the particles is at rest is very much greater than (not merely twice) the energy referred to a centre of mass frame.
If two particles are moving towards each other with "speeds" given by $$\gamma_{1}$$ and $$\gamma_{2}$$ in centre of mass space, the $$\gamma$$ of one relative to the other is given by equation 15.16.14, and, since $$K = \gamma - 1$$, it follows that if the two particles have kinetic energies $$K_{1}$$ and $$K_{2}$$ in centre of mass space (in units of the $$m_{0}c^{2}$$ of each), then the kinetic energy of one relative to the other is
$K=K_{1} \oplus K_{2} = K_{1}+K_{2}+K_{1}K_{2}+\sqrt{K_{1}K_{2}(K_{1}+2)(K_{2}+2)}. \label{15.25.1}$
If two identical particles, each of kinetic energy $$K_{1}$$ times $$m_{0}c^{2}$$, approach each other, the kinetic energy of one relative to the other is
$K=2K_{1}(K_{1}+2). \label{15.25.2}$
For nonrelativistic speeds as $$K_{1}\rightarrow 0$$, this tends to $$K=4K_{1}$$, as expected.
Let us suppose that two protons are approaching each other at 99% of the speed of light in centre of mass space ($$K_{1}$$ = 6.08881). Referred to a frame in which one proton is at rest, the kinetic energy of the other will be $$K$$ = 98.5025, the relative speed being 0.99995 times the speed of light. Thus $$K=16K_{1}$$ rather than merely $$4K_{1}$$ as in the nonrelativistic calculation. For more energetic particles, the ratio $$\frac{K}{K_{1}}$$ is even greater. These calculations are greatly facilitated if you have written, as suggested in Section 15.3, a program that instantly interconverts all the relativity factors given there.
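The numbers in this example can be reproduced directly from the definitions; a numerical sketch of mine:

```python
import math

beta1 = 0.99                          # each proton's speed/c in CM space
gamma1 = 1 / math.sqrt(1 - beta1**2)  # Lorentz factor, ~7.0888
K1 = gamma1 - 1                       # kinetic energy in units of m0*c^2
K = 2 * K1 * (K1 + 2)                 # equation 15.25.2, ~98.5025
beta = math.sqrt(1 - 1 / (K + 1)**2)  # relative speed/c, ~0.99995
print(round(K1, 5), round(K, 4), round(beta, 5))
```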
Exercise $$\PageIndex{1}$$
Two protons approach each other, each having a kinetic energy of 500 GeV in laboratory or centre of mass space. (Since the two rest masses are equal, these TWO spaces are identical.) What is the kinetic energy of one proton in a frame in which the other is at rest?
(Answer: I make it 535 TeV.)
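A rough check of that answer (my sketch; the proton rest energy of about 0.938 GeV is an assumed constant, not given in the text):

```python
m0c2 = 0.938272        # GeV, approximate proton rest energy (assumed)
K1 = 500 / m0c2        # 500 GeV in units of m0*c^2
K = 2 * K1 * (K1 + 2)  # kinetic energy of one proton in the other's rest frame
print(round(K * m0c2 / 1000, 1))  # ~534.9, i.e. about 535 TeV
```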
The factor $$K$$ (the kinetic energy in units of $$m_{0}c^{2}$$) is the last of several factors used in this chapter to describe the speed at which a particle is moving, and I take the opportunity here of summarising the formulas that have been derived in the chapter for combining these several measures of speed. These are
$\beta_{1}\oplus\beta_{2}=\frac{\beta_{1}+\beta_{2}}{1+\beta_{1}\beta_{2}}. \label{15.16.7}\tag{15.16.7}$
$\gamma_{1}\oplus\gamma_{2}=\gamma_{1}\gamma_{2}+\sqrt{(\gamma_{1}^{2}-1)(\gamma_{2}^{2}-1)}. \label{15.16.14}\tag{15.16.14}$
$k_{1}\oplus k_{2}=k_{1}k_{2} \label{15.18.11}\tag{15.18.11}$
$z_{1}\oplus z_{2}=z_{1}z_{2}+z_{1}+z_{2}. \label{15.18.12}\tag{15.18.12}$
$K=K_{1} \oplus K_{2} = K_{1}+K_{2}+K_{1}K_{2}+\sqrt{K_{1}K_{2}(K_{1}+2)(K_{2}+2)}. \tag{15.25.1}$
$\phi_{1}\oplus\phi_{2}=\phi_{1}+\phi_{2} \label{15.16.11}\tag{15.16.11}$
If the two speeds to be combined are equal, these become
$\beta_{1}\oplus\beta_{1}=\frac{2\beta_{1}}{1+\beta_{1}^{2}}. \label{12.25.3}$
$\gamma_{1}\oplus\gamma_{1}=2\gamma_{1}^{2}-1 \label{12.25.4}$
$k_{1}\oplus k_{1}=k_{1}^{2} \label{12.25.5}$
$z_{1}\oplus z_{1}=z_{1}(z_{1}+2) \label{12.25.6}$
$K_{1}\oplus K_{1}=2K_{1}(K_{1}+2). \label{12.25.7}$
$\phi_{1}\oplus\phi_{1}=2\phi. \label{12.25.8}$
These formulas are useful, but for numerical examples, if you already have a program for interconverting between all of these factors, the easiest and quickest way of combining two “speeds” is to convert them to $$\phi$$. We have seen examples of how this works in Sections 15.16 and 15.18. We can do the same thing with our example from the present section when combining two kinetic energies. Thus we were combining two kinetic energies in laboratory space, each of magnitude $$K_{1}$$ = 6.08881 ($$\phi_{1}$$ = 2.64665). From this, $$\phi$$ = 5.29330, which corresponds to $$K$$ = 98.5025.
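The $$\phi$$-based procedure can be sketched in a few lines, using $$\gamma=\cosh\phi$$ and $$K=\gamma-1$$ (a minimal illustration reproducing the numbers above):

```python
import math

def K_to_phi(K):
    """Kinetic energy (units of m0*c^2) -> rapidity phi = arcosh(gamma)."""
    return math.acosh(K + 1.0)

def phi_to_K(phi):
    """Rapidity -> kinetic energy in units of m0*c^2."""
    return math.cosh(phi) - 1.0

K1 = 6.08881
phi1 = K_to_phi(K1)       # 2.64665
phi = 2.0 * phi1          # rapidities simply add: 5.29330
K = phi_to_K(phi)         # 98.5025
print(round(phi1, 5), round(K, 4))
```

Because rapidities add, combining any number of speeds reduces to a sum of $$\phi$$ values followed by one conversion back.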
https://www.iacr.org/cryptodb/data/author.php?authorkey=8494 | ## CryptoDB
### Aner Ben-Efraim
#### Publications
**2021, EUROCRYPT.**
Whilst secure multiparty computation (MPC) based on garbled circuits is concretely efficient for a small number of parties $n$, the gap between the complexity of practical protocols, which is $O(n^2)$ per party, and the theoretical complexity, which is $O(n)$ per party, is prohibitive for large values of $n$. In order to bridge this gap, Ben-Efraim, Lindell and Omri (ASIACRYPT 2017) introduced a garbled-circuit-based MPC protocol with an almost-practical pre-processing, yielding $O(n)$ complexity per party. However, this protocol is only passively secure and does not support the free-XOR technique by Kolesnikov and Schneider (ICALP 2008), on which almost all practical garbled-circuit-based protocols rely for their efficiency. In this work, to further bridge the gap between theory and practice, we present a new $n$-party garbling technique based on a new variant of standard LPN-based encryption. Using this approach we can describe two new garbled-circuit-based protocols, which have practical evaluation phases. Both protocols are in the preprocessing model, have $O(n)$ complexity per party, are actively secure and support the free-XOR technique. The first protocol tolerates full threshold corruption and ensures the garbled circuit contains no adversarially introduced errors, using a rather expensive garbling phase. The second protocol assumes that at least $n/c$ of the parties are honest (for an arbitrary fixed value $c$) and allows a significantly lighter preprocessing, at the cost of a small sacrifice in online efficiency. We demonstrate the practicality of our approach with an implementation of the evaluation phase using different circuits. We show that, like the passively-secure protocol of Ben-Efraim, Lindell and Omri, our approach starts to improve upon other practical protocols with $O(n^2)$ complexity when the number of parties is around $100$.
**2018, ASIACRYPT.**
We initiate a study of garbled circuits that contain both Boolean and arithmetic gates in secure multiparty computation. In particular, we incorporate the garbling gadgets for arithmetic circuits recently presented by Ball, Malkin, and Rosulek (ACM CCS 2016) into the multiparty garbling paradigm initially introduced by Beaver, Micali, and Rogaway (STOC ’90). This is the first work that studies arithmetic garbled circuits in the multiparty setting. Using mixed Boolean-arithmetic circuits allows more efficient secure computation of functions that naturally combine Boolean and arithmetic computations. Our garbled circuits are secure in the semi-honest model, under the same hardness assumptions as Ball et al., and can be efficiently and securely computed in constant rounds assuming an honest majority. We first extend free addition and multiplication by a constant to the multiparty setting. We then extend to the multiparty setting efficient garbled multiplication gates. The garbled multiplication gate construction we show was previously achieved only in the two-party setting and assuming a random oracle. We further present a new garbling technique, and show how this technique can improve efficiency in garbling selector gates. Selector gates compute a simple “if statement” in the arithmetic setting: the gate selects the output value from two input integer values, according to a Boolean selector bit; if the bit is 0 the output equals the first value, and if the bit is 1 the output equals the second value. Using our new technique, we show a new and designated garbled selector gate that reduces the evaluation time by approximately $33\%$, for any number of parties, compared to the best previously known constructions that use existing techniques and are secure based on the same hardness assumptions. On the downside, we find that testing equality and computing exponentiation by a constant are significantly more complex to garble in the multiparty setting than in the two-party setting.
**2017, ASIACRYPT.**
**2014, TCC.**
https://www.groundai.com/project/mass-function-of-binary-massive-black-holes-in-active-galactic-nuclei/
# Mass Function of Binary Massive Black Holes in Active Galactic Nuclei
Kimitake Hayasaki, Yoshihiro Ueda, and Naoki Isobe
Department of Astronomy, Kyoto University, Oiwake-cho, Kitashirakawa, Sakyo-ku, Kyoto 606-8502
###### Abstract
If the activity of active galactic nuclei (AGNs) is predominantly induced by major galaxy mergers, then a significant fraction of AGNs should harbor binary massive black holes in their centers. We study the mass function of binary massive black holes in nearby AGNs based on the observed AGN black-hole mass function and the theory of evolution of binary massive black holes interacting with a massive circumbinary disk, in the framework of the coevolution of massive black holes and their host galaxies. The circumbinary disk is assumed to be steady, axisymmetric, geometrically thin, self-regulated, self-gravitating but non-fragmenting, with a fraction of the Eddington accretion rate, typically one tenth of the Eddington value. The timescale of orbital decay is then estimated for equal-mass black holes; it is independent of the black hole mass, semi-major axis, and viscosity parameter but depends on the black-hole mass ratio, Eddington ratio, and mass-to-energy conversion efficiency. This makes it possible for any binary massive black holes to merge within a Hubble time by the binary-disk interaction. We find that, both for the equal mass ratio and for the one-tenth mass ratio, a fraction of the total number of nearby AGNs should have close binary massive black holes with orbital periods less than ten years in their centers, detectable with on-going highly sensitive X-ray monitors such as the Monitor of All-sky X-ray Image and/or the Swift/Burst Alert Telescope. Assuming that all binary massive black holes have the equal mass ratio, the fraction of AGNs harboring such close binaries peaks in a particular black-hole mass range, which thus provides the best chance to detect them.
Received 2010/1/18
Affiliation: Department of Physics, Graduate School of Science, Hokkaido University, Kita-ku, Sapporo 060-0810, Japan
Keywords: black hole physics – accretion, accretion disks – binaries: general – galaxies: nuclei
## 1 Introduction
Most galaxies are thought to have massive black holes at their centers (Kormendy & Richstone, 1995). Massive black holes play an important role not only in the activities of active galactic nuclei (AGNs) and quasars but also in the formation and evolution of galaxies (Magorrian et al., 1998; Ferrarese & Merritt, 2000; Gebhardt et al., 2000). A galaxy merger leads to mass inflow to the central region through tidal interactions; the nucleus of the merged galaxy is then activated and the black hole grows by gas accretion (Yu & Tremaine, 2002). At some stage, the outflow from the central black hole sweeps away the surrounding gas and quenches star formation and black hole growth. This also produces the observed correlation between the black hole mass and the velocity dispersion of individual galaxies (Di Matteo et al., 2005).
During this sequence of processes, binary massive black holes with a subparsec-scale separation are inevitably formed before the two black holes merge by emitting gravitational radiation. Recent hydrodynamic simulations showed rapid binary black hole formation on the parsec scale by the interaction between the black holes and the surrounding stars and gas in gas-rich galaxy mergers (Dotti et al., 2007; Mayer et al., 2007). Even if there are transiently triple massive black holes in a galactic nucleus, the system finally settles down to binary massive black holes by the merger of two black holes or by ejecting one black hole from the system via a gravitational slingshot (Iwasawa et al., 2006).
In the coalescing process of two massive black holes, there has been the so-called final parsec problem: it is still unknown how binary massive black holes evolve after the semi-major axis has reached the subparsec scale, where dynamical friction with the neighboring stars is no longer effective. Many authors have tackled the final parsec problem in the context of the interaction between the black holes and the stars, but it is still under extensive discussion (Begelman et al., 1980; Makino, 1997; Quinlan & Hernquist, 1997; Milosavljević & Merritt, 2003; Sesana et al., 2007; Matsubayashi et al., 2007; Matsui & Habe, 2009). There is another possible way to extract energy and angular momentum from binary massive black holes: the interaction between the black holes and the gas surrounding them. This kind of binary-disk interaction could also be a candidate to resolve the final parsec problem (Ivanov et al., 1999; Gould & Rix, 2000; Armitage & Natarajan, 2002, 2005; Escala et al., 2005; Hayasaki, 2009; Cuadra et al., 2009), despite a contrary claim (Lodato et al., 2009). Some authors showed that there exist close binary massive black holes with short orbital periods of less than ten years and significant orbital eccentricity (Armitage & Natarajan, 2005; Hayasaki, 2009; Cuadra et al., 2009).
Hayasaki et al. (2007) studied the accretion flow from a circumbinary disk onto binary massive black holes, using a smoothed particle hydrodynamics (SPH) code. They found that mass transfer occurs from the circumbinary disk to each black hole. The mass accretion rate significantly depends on the binary orbital phase in eccentric binaries, whereas it shows little variation with orbital phase in circular binaries. Periodic behaviors of the mass accretion rate in binary systems with different geometries or system parameters were also discussed by other authors (Bogdanović et al., 2008; MacFadyen & Milosavljević, 2008; Cuadra et al., 2009). Recently, Bogdanović et al. (2009) and Dotti et al. (2009) proposed the hypothesis that SDSS J092712.65+294344.0 contains two massive black holes in a binary, by interpreting the observed emission line features as those arising from the mass-transfer stream from the circumbinary disk.
Hayasaki et al. (2008) have, furthermore, performed a new set of simulations at higher resolution with an energy equation based on the blackbody assumption, adopting the same set of binary orbital parameters (eccentricity and mass ratio) as previously used (Hayasaki et al. 2007). By this two-stage simulation, they found that while the optical/NIR light curve exhibits little variation, the X-ray/UV light curve shows significant orbital modulation in the triple-disk system, which consists of an accretion disk around each black hole and a circumbinary disk around them. The periodic X-ray/UV light variations originate from the phase-dependent mass transfer from the circumbinary disk (cf. Hayasaki & Okazaki (2005)). The one-armed spiral wave on the accretion disk induced by the phase-dependent mass transfer causes the mass to accrete onto each black hole within one orbital period. This is repeated every binary orbit. These unique light curves are, therefore, expected to be one of the observational signatures of binary massive black holes.
Highly sensitive X-ray monitors covering a wide area provide us with a unique opportunity to discover close binary massive black holes in an unbiased manner, based on the detection of the orbital flux modulation. The Monitor of All-sky X-ray Image (MAXI; Matsuoka et al. (2009)), a Japanese experimental module attached to the International Space Station, has been in successful operation since its launch in 2009 July. MAXI, covering the energy band of 0.5–30 keV, achieves a significantly improved sensitivity as an all-sky X-ray monitor compared with previous missions. According to the hard X-ray luminosity function of AGNs by Ueda et al. (2003), 1,300 nearby AGNs can be detected at the confusion flux limit of 0.2 mCrab from the extragalactic sky at galactic latitudes higher than 10 degrees. Among them, the brightest 100 AGNs can be monitored every 2 months with a flux accuracy at the 20% level. Over the plausible mission life of MAXI, it is possible to detect binary massive black holes in nearby AGNs with binary orbital periods of less than ten years. Besides MAXI, the Swift/Burst Alert Telescope (BAT) survey (Tueller et al., 2010) can do a similar job in the hard X-ray band of 15–200 keV.
In this paper, we investigate the mass function of binary massive black holes with very short orbital periods detectable with MAXI and/or Swift/BAT. This paper is organized as follows. In Section 2, we describe the evolutionary scenario of binary massive black holes in the framework of the coevolution of massive black holes and their host galaxies. Section 3 presents mass functions of close binary massive black holes, written as the product of the observed black-hole mass function of nearby AGNs and the probability of finding binary massive black holes, based on the evolutionary scenario described in Section 2. Brief discussions and conclusions are summarized in Section 4.
## 2 Final-parsec evolution of binary massive black holes
We first describe the evolution of binary massive black holes, focusing on interaction with surrounding gaseous disks in the framework of coevolution of massive black holes and their host galaxies.
Does the black hole mass correlate with the velocity dispersion of the bulge in individual galaxies regardless of whether there is a single black hole or a binary at their centers? This is one of the fundamental problems in the framework of the coevolution of massive black holes and their host galaxies. In some elliptical galaxies, there is a core with a steep outer brightness profile and a shallow inner one. Binary massive black holes are considered to be closely associated with such a core structure, whose mass corresponds to the stellar light deficit (Ebisuzaki et al., 1991; Milosavljević & Merritt, 2001). Recently, Kormendy & Bender (2009) showed tight correlations among black hole masses, velocity dispersions of host galaxies, and masses of stellar light deficits, using the observational data of 11 elliptical galaxies with cores. This suggests that these correlations still hold even if there are binary massive black holes, rather than a single massive black hole, in the cores of elliptical galaxies.
Binary massive black holes are considered to evolve mainly via three stages (Begelman et al., 1980). Firstly, each black hole sinks independently towards the center of the common gravitational potential due to dynamical friction with neighboring stars. If the binary can be regarded as a single black hole, its radius of gravitational influence on the field stars is defined as
$r_{\rm inf}=\frac{GM_{\rm bh}}{\sigma_{*}^{2}}\sim 3.4\,{\rm [pc]}\left(\frac{M_{\rm bh}}{10^{7}M_{\odot}}\right)^{1-2/\beta_{2}}, \tag{1}$
where use is made of the tight correlation between the black hole mass and the one-dimensional velocity dispersion, $\sigma_{*}$, of the stars, the so-called $M$–$\sigma$ relation. Unless otherwise noted, the fitting parameters $\beta_{1}$ and $\beta_{2}$ of Merritt & Milosavljević (2005) are adopted in what follows.
When the separation between the two black holes becomes less than $r_{\rm inf}$ or so, angular momentum loss by dynamical friction slows down due to the loss-cone effect, and a massive hard binary is formed. This is the second stage. The binary hardens at the radius where the kinetic energy per unit mass of the field stars equals the binding energy per unit mass of the binary (Quinlan, 1996). The hardening radius is defined as
$a_{\rm h}=\frac{1}{4}\frac{q}{(1+q)^{2}}r_{\rm inf}\sim 8.5\times 10^{-1}\,{\rm [pc]}\,\frac{q}{(1+q)^{2}}\left(\frac{M_{\rm bh}}{10^{7}M_{\odot}}\right)^{1-2/\beta_{2}}, \tag{2}$
where $q$ is the black-hole mass ratio.
Finally, the semi-major axis of the binary decreases to the radius at which gravitational radiation dominates, and then the pair of black holes merges into a single supermassive black hole. The detailed timescale of each evolutionary phase is described in the following three subsections.
### 2.1 Star driven phase
#### 2.1.1 The dynamical friction
Each black hole sinks into the common center of mass due to dynamical friction with the ambient field stars. The orbital decay rate of the two black holes is given by (Binney & Tremaine, 1987)
$\frac{\dot{a}(t)}{a(t)}=-\frac{0.428}{\sqrt{2}}\ln\Lambda\,\frac{GM_{\rm bh}}{\sigma_{*}}\frac{1}{a^{2}(t)}, \tag{3}$
where $a(t)$ is the separation between the two black holes and $\ln\Lambda$ is the Coulomb logarithm. The decay timescale of the black-hole orbits is then written
$t_{\rm df}=\left|\frac{a(t)}{\dot{a}(t)}\right|\sim 8.4\times 10^{6}\,{\rm [yr]}\left(\frac{a(t)}{a_{0}}\right)^{2}\left(\frac{M_{\rm bh}}{10^{7}M_{\odot}}\right)^{1/\beta_{2}-1}, \tag{4}$
where $a_{0}$ is the typical core radius of the host galaxy. Integrating the orbital decay rate gives
$\frac{a(t)}{a_{0}}=\left(1-\frac{t}{t_{\rm dfc}}\right)^{1/2}, \tag{5}$
where
$t_{\rm dfc}\sim 4.2\times 10^{6}\,{\rm [yr]}\left(\frac{a_{0}}{100\,{\rm pc}}\right)^{2}\left(\frac{M_{\rm bh}}{10^{7}M_{\odot}}\right)^{1/\beta_{2}-1}. \tag{6}$
Note that the index of the mass dependence, $1/\beta_{2}-1$, is the same for both $t_{\rm df}$ and $t_{\rm dfc}$.
#### 2.1.2 Stellar scattering
Even after hardening of the binary, the binary orbit continues to decay by loss-cone refilling of stars due to two-body relaxation. In addition, repeated gravitational slingshot interactions with stars make the orbital decay significantly more rapid for relatively low-mass black holes. For a system with a singular isothermal sphere, the timescale is given as (Milosavljević & Merritt, 2003)
$t_{\rm ss}=\left|\frac{a(t)}{\dot{a}(t)}\right|\sim 3.0\times 10^{8}\,{\rm [yr]}\left(\frac{a_{\rm h}}{a(t)}\right)\left(\frac{M_{\rm bh}}{10^{7}M_{\odot}}\right). \tag{7}$
For more massive black holes, this mechanism contributes inefficiently to the orbital decay of the binary on the subparsec scale. Instead, the binary-disk interaction is likely to be the dominant mechanism of orbital decay. Recent N-body simulations show that stellar dynamics alone can also resolve the final parsec problem (e.g., Berentzen et al. (2009)).
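As a rough numerical sketch (my numbers, not the paper's), one can compare the stellar-scattering timescale of equation (7), evaluated at $a=a_{\rm h}$, with the mass-independent disk-driven timescale $t_{\rm gasc}\sim 3.1\times10^{8}\,{\rm yr}\,q/(1+q)^{2}$ derived later in Section 2.2, to see that scattering wins only at low black-hole masses:

```python
# Compare, at a = a_h, the stellar-scattering timescale (eq. 7) with the
# disk-driven timescale (eq. 20 of Section 2.2); both in years, equal-mass binary.
q = 1.0
t_gasc = 3.1e8 * q / (1 + q)**2        # yr, independent of black-hole mass

for M7 in (0.1, 0.3, 1.0, 10.0):       # M_bh in units of 1e7 solar masses
    t_ss = 3.0e8 * M7                  # yr, eq. (7) evaluated at a = a_h
    print(M7, t_ss < t_gasc)           # True -> stellar scattering is faster
```

The crossover sits at a few times $10^{6}M_{\odot}$ for $q=1$, consistent with stellar scattering being efficient only in Fig. 4's low-mass case.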
### 2.2 Gaseous-disk driven phase
The circumbinary disk would be formed inside the gravitational influence radius after hardening of the binary. The inner edge of the circumbinary disk is then defined as
$r_{\rm in}=\left(\frac{m+1}{l}\right)^{2/3}a(t)\sim 1.8\,{\rm [pc]}\,\frac{q}{(1+q)^{2}}\left(\frac{M_{\rm bh}}{10^{7}M_{\odot}}\right)^{1-2/\beta_{2}}\left(\frac{a(t)}{a_{\rm h}}\right), \tag{8}$
where $a(t)$ is the semi-major axis of the binary, and the resonance indices $m$ and $l$ of Artymowicz & Lubow (1994) are adopted unless otherwise noted.
For simplicity, the circumbinary disk is assumed to be steady, axisymmetric, and geometrically thin, with differential rotation and a fraction of the Eddington accretion rate:
$\dot{M}_{\rm acc}=\eta\,\dot{M}_{\rm Edd}, \tag{9}$
where $\eta$, $\epsilon$, and $\dot{M}_{\rm Edd}$ are the Eddington ratio, the mass-to-energy conversion efficiency, and the Eddington accretion rate defined by $\dot{M}_{\rm Edd}=4\pi GM_{\rm bh}m_{\rm p}/(\epsilon c\sigma_{\rm T})$, where $m_{\rm p}$, $c$, and $\sigma_{\rm T}$ denote the proton mass, the speed of light, and the Thomson scattering cross section, respectively.
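As an illustrative numerical sketch (my numbers, using the standard definition $\dot{M}_{\rm Edd}=4\pi GM_{\rm bh}m_{\rm p}/(\epsilon c\sigma_{\rm T})$, consistent with the normalizations used later in equation (20)):

```python
import math

G = 6.674e-8          # gravitational constant, cgs
c = 2.998e10          # speed of light, cm/s
m_p = 1.673e-24       # proton mass, g
sigma_T = 6.652e-25   # Thomson cross section, cm^2
M_sun = 1.989e33      # g
yr = 3.156e7          # s

eps = 0.1             # mass-to-energy conversion efficiency
eta = 0.1             # Eddington ratio
M_bh = 1e7 * M_sun

M_edd = 4 * math.pi * G * M_bh * m_p / (eps * c * sigma_T)  # g/s
M_acc = eta * M_edd                                          # adopted disk accretion rate
print(M_edd * yr / M_sun)   # ~0.22 solar masses per year
```

For $M_{\rm bh}=10^{7}M_{\odot}$ the adopted accretion rate $\dot{M}_{\rm acc}$ is thus a few hundredths of a solar mass per year.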
The surface density of the circumbinary disk can then be written as
$\Sigma=\frac{\dot{M}_{\rm acc}}{2\pi\nu}\left|\frac{d\ln r}{d\ln\Omega}\right|, \tag{10}$
where the eddy viscosity is defined as $\nu=\alpha c_{\rm s}H$ with the Shakura–Sunyaev viscosity parameter $\alpha$ (Shakura & Sunyaev, 1973), the characteristic sound velocity $c_{\rm s}$, and the disk scale height $H$.
The stability criterion for self-gravitation of the disk is given by the Toomre $Q$-value, $Q=c_{\rm s}\Omega/(\pi G\Sigma)$. If the disk structure obeys the standard-disk model, $Q$ is much less than 1. This means that the disk is massive enough to be gravitationally unstable. Therefore, we introduce a self-regulated, self-gravitating disk model (Mineshige & Umemura (1996); Bertin (1997)). The condition for a self-regulated disk is given by
$Q\approx 1. \tag{11}$
From equations (10) and (11), the effective sound velocity of the self-gravitating disk is written as
$c_{\rm sg}=\left[\frac{G\dot{M}_{\rm acc}}{2\alpha_{\rm sg}}\left|\frac{d\ln r}{d\ln\Omega}\right|\right]^{1/3}, \tag{12}$
where $\alpha_{\rm sg}$ is the viscosity parameter of the self-gravitating disk and we adopt $\alpha_{\rm sg}=0.06$. If radiative cooling in the disk is sufficiently efficient, the disk will fragment. The criterion for whether the disk fragments or not is given by Rice et al. (2005): if $\alpha_{\rm sg}\gtrsim 0.06$, fragmentation occurs in the disk and causes subsequent star formation. Such a situation is beyond the scope of our disk model. Assuming that the disk face radiates as a blackbody, the sound velocity can be written
$c_{\rm s}=\left(\frac{R_{\rm g}}{\mu}\right)^{1/2}\left(\frac{3GM_{\rm bh}\dot{M}_{\rm acc}}{8\pi r^{3}\sigma}\right)^{1/8}, \tag{13}$
where $R_{\rm g}$, $\mu$, and $\sigma$ are the gas constant, the mean molecular weight, and the Stefan–Boltzmann constant, respectively.
The self-gravity of the disk is stronger than the gravity of the central black hole at the self-gravitating radius, where the total disk mass equals the black hole mass. The rotation velocity of the disk then becomes flat outside the self-gravitating radius:
$r_{\rm sg}\approx\frac{GM_{\rm bh}}{8}\left(\frac{G\dot{M}_{\rm acc}}{2\alpha_{\rm sg}}\right)^{-2/3}\sim 64\,{\rm [pc]}\left(\frac{\alpha_{\rm sg}}{0.06}\right)^{2/3}\left(\frac{0.1}{\epsilon}\right)^{-2/3}\left(\frac{\eta}{0.1}\right)^{-2/3}\left(\frac{M_{\rm bh}}{10^{7}M_{\odot}}\right)^{1/3}, \tag{14}$
where the total disk mass has been set equal to the black hole mass. Inside $r_{\rm sg}$, the angular frequency of the disk corresponds to the Keplerian one, since the gravity of the central black hole dominates the dynamics of the disk. When $c_{\rm s}>c_{\rm sg}$, the disk transits from the self-regulated, self-gravitating disk to the standard disk. The corresponding radius is given as
$r_{\rm st-sg}=\left(\frac{R_{\rm g}}{\mu}\right)^{4/3}\left(\frac{G\dot{M}_{\rm acc}}{2\alpha_{\rm sg}}\right)^{-8/9}\left(\frac{3GM_{\rm bh}\dot{M}_{\rm acc}}{8\pi\sigma}\right)^{1/3}\sim 4.4\times 10^{-4}\,{\rm [pc]}\left(\frac{\alpha_{\rm sg}}{0.06}\right)^{8/9}\left(\frac{0.1}{\epsilon}\right)\left(\frac{\eta}{0.1}\right)\left(\frac{M_{\rm bh}}{10^{7}M_{\odot}}\right)^{-2/9}. \tag{15}$
As the binary evolves, the disk structure comes to depend on the black hole mass. Since the inner edge $r_{\rm in}$ is less than $r_{\rm sg}$ and more than $r_{\rm st-sg}$ over the whole black-hole mass range, the circumbinary disk is initially modeled as the self-regulated, self-gravitating disk with Keplerian rotation. When the semi-major axis decays to the decoupling radius defined by equations (25) and (26), $r_{\rm in}$ becomes less than $r_{\rm st-sg}$ for part of the mass range. The disk structure there can then be described by the standard disk theory, whereas the disk still remains the self-regulated, self-gravitating disk with Keplerian rotation for the other black-hole mass ranges (see the dotted line of Fig. 2).
The circumbinary disk and the binary exchange energy and angular momentum through tidal/resonant interaction. For a moderate orbital-eccentricity range, the torque of the binary potential acts on the circumbinary disk predominantly at the 1:3 outer Lindblad resonance radius, where the binary torque is balanced with the viscous torque (Artymowicz & Lubow, 1994).
On the other hand, the circumbinary disk is deformed to be elliptical by the tidal/resonant interaction. The density of the gas is locally enhanced by the gravitational potential at the two points on the inner edge of the circumbinary disk closest to each black hole (Hayasaki et al., 2007). The angular momentum of the gas is removed by the locally enhanced viscosity, and thus the gas overflows from the two points towards the central binary. An accretion disk is then formed around each black hole by the transferred gas (Hayasaki et al., 2008). The mass transfer therefore adds angular momentum to the binary via the two accretion disks (Hayasaki, 2009).
In a steady state, the mass transfer rate equals the accretion rate, $\dot{M}_{\rm acc}$. Since it is much smaller than the critical transfer rate defined by equation (41) of Hayasaki (2009), the torque due to the mass transfer can be neglected. The orbital-decay rate is then approximately written, following equation (40) of Hayasaki (2009), as
$\frac{\dot{a}(t)}{a(t)}\approx-\frac{\dot{J}_{\rm cbd}}{J_{\rm b}}\sqrt{1-e^{2}}, \tag{16}$
where $e$ is the orbital eccentricity of the binary, $J_{\rm b}$ is the orbital angular momentum, and the net torque, $\dot{J}_{\rm cbd}$, from the binary to the circumbinary disk can be approximately written as
$\dot{J}_{\rm cbd}\approx 3^{4/3}\frac{(1+q)^{2}}{q}\frac{1}{t_{\rm vis,in}}\frac{M_{\rm ld}}{M_{\rm bh}}\frac{J_{\rm b}}{\sqrt{1-e^{2}}}, \tag{17}$
where $t_{\rm vis,in}$ is the viscous timescale measured at the inner edge of the disk and $M_{\rm ld}$ is the local disk mass defined there. From equation (10), the product of the viscous timescale and the ratio of the black hole mass to the local disk mass is given by
$t_{\rm vis,in}\left(\frac{M_{\rm ld}}{M_{\rm bh}}\right)^{-1}=2\frac{M_{\rm bh}}{\dot{M}_{\rm acc}}\left|\frac{d\ln\Omega}{d\ln r}\right|. \tag{18}$
Substituting equations (17) and (18) into equation (16), the orbital-decay rate can be expressed as
$\frac{\dot{a}(t)}{a(t)}=-\frac{1}{t_{\rm gasc}}, \tag{19}$
where $t_{\rm gasc}$ is the characteristic timescale of orbital decay due to the binary-disk interaction:
$t_{\rm gasc}=\frac{2}{3^{4/3}}\frac{q}{(1+q)^{2}}\frac{M_{\rm bh}}{\dot{M}_{\rm acc}}\left|\frac{d\ln\Omega}{d\ln r}\right|\sim 3.1\times 10^{8}\,{\rm [yr]}\,\frac{q}{(1+q)^{2}}\left(\frac{0.1}{\eta}\right)\left(\frac{\epsilon}{0.1}\right), \tag{20}$
where we adopt the Keplerian value $|d\ln\Omega/d\ln r|=3/2$. Note that $t_{\rm gasc}$ is independent of the black-hole mass, semi-major axis, and viscosity parameter, but depends on the black-hole mass ratio, the Eddington ratio, and the mass-to-energy conversion efficiency. These properties arise from the assumptions that the disk is axisymmetric with a fraction of the Eddington accretion rate and that its angular momentum is transferred outwards by a viscosity of the Shakura–Sunyaev type.
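The numerical coefficient in equation (20) can be checked directly (a sketch assuming the Keplerian value $|d\ln\Omega/d\ln r|=3/2$ and the standard Eddington-rate definition $\dot{M}_{\rm Edd}=4\pi GM_{\rm bh}m_{\rm p}/(\epsilon c\sigma_{\rm T})$; the constants are mine):

```python
import math

c = 2.998e10            # speed of light, cm/s
G = 6.674e-8            # gravitational constant, cgs
m_p = 1.673e-24         # proton mass, g
sigma_T = 6.652e-25     # Thomson cross section, cm^2
yr = 3.156e7            # seconds per year

eps, eta = 0.1, 0.1
# M_bh / Mdot_acc = (eps/eta) * sigma_T * c / (4 pi G m_p): independent of M_bh
M_over_Macc = (eps / eta) * sigma_T * c / (4 * math.pi * G * m_p)
t_gasc = (2.0 / 3**(4.0 / 3)) * M_over_Macc * 1.5 / yr   # yr, with q/(1+q)^2 -> 1
print(t_gasc)   # ~3.1e8
```

The mass dependence cancels exactly, which is why $t_{\rm gasc}$ depends only on $q$, $\eta$, and $\epsilon$.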
Integrating equation (19), we obtain
$\frac{a(t)}{a_{\rm h}}=\exp\left(-\frac{t}{t_{\rm gasc}}\right). \tag{21}$
Following equation (29) of Hayasaki (2009), the orbital eccentricity increases with time in the present disk model. The orbital eccentricity is, however, expected to saturate during the disk-driven phase, because the angular momentum of the binary is mainly transferred to the circumbinary disk when the binary is at apastron. The saturation value of the orbital eccentricity is estimated by equating the angular frequency at the inner edge of the circumbinary disk with the orbital frequency at apastron.
### 2.3 Gravitational-wave driven phase
The orbital decay rate due to the emission of gravitational waves can be written (Peters, 1964) as
$\frac{\dot{a}(t)}{a(t)}=-\frac{256}{5}\frac{G^{3}M_{\rm bh}^{3}}{c^{5}a^{4}}\frac{q}{(1+q)^{2}}\frac{f(e)}{(1-e^{2})^{7/2}}, \tag{22}$
where $f(e)=1+\frac{73}{24}e^{2}+\frac{37}{96}e^{4}$. The coalescence timescale is then given as
$t_{\rm gw}=\left|\frac{a(t)}{\dot{a}(t)}\right|=\frac{5}{32}\left(\frac{a(t)}{r_{\rm S}}\right)^{4}\frac{r_{\rm S}}{c}\frac{(1+q)^{2}}{q}\frac{(1-e^{2})^{7/2}}{f(e)}. \tag{23}$
Here $r_{\rm S}=2GM_{\rm bh}/c^{2}$ is the Schwarzschild radius. The semi-major axis decays to the transition radius, inside which the emission of gravitational waves is more efficient than the binary-disk interaction; in other words, $t_{\rm gw}$ becomes shorter than $t_{\rm gasc}$ inside the transition radius. Comparing equation (20) with equation (23), the transition radius is defined by
$\frac{a_{\rm t}}{r_{\rm S}}=\left[\frac{32}{5}\frac{ct_{\rm gasc}}{r_{\rm S}}\frac{q}{(1+q)^{2}}\frac{f(e)}{(1-e^{2})^{7/2}}\right]^{1/4}. \tag{24}$
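A numerical sketch of the transition radius for an equal-mass binary with $M_{\rm bh}=10^{7}M_{\odot}$ (assuming $e\approx 0$ so that $f(e)(1-e^{2})^{-7/2}\approx 1$; the numbers are illustrative, not quoted from the paper):

```python
import math

G = 6.674e-8; c = 2.998e10                 # cgs
M_sun = 1.989e33; pc = 3.086e18; yr = 3.156e7

q = 1.0
M_bh = 1e7 * M_sun
r_S = 2 * G * M_bh / c**2                  # Schwarzschild radius
t_gasc = 3.1e8 * yr * q / (1 + q)**2       # equation (20), eps = eta = 0.1

# transition radius, equation (24), with f(e)/(1-e^2)^(7/2) ~ 1
a_t = r_S * (32.0 / 5.0 * c * t_gasc / r_S * q / (1 + q)**2)**0.25
print(a_t / pc)                            # ~2.4e-3 pc

# time spent in the disk-driven phase from a_h to a_t, via equation (21)
a_h = 0.85 * pc * q / (1 + q)**2           # equation (2) at M_bh = 1e7 M_sun
t_disk = t_gasc * math.log(a_h / a_t)
print(t_disk / yr)                         # a few 1e8 yr, well within a Hubble time
```

The resulting $a_{\rm t}$ is comparable to the ten-year-period radius $a_{10}$ of Section 2.4, and the total disk-driven decay time supports the claim that the binary can merge well within a Hubble time.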
When the timescale of orbital decay by the emission of gravitational radiation becomes shorter than the viscous timescale measured at the inner edge, the circumbinary disk decouples from the binary. Equating $t_{\rm gw}$ of equation (23) with the viscous timescale $t_{\rm vis,S}$, the decoupling radius is then defined by
$\frac{a_{\rm d}}{r_{\rm S}}=\left[\frac{32}{5}\frac{ct_{\rm vis,S}}{r_{\rm S}}\frac{q}{(1+q)^{2}}\frac{f(e)}{(1-e^{2})^{7/2}}\right]^{1/4} \tag{25}$
for the standard disk, and by the same expression with the self-gravitating value of the viscous timescale,
$\frac{a_{\rm d}}{r_{\rm S}}=\left[\frac{32}{5}\frac{ct_{\rm vis,S}}{r_{\rm S}}\frac{q}{(1+q)^{2}}\frac{f(e)}{(1-e^{2})^{7/2}}\right]^{1/4}, \tag{26}$
for the self-regulated, self-gravitating disk. Here $t_{\rm vis,S}$ is the viscous timescale measured at the inner edge of the circumbinary disk at decoupling. Depending on the disk model, it can be written as
$t_{\rm vis,S}\sim 5.9\times 10^{2}\,{\rm [yr]}\left(\frac{0.1}{\alpha_{\rm SS}}\right)\left(\frac{0.1}{\eta}\right)^{1/4}\left(\frac{\epsilon}{0.1}\right)^{1/4}\left(\frac{M_{\rm bh}}{10^{7}M_{\odot}}\right)^{5/4} \tag{27}$
for the standard disk, and
$t_{\rm vis,S}\sim 5.6\times 10^{4}\,{\rm [yr]}\left(\frac{0.06}{\alpha_{\rm sg}}\right)^{1/3}\left(\frac{0.1}{\eta}\right)^{2/3}\left(\frac{\epsilon}{0.1}\right)^{2/3}\left(\frac{M_{\rm bh}}{10^{7}M_{\odot}}\right)^{1/3} \tag{28}$
for the self-regulated, self-gravitating disk, respectively.
Integrating equation (22), we obtain
$\frac{a(t)}{a_{\rm t}}=\left(1-\frac{t}{t_{\rm gwc}}\right)^{1/4}, \tag{29}$
where $t_{\rm gwc}$ can approximately be written as
$t_{\rm gwc}\simeq 3.8\times 10^{11}\,{\rm [yr]}\left(\frac{a(t)}{a_{\rm t}}\right)^{4}\left(\frac{M_{\rm bh}}{10^{7}M_{\odot}}\right)^{-3}\frac{(1+q)^{2}}{q}\left(1-e_{0}^{2}\right)^{7/2}, \tag{30}$
where $e_{0}$ is the initial orbital eccentricity at $a(t)=a_{\rm t}$.
### 2.4 Observable period range for binary black holes
The resonant/tidal interaction causes mass transfer from the circumbinary disk to the accretion disk around each black hole (Hayasaki et al., 2007). The ram pressure of the mass transfer acts on the outer edge of each accretion disk and excites a one-armed oscillation in the disk (cf. Hayasaki & Okazaki (2005)). The one-armed wave propagates from the outer edge towards the black hole, which allows gas to accrete onto the black hole within the orbital period. This is repeated every binary orbit. This mechanism therefore gives rise to periodic light variations synchronized with the orbital period (Hayasaki et al., 2008).
For observational purposes, $a_{10}$ is defined as the semi-major axis corresponding to an orbital period of ten years, the feasible period detectable with MAXI and/or Swift/BAT:
$a_{10}\sim 4.9\times 10^{-3}\,{\rm [pc]}\left(\frac{M_{\rm bh}}{10^{7}M_{\odot}}\right)^{1/3}. \tag{31}$
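Equation (31) is just Kepler's third law for a ten-year period; a quick check (a sketch assuming that $M_{\rm bh}$ here denotes the total binary mass, an interpretation the text does not state explicitly):

```python
import math

G = 6.674e-8; M_sun = 1.989e33; pc = 3.086e18; yr = 3.156e7

P_orb = 10.0 * yr
M_bh = 1e7 * M_sun                       # total binary mass (assumed interpretation)
a10 = (G * M_bh * P_orb**2 / (4 * math.pi**2))**(1.0 / 3.0)
print(a10 / pc)                          # ~4.9e-3 pc
```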
Fig. 1 shows the mass dependence of the orbital period evaluated at each characteristic semi-major axis for equal-mass binary massive black holes. The dashed line, the dotted line, and the horizontal dash-dotted line show the orbital periods at the corresponding characteristic radii. The area enclosed by the solid line shows the existence region of binary black hole candidates with periodic light-curve signatures detectable with MAXI and/or Swift/BAT.
Fig. 2 shows the mass dependence of each characteristic semi-major axis, $a_{\rm h}$, $a_{\rm t}$, $a_{\rm d}$, and $a_{10}$, for the equal-mass binary. All of the semi-major axes get longer as the black hole mass becomes more massive. The decoupling radius is described by equation (25) in one black-hole mass range and by equation (26) in the other. Note that $a_{\rm d}$ lies on the track where the binary evolves by the binary-disk interaction in the former case, whereas it lies on the track where the binary evolves by the emission of gravitational waves in the latter.
Panel (a) of Fig. 3 shows the orbital-decay timescale of binary massive black holes, and panel (b) the corresponding elapsed time. The dashed line shows the timescale in the first evolutionary phase, where the orbit decays by dynamical friction (hereafter, the dynamical-friction driven phase). The solid line shows the timescale in the second evolutionary phase, where the orbit decays by the binary-disk interaction (hereafter, the disk-driven phase). The dotted line shows the timescale in the final phase, where the orbit decays by the emission of gravitational waves (hereafter, the gravitational-wave driven phase). The dash-dotted line shows the orbital-decay timescale by stellar scattering (hereafter, the stellar-scattering driven phase). We note that the orbital-decay timescale of the stellar-scattering driven phase is too long for stellar scattering to be an efficient mechanism of binary evolution in this case. The binary therefore evolves toward coalescence via, in order, the dynamical-friction driven phase, the disk-driven phase, and the gravitational-wave driven phase. The orbital-decay timescale of the disk-driven phase is the longest of the three.
Fig. 4 shows the orbital-decay timescale of the binary in the same format as panel (a) of Fig. 3, but for a lower black-hole mass. Note that stellar scattering is efficient after hardening of the binary in this case, as shown by the dash-dotted line of Fig. 4.
## 3 Mass function of binary massive black holes
The observed black-hole mass function of nearby AGNs allows one to study mass functions of binary massive black holes based on the evolutionary scenario described in the previous section. Fig. 5 shows the fractional mass function of hard X-ray selected AGNs in the local universe, where the black hole mass is estimated from the K-band magnitudes of their host galaxies, as compiled by Winter et al. (2009). This mass function is 99% complete for the uniform sample of local Seyfert galaxies, and hence the uncertainties caused by sample incompleteness should be regarded as negligible.
Assuming that two galaxies randomly start to merge during the interval, , where is the look-back time from the present universe, the probability of finding binary massive black holes in the present universe can be expressed by . The number of binary massive black holes, , in AGNs can be obtained by , where is the number of AGNs and is a dimensionless fraction parameter. More than of the host galaxies of Swift/BAT AGNs show evidence for on-going mergers (Koss et al., 2010), where the separation between the two central cores in the merging galaxies is more than scale. We therefore set unless otherwise noted, since we discuss binary black holes with separations on a much smaller scale.
One can estimate the probability of finding binary black holes in nearby AGNs by putting into , where denotes the e-folding accretion timescale, which corresponds to the typical lifetime of AGNs. Fig. 6 (a) and (b) show the mass dependences of , evaluated for and , respectively. The dashed line, dash-dotted line and dotted line show the probability of finding binary black holes with , , and , respectively. It is noted from both panels that the probability of is lower than that of in the mass range below the order of , because the binary evolves more rapidly by stellar scattering than by disk-binary interaction, as shown by the dash-dotted line of Fig. 4. The probability evaluated at remains constant over the whole mass range.
The solid line shows the integrated probability for finding binary black holes with the semi-major axis less than . The integrated probability is approximately given by , where
(32)
from equations (20), (21), (29), and (30).
The integrated probability estimated for is a monotonically decreasing function of black hole mass. Note that it rapidly decreases as the black hole mass becomes greater than for and for .
Fig. 7 shows mass functions of binary massive black holes in AGNs. The mass function is defined by multiplying the black-hole mass function of AGNs by the probability for finding binary black holes. The mass functions evaluated at the hardening radius, and transition radius, , are exhibited in panel (a) and (b), respectively. In both panels, the solid line and the dashed line show the mass function with and that with , respectively.
The population of binaries with mass less than is smaller in panel (a), because the binary evolves more rapidly by stellar scattering than by disk-binary interaction in this mass range, as shown by the dash-dotted line of Fig. 4. The total fraction of binary black holes over all mass ranges is for and for in both panels (the quoted errors reflect the statistical uncertainties in the Swift/BAT AGN mass function). It is noted from both panels that binary black holes of are the most frequent in the nearby AGN population.
Fig. 8 shows the mass functions of binary black holes with the constraint that the orbital period is less than ten years, in both cases of and . From the figure, binary black holes of for and those of for are the most frequent in the nearby AGN population. It is notable that, assuming that all the binaries have an equal black-hole mass ratio, of AGNs with black holes of host binary black holes.
The total fraction of binary black holes over all mass ranges is for and for . We can therefore observe candidates for AGNs detectable with MAXI, assuming that the activity of all nearby AGNs lasts for . Note that MAXI covers a softer energy band (2–30 keV) than Swift/BAT (15–200 keV), and hence the ratio of type-1 (unabsorbed) AGNs to type-2 (absorbed) AGNs will be higher in the MAXI survey (8:5, based on the model by Ueda et al. (2003)) than in the Swift/BAT survey (1:1, Tueller et al. (2010)). Here, however, we have referred to the same AGN mass function, since we do not find a statistically significant difference between the observed mass functions of type-1 and type-2 AGNs based on the Kolmogorov-Smirnov test.
## 4 Summary & Discussion
We study mass functions of binary massive black holes on the subparsec scale in AGNs based on the evolutionary scenario of binary massive black holes with surrounding gaseous disks in the framework of coevolution of massive black holes and their host galaxies.
As very recent progress in observations of binary massive black holes with the Sloan Digital Sky Survey (SDSS), there is a claim that two broad emission line quasars with multiple redshift systems are subparsec binary candidates (Borson & Lauer, 2009). The temporal variations of such emission lines are attributed to the binary orbital motion (Loeb, 2009; Shen & Loeb, 2009). These can be used as complementary approaches to the search for binary massive black holes with MAXI and/or Swift/BAT.
Recently, Volonteri et al. (2009) predicted the fraction of binary quasars at based on the theoretical scenario for the hierarchical assembly of supermassive black holes in a CDM cosmology. They adopted the merging timescale of binary black holes with a circumbinary disk estimated by Haiman et al. (2009), in order to explain the observed paucity of binary quasars in the SDSS sample (2 out of 10000; Bogdanović et al. (2009); Dotti et al. (2009); Borson & Lauer (2009)). For the black hole mass range of , which these SDSS quasars likely have, our calculation gives a similar merging timescale ( year, independent of mass). Hence, our model will also be compatible with the SDSS results when applied to the same cosmological model. In a lower mass range, however, we predict a significantly longer merging timescale, by a factor of 10 at , than that of Haiman et al. (2009), which rapidly decreases with decreasing mass. Hence, much larger fractions of subparsec binary black holes are expected in our model than in Volonteri et al. (2009) if low-mass black holes are considered.
Kocsis & Sesana (2010) studied the nHz gravitational wave background generated by close binary massive black holes with orbital periods between 0.1 and 10 years, taking account of both the cosmological merger rate and such binary-disk interactions as planetary (type II) migration (Haiman et al., 2009). The orbital-decay timescale for low black-hole mass binaries () is much shorter than that of our model. This suggests that the stochastic gravitational wave background is little attenuated when our model is applied to their scenario, because the amplitude of the gravitational wave background is proportional to the square root of the ratio of the orbital-decay timescale of the disk-driven phase to that of the gravitational-wave driven phase.
Our main conclusions are summarized as follows:
1. Binary massive black holes on the subparsec scale can merge within a Hubble time through the interaction with a triple disk consisting of an accretion disk around each black hole and a circumbinary disk surrounding them. Assuming that the circumbinary disk is steady, axisymmetric, geometrically thin, self-regulated, self-gravitating but non-fragmenting, with a fraction of the Eddington accretion rate, the orbital-decay timescale is given by , where , , and denote the black-hole mass ratio, Eddington ratio, and mass-to-energy conversion efficiency, respectively.
2. Binary black holes of in the disk-driven phase are the most frequent among the AGN population. Assuming that the activity of all nearby AGNs lasts for the accretion timescale, , the total fractions of binaries with the semi-major axis evaluated at the hardening radius and the transition radius are estimated as and , respectively.
3. Assuming that all binary massive black holes have an equal mass ratio (), of AGNs with harbor binary black holes with orbital periods less than ten years at their centers. This black-hole mass range therefore provides the best chance to find such close binary black holes in AGNs.
4. The total fraction of close binary massive black holes with orbital periods less than ten years, detectable with MAXI and/or Swift/BAT, can be estimated as for and for .
We thank the anonymous referee for useful comments and suggestions. We also thank Mike Koss and Richard Mushotzky for providing us with the latest result on the merging rate of Swift/BAT AGNs before publication. KH is grateful to Atsuo T. Okazaki, Takahiro Tanaka, and Stanley P. Owocki for helpful discussions. This work has been supported in part by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) through Grants-in-Aid for Scientific Research (19740100, 18104003, 21540304, 22340045, 22540243 KH and 20540230 YU, and 22740120 IN), and by the Grant-in-Aid for the Global COE Program “The Next Generation of Physics, Spun from Universality and Emergence” from MEXT of Japan.
## References
• Armitage & Natarajan (2002) Armitage, P. J., & Natarajan, P. 2002, \apj, 567, L9
• Armitage & Natarajan (2005) Armitage, P. J., & Natarajan, P. 2005, \apj, 634, 921
• Artymowicz & Lubow (1994) Artymowicz, P., & Lubow, S. H. 1994, \apj, 421, 651
• Begelman et al. (1980) Begelman, M. C., Blandford, R. D., & Rees, M. J. 1980, Nature, 287, 307
• Berentzen et al. (2009) Berentzen, I., Preto, M., Berczik, P., Merritt, D., & Spurzem, R. 2009, \apj, 695, 455
• Binney & Tremaine (1987) Binney, J., & Tremaine, S. 1987, Galactic Dynamics (Princeton: Princeton University Press), 428
• Bertin (1997) Bertin,G. 1997, \apj, 478, L71
• Bogdanović et al. (2008) Bogdanović, T., Smith, B. D., Sigurdsson, S., & Eracleous, M. 2008, \apj, 174, 455
• Bogdanović et al. (2009) Bogdanović, T., Eracleous, M., & Sigurdsson, S. 2009, \apj, 697, 288
• Borson & Lauer (2009) Borson, T. A., & Lauer, T. R. 2009, Nature, 458, 53
• Cuadra et al. (2009) Cuadra, J., Armitage, P. J., Alexander, R. D., & Begelman, M. C. 2009, \mnras, 393, 1423
• Di Matteo et al. (2005) Di Matteo, T., Springel, V., & Hernquist, L. 2005, Nature, 433, 604
• Dotti et al. (2007) Dotti, M., Colpi, M., Haardt, F., & Mayer, L. 2007, \mnras, 379, 956
• Dotti et al. (2009) Dotti, M., Montuori, C., Decarli, R., Volonteri, M., Colpi, M., & Haardt, F. 2009, \mnras, 398, 73
• Ebisuzaki et al. (1991) Ebisuzaki, T., Makino, J., & Okamura, S. K. 1991, Nature, 354, 212
• Escala et al. (2005) Escala, A., Larson, R. B., Coppi, P. S., & Mardones, D. 2005, \apj, 630, 152
• Ferrarese & Merritt (2000) Ferrarese, L., & Merritt, D. 2000, \apj, 539, L9
• Gould & Rix (2000) Gould, A., & Rix, H. 2000, \apj, 532, L29
• Gebhardt et al. (2000) Gebhardt, K., et al. 2000, \apj, 539, L13
• Haiman et al. (2009) Haiman, Z., Kocsis, B., & Menou, K. 2009, \apj, 700, 1952
• Hayasaki et al. (2008) Hayasaki, K., Mineshige, S., & Ho, C. L. 2008, \apj, 682, 1134
• Hayasaki et al. (2007) Hayasaki, K., Mineshige, S., & Sudou, H. 2007, \pasj, 59, 427
• Hayasaki (2009) Hayasaki, K. 2009, \pasj, 61, 427
• Hayasaki & Okazaki (2005) Hayasaki, K., & Okazaki, A. T. 2005, \mnras, 360, L15
• Ivanov et al. (1999) Ivanov, P. B., Papaloizou, J. C. B., & Polnarev, A. G. 1999, \mnras, 307, 79
• Iwasawa et al. (2006) Iwasawa, M., Funato, Y., & Makino, J. 2006, \apj, 651, 1059
• Kocsis & Sesana (2010) Kocsis, B., & Sesana, A. 2010, arXiv:1002.0584
• Komossa et al. (2003) Komossa, S., Burwitz, V., Hasinger, G., Predehl, P., Kaastra, J. S., & Ikebe, Y. 2003, \apj, 582, L15
• Kormendy & Richstone (1995) Kormendy, J., & Richstone, D. 1995, \araa, 33, 581
• Kormendy & Bender (2009) Kormendy, J., & Bender, R. 2009, \apj, 691, L142
• Koss et al. (2010) Koss, M., Mushotzky, R., Veilleux, S., & Winter, L. M. 2010, \apj, 716, L125
• Lodato et al. (2009) Lodato, G., Nayakshin, S., King, A. R., & Pringle, J. E. 2009, 398, 1392
• Loeb (2009) Loeb, A. 2010, PhRvD, 81, 047503
• MacFadyen & Milosavljević (2008) MacFadyen, I. A., & Milosavljević, M. 2008, \apj, 672, 83
• Magorrian et al. (1998) Magorrian, J., et al. 1998, \aj, 115, 2285
• Makino (1997) Makino, J. 1997, \apj, 478, 58
• Matsubayashi et al. (2007) Matsubayashi, T., Makino, J., & Ebisuzaki, T. 2007, \apj, 656, 879
• Matsui & Habe (2009) Matsui, M., & Habe, A. 2009, \pasj, 61, 421
• Matsuoka et al. (2009) Matsuoka, M., et al. 2009, \pasj, 61, 999
• Mayer et al. (2007) Mayer, L., Kazantzidis, S., Madau, P., Colpi, M., Quinn, T., & Wadsley, J. 2007, Science, 316, 1874
• Merritt & Milosavljević (2005) Merritt, D., & Milosavljević, M. 2005, Living Rev. Relativity, 8, 8
• Milosavljević & Merritt (2001) Milosavljević, M., & Merritt, D. 2001, \apj, 563, 34
• Milosavljević & Merritt (2003) Milosavljević, M., & Merritt, D. 2003, \apj, 596, 860
• Mineshige & Umemura (1996) Mineshige, S., & Umemura, M. 1996, \apj, 469, L49
• Peters (1964) Peters, P. C. 1964, Physical Review, 136, 1224
• Quinlan (1996) Quinlan, G. D. 1996, New Astron, 1, 35
• Quinlan & Hernquist (1997) Quinlan, G. D., & Hernquist, L. 1997, New Astron, 2, 533
• Rice et al. (2005) Rice, W. K., Lodato, G., & Armitage, P. J. 2005, \mnras, 364, L56
• Shakura & Sunyaev (1973) Shakura, N. I., & Sunyaev, R. A. 1973, \aap, 24, 337
• Sesana et al. (2007) Sesana, A., Haardt, F., & Madau, P. 2007, \apj, 660, 546
• Shen & Loeb (2009) Shen, Y., & Loeb, A. 2009, arXiv:0912.0541
• Tueller et al. (2010) Tueller, J., et al. 2010, \apjs, 186, 378
• Ueda et al. (2003) Ueda, U., Akiyama, M., Ohta, K., & Takamitsu, T. 2003, \apj, 598, 886
• Volonteri et al. (2009) Volonteri, M., Miller, J. M., & Dotti, M. 2009, \apj, 703, L86
• Winter et al. (2009) Winter, L. M., Mushotzky, R. F., Reynolds, C. S., & Tueller, J. 2009, \apj, 690, 1322
• Yu & Tremaine (2002) Yu, Q., & Tremaine, S. 2002, \mnras, 335, 965
# The function y=2x^3-3ax^2+6 is decreasing only on the interval (0,3). Find a?
Sep 10, 2015
The answer is $a = 3$.
#### Explanation:
If the function is decreasing on the interval (0, 3), that means its derivative must be negative on this interval. So first let's calculate the derivative, using the power rule:
$\frac{\mathrm{d}}{\mathrm{d}x} \left(2 {x}^{3} - 3 a {x}^{2} + 6\right) = 6 {x}^{2} - 6 a x$
So we know that the derivative $6 {x}^{2} - 6 a x$ is negative on the interval $\left(0 , 3\right)$ and positive everywhere else. That means this function must cross the $x$ axis at 0 and at 3. Or in other words, when $x = 0$ or when $x = 3$, the function must be equal to 0. We can use this information to solve backwards for $a$.
No matter what $a$ is, when $x = 0$, the function will be 0, because $6 \cdot {0}^{2} - 6 \cdot a \cdot 0 = 0$. So that doesn't help us.
But we also know that when $x = 3$, the function also has to be 0, so we can write:
$6 \cdot 9 - 6 \cdot 3 a = 0$
$54 = 18 a$
$a = 3$
To double check this, we can plug in $a = 3$ and graph the function $y = 2 {x}^{3} - 9 {x}^{2} + 6$:
graph{2x^3 - 9x^2 + 6 [-3, 5, -26.6, 13.4]}
And we see that it is decreasing on the interval (0, 3). | 2022-05-19 18:32:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 17, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8527018427848816, "perplexity": 142.76525107238564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529658.48/warc/CC-MAIN-20220519172853-20220519202853-00307.warc.gz"} |
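As a further check (added here; not part of the original answer), a few lines of Python confirm the sign pattern of the derivative $6x^2 - 6ax$ with $a = 3$:

```python
# Derivative of y = 2x^3 - 3ax^2 + 6, evaluated with a = 3.
def dydx(x, a=3):
    return 6 * x**2 - 6 * a * x

print([x for x in range(-5, 10) if dydx(x) == 0])  # [0, 3]
print(dydx(1.5) < 0)                               # True: decreasing on (0, 3)
print(dydx(-1) > 0 and dydx(4) > 0)                # True: increasing outside
```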
# Boolean Algebras

Simplify:
(1) $f= pq+r$
(2) $g=a+bc+a'bc'd$
$$pq + r = (p+r)(q+r)$$
\begin{align} a + bc + a'bc'd & = a + bc + bc'd \\ \\ & = a + b(c + c'd)\\ \\ & = a + b(c + d) \\ \\ & = a + bc+bd\end{align} | 2019-08-25 11:17:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996138215065002, "perplexity": 1182.456604615067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323328.16/warc/CC-MAIN-20190825105643-20190825131643-00329.warc.gz"} |
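A brute-force truth-table check (added for illustration; function names are mine, mirroring $f$ and $g$ from the question) confirms both simplifications:

```python
from itertools import product

def f_lhs(p, q, r):     # pq + r
    return (p and q) or r

def f_rhs(p, q, r):     # (p + r)(q + r)
    return (p or r) and (q or r)

def g_lhs(a, b, c, d):  # a + bc + a'bc'd
    return a or (b and c) or ((not a) and b and (not c) and d)

def g_rhs(a, b, c, d):  # a + bc + bd
    return a or (b and c) or (b and d)

print(all(f_lhs(*v) == f_rhs(*v) for v in product([False, True], repeat=3)))  # True
print(all(g_lhs(*v) == g_rhs(*v) for v in product([False, True], repeat=4)))  # True
```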
1. ManiQt (Beginner)
Hi, I'm trying to build qwt.lib and qwtd.lib from source files on Windows 10, using Qt 5.15 MSVC 2019 kit. I have Windows 10 SDK also installed.
After running qmake, I'm running nmake. nmake cannot find the file <type_traits>. This file is located in C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.28.29910\include which is in the PATH. This file is being called from C:\Qt\5.15.4\msvc2019\include\QtCore\qglobal.h.
Running into build issues even with Qt Creator -> Build option after running qmake once.
How to solve this issue?
Last edited by ManiQt; 21st May 2021 at 12:13.
2. Uwe (Expert, Munich, Germany)
As the include is made in qglobal.h this problem should have nothing to do with Qwt and you will have it with any type of Qt application.
HTH,
Uwe
4. ChrisW67 (Guru, Brisbane, Australia)
Originally Posted by ManiQt
How to solve this issue?
Are you running this from a command prompt set up by the batch file that is supplied with the Microsoft compiler (vcvars32.bat or vsvar32.bat or something like that)? Include files are not searched on the PATH. The Microsoft compiler uses a whole bunch of environment variables, one of which probably sets the include search path.
6. ManiQt (Beginner)
Hi Uwe and Chris, somehow this problem didn't come up when using version 6.2 of qwt. I tried building 6.1.4 both in Qt Creator and from the command line; either way it gave compilation errors. In this forum someone suggested using qwt 6.2 instead, and it worked. This time I have only tried Qt Creator, and the build was successful with a handful of warnings regarding qwt_math.h, as I recollect.
To Chris: I tried the regular method (nmake on the PATH) and also the vcvars32.bat method. The error occurred either way.
7. Uwe (Expert, Munich, Germany)
Again: an error in qglobal.h is totally unrelated to whatever Qwt version you are using.
Uwe
8. ManiQt (Beginner)
Originally Posted by Uwe
Again: an error in qglobal.h is totally unrelated to whatever Qwt version you are using.
Uwe
Yes, I understood qglobal.h is related to Qt and not Qwt. I am just clarifying what fixed the issue, although the solution is very unnatural.
# On the Link Between Gardner Timing Error Detector and Early-Late Timing Error Detector
This post is written on an advanced topic mainly for practitioners and researchers in the design of wireless systems. For learning about wireless communication systems from a DSP perspective (the idea behind SDRs), I recommend you have a look at my book.
F. M. Gardner described his well known Timing Error Detector (TED) — known as Gardner TED — in his often cited article [1]. Gardner was a pioneer in the area of synchronization and Phase Locked Loops (PLL). Later, M. Oerder (a student of Heinrich Meyr) derived this scheme from the maximum likelihood principle in [2]. Heinrich Meyr is the founder of the Institute for Integrated Signal Processing Systems at RWTH Aachen and one of the most respected scientists in communication engineering. His research results and methodology had a great impact on the design philosophy of single carrier systems in the last few decades.
In this article, I will mention an error in Oerder’s derivation that has the potential to cause confusion in understanding timing synchronization algorithms, which can be traced back to a longstanding misconception regarding the slope sign of the S-curve of a timing error detector. Let us explore it in detail.
## Notations
Before we start this topic, I recommend that you read about Pulse Amplitude Modulation (PAM) for an introduction to pulse amplitude modulated systems, the framework in which timing synchronization algorithms are described here. The notations for the main parameters are the following.
• Sample time (inverse of sample rate): $T_S$
• Symbol time (inverse of symbol rate): $T_M$
• Data symbols: $a[m]$
• Square-root Raised Cosine pulse shape: $p(nT_S)$
• Raised Cosine pulse shape: $r_p(nT_S)$ (as it is the auto-correlation of a Square-Root Raised Cosine pulse)
• Timing error: $\epsilon _\Delta$
• Timing error estimate: $\hat \epsilon _\Delta$
## System Model
Based on the notations above, such a pulse amplitude signal can be represented as
$s(t) = \sum _{i} a[i] p(t-iT_M)$
In the presence of a symbol timing offset $\epsilon _\Delta$, the received waveform is given by
$r(t) = \sum _{i} a[i] p(t-iT_M-\epsilon _\Delta)$
The receiver samples this waveform at times $nT_S+\hat \epsilon _\Delta$ where $\hat \epsilon _\Delta$ is its estimate of timing offset at that time. Therefore, the sampled received waveform in the absence of noise can be written as
\begin{equation*}
r(nT_S+\hat \epsilon _\Delta) = \sum _{i} a[i] p(nT_S+\hat \epsilon _\Delta-iT_M-\epsilon _\Delta)
\end{equation*}
In the above equation, we have ignored every other distortion at the Rx except for a symbol timing offset $\epsilon _\Delta$. This signal is input to a matched filter $h(nT_S) = p(-nT_S)$ and the output is written as
\begin{equation*}
z(nT_S+\hat \epsilon _\Delta) = \sum \limits _i a[i] r_p(nT_S+\hat \epsilon _\Delta -iT_M -\epsilon _\Delta)
\end{equation*}
Here, $r_p(nT_S)$ is the corresponding Nyquist pulse.
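As a quick numerical sanity check of this model (a sketch assuming an ideal Nyquist pulse $r_p(t)=\mathrm{sinc}(t/T_M)$ in place of the raised cosine, $T_M=1$, noiseless BPSK symbols), sampling the matched-filter output at the symbol instants recovers $a[m]$ exactly when $\hat\epsilon_\Delta=\epsilon_\Delta$, while a residual timing error introduces attenuation and ISI:

```python
import numpy as np

# Matched-filter output z(t) for a noiseless PAM stream, T_M = 1.
# eps_e = eps - eps_hat is the residual timing error, as in the text.
rng = np.random.default_rng(1)
a = rng.choice([-1.0, 1.0], 50)

def z(t, eps_e=0.0):
    return sum(a[i] * np.sinc(t - i - eps_e) for i in range(len(a)))

print(abs(z(10) - a[10]) < 1e-9)   # True: zero ISI at perfect timing
print(z(10, eps_e=0.3) - a[10])    # nonzero: attenuation plus ISI
```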
## Maximum Likelihood Timing Error Detector (TED)
For unknown symbols, the likelihood function is maximum when the energy in the matched filter output $|z(nT_S+\hat \epsilon _\Delta)|^2$ is maximum. Taking its derivative at symbol $m$ and ignoring an irrelevant constant, the maximum likelihood Timing Error Detector (TED) is
\begin{equation}\label{eqSymbolCentricIdea}
e[m] = z(mT_M+\hat \epsilon _\Delta) \cdot \underbrace{z'(mT_M+\hat \epsilon _\Delta)}_{\text{drive this term towards zero}}
\end{equation}
It is evident that the maximum likelihood occurs where the derivative term approaches zero which coincides with the peak of the pulse for a single symbol and maximum eye opening for a shaped symbol stream.
## Early-Late Timing Error Detector (TED)
For a Rx operating at $L=2$ samples/symbol, the second term in the maximum likelihood TED can be approximated with a differentiator computing only the first central difference, i.e., the term $z'(mT_M+\hat \epsilon _\Delta)$ can be approximated from one sample to the right (at $+T_M/2$) and one sample to the left (at $-T_M/2$) of the current time instant.
$z'(mT_M+\hat \epsilon _\Delta) \approx z\left(mT_M+\frac{T_M}{2}+\hat \epsilon _\Delta\right) - z\left(mT_M-\frac{T_M}{2}+\hat \epsilon _\Delta\right)$
From Eq (\ref{eqSymbolCentricIdea}), this leads to an early-late approximation to the maximum likelihood TED known as Early-Late Timing Error Detector (EL-TED) as
\begin{align}\label{eqEL}
e[m] = z(mT_M+\hat \epsilon _\Delta) \left\{z\left(mT_M+\frac{T_M}{2}+\hat \epsilon _\Delta\right) -
z\left(mT_M-\frac{T_M}{2}+\hat \epsilon _\Delta\right)\right\}
\end{align}
The problem is that at the startup, the Rx does not know which sample out of $L=2$ samples corresponds to the symbol estimate $z(mT_M+\hat \epsilon _\Delta)$. Another sample like the one at $+T_M/2$ or $-T_M/2$ can easily be mistaken as the one corresponding to $T_M$.
To answer this question, consider the figure below and apply the early-late equation for $b$, $c$ and $d$.
\begin{equation*}
e[m] = c\cdot (b-d) > 0
\end{equation*}
Consequently, it is treated as a case of early sampling $\hat \epsilon _\Delta<\epsilon _\Delta$. The timing loop would have shifted the sampling instant forward until $d$ is identified as the sample at $mT_M-T_M/2$. However, if the three TED samples were chosen as $c$, $d$ and $e$ at the start, $$\label{eqELTEDexample} e[m] = d\cdot (c-e) < 0$$ and $d$ would have gone to instant $(m-1)T_M$ setting an underflow flag. We can say that in this game of $3$ samples, the middle sample always approaches the symbol center.
## Zero Crossing Timing Error Detector (TED)
Now let us see what happens when a different TED is formed from the same samples as in Eq (\ref{eqELTEDexample}) but with a negative derivative term.
\begin{equation}\label{eqZCTEDexample}
e[m] = d\cdot\big\{-(c-e)\big\} = d\cdot (e-c) > 0
\end{equation}
Thus, the sampling instant will be pulled forwards until the middle sample $d$ reaches $mT_M-T_M/2$, i.e., in this game of $3$ samples, the middle sample approaches the zero crossing. Then, the left neighbouring sample $e$ will coincide with symbol $a[m-1]$ while the right neighbouring sample $c$ will coincide with $a[m]$. From this observation, Eq (\ref{eqEL}) and Eq (\ref{eqZCTEDexample}), we can write the expression for the new TED as
\begin{align}\label{eqGardner}
e[m] = z\left(mT_M-\frac{T_M}{2}+\hat \epsilon _\Delta\right) \cdot\Big\{z\left((m-1)T_M+\hat \epsilon _\Delta\right) - z\left(mT_M+\hat \epsilon _\Delta\right)\Big\}
\end{align}
which is nothing but the Gardner TED, a non-data-aided version of a general idea known as a Zero Crossing TED (ZC-TED).
The converging locations of the EL-TED and the ZC-TED are shown in the figure below. We conclude from Eq (\ref{eqZCTEDexample}) that negating the slope in a maximum likelihood TED makes the algorithm target the zero crossings of the waveform. While one of the samples aligns with the zero crossings, the other sample automatically aligns with the maximum eye opening due to $T_M/2$ spacing between them.
As opposed to the maximum likelihood perspective in Eq (\ref{eqSymbolCentricIdea}), the zero crossing idea works as follows.
\begin{equation}\label{eqZeroCrossingIdea}
e[m] = \underbrace{z(mT_M+\hat \epsilon _\Delta)}_{\text{drive this term towards zero}} \cdot \big\{-z'(mT_M+\hat \epsilon _\Delta)\big\}
\end{equation}
Clearly, Gardner TED is not an approximation of the maximum likelihood TED. It exploits the fact that the real purpose of a TED is not necessarily finding the maximum of the likelihood function but instead generating an error signal $e[m]$ that converges towards zero.
## The Error and Clarification
However, [2] and numerous other references, including Gardner himself [1], have expressed the TED as
\begin{align}\label{eqGardner2}
e[m] = z\left(mT_M-\frac{T_M}{2}+\hat \epsilon _\Delta\right) \cdot
\Big\{z\left(mT_M+\hat \epsilon _\Delta\right) - z\left((m-1)T_M+\hat \epsilon _\Delta\right)\Big\},
\end{align}
a negative version of the one in Eq (\ref{eqGardner}). Applying an analysis similar to that in Eq (\ref{eqELTEDexample}), such an expression eventually converges towards the EL-TED form in Eq (\ref{eqEL}).
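That the two published forms differ only by a sign flip can be made explicit in two lines (the sample values are arbitrary):

```python
def ted_form_a(mid, left, right):
    # Eq (eqGardner): mid * (left - right)
    return mid * (left - right)

def ted_form_b(mid, left, right):
    # Eq (eqGardner2): mid * (right - left)
    return mid * (right - left)

for samples in [(0.3, -0.9, 0.8), (-0.5, 0.2, 0.7)]:
    assert ted_form_a(*samples) == -ted_form_b(*samples)
```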
Having these two different expressions, i.e., Eq (\ref{eqGardner}) and Eq (\ref{eqGardner2}), has been a source of confusion and has sometimes led to erroneous application of the Gardner TED. Next, we discuss the root cause of the difference between these expressions.
An S-curve for a carrier phase and carrier frequency error detector always has a positive slope at the origin. While many scientists also link the proper operation of a timing error detector with a positive slope at the origin, many others treat the S-curve as having a negative slope. The reason is as follows.
• A timing error $\epsilon _\Delta$ is introduced in a PAM waveform as
\begin{equation*}
z(t) = \sum \limits _i a[i] r_p(t -iT_M -\epsilon _\Delta)
\end{equation*}
It is sampled at $t = nT_S+\hat \epsilon _\Delta$ which yields
\begin{align}
z(nT_S +\hat \epsilon _\Delta) &= \sum \limits _i a[i] r_p\Big[ nT_S+\hat \epsilon _\Delta -iT_M-\epsilon _\Delta\Big]\nonumber \\
&= \sum \limits _i a[i] r_p\Big[nT_S -iT_M-\epsilon _{\Delta:e}\Big]\label{eqTimingSyncNegTimingError}
\end{align}
where
\begin{equation*}
\epsilon _{\Delta:e} \equiv \epsilon _\Delta-\hat \epsilon _\Delta
\end{equation*}
When $\epsilon _{\Delta:e}>0$, i.e., $\epsilon _\Delta > \hat \epsilon _\Delta$, our estimate $\hat \epsilon _\Delta$ should increase. Similarly, when $\epsilon _{\Delta:e} < 0$, i.e., $\epsilon _\Delta<\hat \epsilon _\Delta$, our estimate $\hat \epsilon _\Delta$ should decrease: the resulting S-curve has a positive slope at the origin.
• On the other hand, some omit $\epsilon _\Delta$ from the incoming waveform as
\begin{equation*}
z(t) = \sum \limits _i a[i] r_p(t -iT_M)
\end{equation*}
and assume that the Rx has the responsibility of properly adjusting the timing shift $\hat \epsilon _\Delta$. In this case, the matched filter output, again sampled at $t=nT_S+\hat \epsilon _\Delta$, is
\begin{equation*}
z(nT_S +\hat \epsilon _\Delta) = \sum \limits _i a[i] r_p\Big[nT_S-iT_M+ \hat \epsilon _\Delta\Big]
\end{equation*}
The overall timing error, while having a negative sign in Eq (\ref{eqTimingSyncNegTimingError}), appears with a positive sign here. When our estimate $\hat \epsilon _\Delta<0$, it should increase. Similarly, when $\hat \epsilon _\Delta > 0$, it should decrease: the resulting S-curve has a negative slope at the origin.
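The two conventions can be contrasted numerically. The cosine stand-in for the matched filter output below is an assumption for illustration only; both cases feed the same Gardner form of Eq (\ref{eqGardner}). Placing the timing offset in the waveform yields a positive error when the estimate lags, while modelling only the Rx shift $\hat \epsilon _\Delta$ yields a negative error for a positive shift, i.e., an S-curve of opposite slope:

```python
import numpy as np

T_M = 1.0

def gardner(z, m, eps_hat):
    # Gardner form of Eq (eqGardner): mid * (left - right)
    mid = z(m * T_M - T_M / 2 + eps_hat)
    return mid * (z((m - 1) * T_M + eps_hat) - z(m * T_M + eps_hat))

# Convention 1: the timing offset eps lives in the waveform itself
eps = 0.05
z1 = lambda t: np.cos(np.pi * (t - eps) / T_M)   # alternating-symbol stand-in
print(gardner(z1, 1, eps_hat=0.0))   # > 0: estimate lags the truth, pushed up

# Convention 2: no offset in the waveform; only the Rx shift appears
z2 = lambda t: np.cos(np.pi * t / T_M)
print(gardner(z2, 1, eps_hat=0.05))  # < 0: positive shift pulled back down
```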
The scientists following the former approach define the Gardner TED as in Eq (\ref{eqGardner}) and the EL-TED as in Eq (\ref{eqGardner2}), which converges to Eq (\ref{eqEL}). On the other hand, those following the latter approach define the Gardner TED as in Eq (\ref{eqGardner2}). The confusion between a Gardner TED and an EL-TED is thus clarified.
This led [2] to mistake the Gardner TED for another approximation to the maximum likelihood TED, although the real approximation was the EL-TED. Interestingly, this is also what led Gardner himself to derive the TED in [1] through the former approach but then reverse the sign of the error signal at the last moment so that the TED slope would become negative at the origin. According to him, “the reversal of sign has no significance in the formal manipulations or in the processor’s computation burden, but assures negative slope at the tracking point of the detector output”.
### References
[1] F. M. Gardner, “A BPSK/QPSK timing-error detector for sampled receivers,” IEEE Transactions on Communications, Vol. 34, No. 5, May 1986.
[2] M. Oerder, “Derivation of Gardner’s timing-error detector from the maximum likelihood principle,” IEEE Transactions on Communications, Vol. 35, No. 6, 1987.
http://www.participatorymuseum.org/r6radmy/tangent-in-physics-6f52d3 | ## tangent in physics
Also, it will cover many other geometrical shapes like circles. You start with the magnitude of the angular acceleration, which tells you how […] As the name suggests, tangential velocity describes the motion of an object along the edge of this circle whose direction at any gi… Our tips from experts and exam survivors will help you through. Yes a tangent is a straight line thattouches a curve at only one point But there is a tangent ratio used in trigonometry What is the tangent of 62? The law of tangents describes the relationship between the tangent of two angles of a triangle and the lengths of the opposite sides. Tangent Physics is on Facebook. I have thought that maybe electrons experience some sort of tangent … At the point of tangency, a tangent is perpendicular to the radius. Similarly, if you are finding the gradient to the curve of a velocity-time graph, then you are calculating the acceleration of the object at that particular time. Teacher’s copy of the handout includes complete notes and answers to questions. Trigonometry for Physics There are 3 trig functions that you will use on a regular basis in physics problems: sine, cosine and tangent. The tangential velocity is measured at any point tangent to a rotating wheel. The tangent law of magnetism is a way to contrast the strengths of two magnetic fields that are perpendicular to each other. Select any two points on the tangent. 1 decade ago why do we usually use sin instead of cosine or tangent in physics? PCB MADE EASY. GCSE physics worksheet/handout on tangent on distance time graph. The new wavefront is a line tangent to all of the wavelets. The tangent line represents the instantaneous rate of change of the function at that one point. Time should be in the x-direction and displacement should be in the y-direction. In a right angled triangle, the tangent of an angle is: The length of the side opposite the angle divided by the length of the adjacent side. 
Point of tangency is the point where the tangent touches the circle. In other words, we can say that the lines that intersect the circles exactly in one single point are Tangents. Tangent definition, in immediate physical contact; touching. Thus angular velocity, ω, is related to tangential velocity, Vt through formula: Vt = ω r. Here r is the radius of the wheel. To construct the tangent to a curve at a certain point A, you draw a line that follows the general direction of the curve at that point. Not Now. +91 8826514003; Once the tangent is found you can use it to find the gradient of the graph by using the following formula: $\text{Gradient to the curve =}~\frac {y_2-y_1} {x_2-x_1}$ The abbreviation is tan. We wil… Trigonometry is an important branch of Mathematics. Illustrated definition of Tangent (line): A line that just touches a curve at a point, matching the curves slope there. Join Facebook to connect with Tangent Physics and others you may know. Phone . At one point he was a good swimmer and likes to draw cartoon sheep as he can't quite get the hang of people. Estimate the velocity of the car at $$\text{t = 6.5 s}$$. You can choose any coordinate on the tangent. I have chosen (2.5 , 4) and (4 , 60). Log In. Similarly, any vector in the tangent space at a point could be a le The tangent to these wavelets shows that the new wavefront has been reflected at an angle equal to the incident angle. Gradient to the curve $$= \frac {-3~-9} {0~-~(-4)}~=~\frac {-12} {4}~=~{-3}$$. Contact. There's your trig. Let us learn it! Plz answer as simple as possible, i am not studying a very advance level of physics :) $$\frac {y_2-y_1} {x_2-x_1}$$ where $$({x_1,~y_1})~=~({-4},{~9})$$ and $$({x_2,~y_2})~=~({0},~{-3})$$ are any two points on the tangent to the curve. Log In. The inverse hyperbolic functions are: It is useful to remember that all lines and curves that slope upwards have a positive gradient. 
(noun) Its working is based on the principle of the tangent law of magnetism. Then use the formula below: $\frac {2000~-~0} {2.5~-~1}~=~\frac {2000} {1.5}~=~1333.33$. Read about our approach to external linking. InvestigatoryProject Physics Royal Gondwana Public School & Junior College Rushikesh Shendare Class XII 2. a straight line touching a curve at a single point without crossing it at that point, a straight line touching a curve at a single point without crossing it at that point. Whenever you deal with vectors in physics, you probably need to use trig. For example, a capacitor incorporated in an alternating-current circuit is alternately charged and discharged each half cycle. Estimate the velocity of the car at, Constructing and using tangents - Higher tier only - WJEC, Home Economics: Food and Nutrition (CCEA). The coordinates that we are using are (1, 0) and (2.5, 2000). Huygens' Principle. If a particle moves around on a curved surface (in any manner so long as it stays on the surface), it's velocity vector will always lie in the tangent space to the point where it is at. In circular motion, acceleration can occur as the magnitude of the velocity changes: a is tangent to the motion. And, as his wife puts it, he likes goblins and stuff, though is a little stronger in her language. When a current is passed through the circular coil, a magnetic field (B) is produced at the center of the coil in a direction perpendicular to the plane of the coil. B 1 and B 2 are two uniform magnetic fields acting at right angles to each other. There is a „Tangent” option in Comsol’s Geometry —> Operations toolbar for 2D drawings. All lines and curves that slope downwards have a negative gradient. Physics Earth magnetic field using tangent galvanometer 1. Create New Account. Select any two points on the tangent. It's called the tangent function. 
When a magnet is exposed to a magnetic field B that is perpendicular to the Earth’s horizontal magnetic field (Bh), the magnetic field will rest at an angle theta. First draw the tangent at $$\text{x = -2}$$. Instananeous Velocity: A Graphical Interpretation, Simple Harmonic Motion and Uniform Circular Motion, Instantaneous velocity is the velocity of an object at a single point in time and space as calculated by the slope of the, The velocity of an object at any given moment is the slope of the, The velocity at any given moment is deï¬ned as the slope of the, In circular motion, there is acceleration that is, In circular motion, acceleration can occur as the magnitude of the velocity changes: a is, It should also be noted that at any point, the direction of the electric field will be, Thus, given charges q1, q2 ,... qn, one can find their resultant force on a test charge at a certain point using vector addition: adding the component vectors in each direction and using the inverse, We know that the electric field vanishes everywhere except within a cone of opening angle $1/\gamma$, so a distance observer will only detect a significant electric field while the electron is within an angle $\Delta \theta/2 \sim 1/\gamma$of the point where the path is, Furthermore, we can see that the curves of constant entropy not only pass through the corresponding plots in the plane (this is by design) but they are also, so the Mach numbers on each side of the shock are given by the ratio of the slope of the secant to the slope of the, Because all of the adiabats are concave up in the $p-V-$plane, the slope of the secant must be larger than that of the, Conversely at $(p_2,V_2)$the slope of the secant must be small than that of the, An overall resultant vector can be found by using the Pythagorean theorem to find the resultant (the hypotenuse of the triangle created with applied forces as legs) and the angle with respect to a given axis by equating the inverse, Velocity v and 
acceleration a in uniform circular motion at angular rate Ï; the speed is constant, but the velocity is always, We see from the figure that the net force on the bob is, (The weight mg has components mgcosθ along the string and mgsinθ, A consequence of this is that the electric field may do work and a charge in a pure electric field will follow the. Some stuff about functions. Related Pages. or. ... TANGENT AND NORMAL. Answer: The tangent law of magnetism is a way of measuring the strengths of two perpendicular magnetic fields. It is defined as: A tangent line is a straight line that touches a function at only one point. The coordinates that we are using are (-4, 9) and (0, -3). In physics, tangential acceleration is a measure of how the tangential velocity of a point at a certain radius changes with time. Whether it is to complete geometrical work on circles or find gradients of curves, being able to construct and use tangents as well as work out the area under graphs are useful skills in mathematics. hyperbolic tangent "tanh" (/ ˈ t æ ŋ, ˈ t æ n tʃ, ˈ θ æ n /), hyperbolic cosecant "csch" or "cosech" (/ ˈ k oʊ s ɛ tʃ, ˈ k oʊ ʃ ɛ k /) hyperbolic secant "sech" (/ ˈ s ɛ tʃ, ˈ ʃ ɛ k /), hyperbolic cotangent "coth" (/ ˈ k ɒ θ, ˈ k oʊ θ /), corresponding to the derived trigonometric functions. During the alternation of polarity of the plates, the charges must In geometry, the tangent line (or simply tangent) to a plane curve at a given point is the straight line that "just touches" the curve at that point. Facebook gives people the power to share and makes the world more open and connected. See more. i always wonder what is so special about sin in trigonometry, we usually use sin in physics for example in refraction etc. tan (θ) = opposite / adjacent. We want to find the gradient of the curve at $$\text{x = -2}$$. Forgot account? Estimate the gradient to the curve in the graph below at point A. 
Just keep in mind that this software has limited capabilities when it comes to modeling and it might be easier to create geometry in CAD software and the import it to Comsol (maybe you should use different format or change the way you model parts in SolidWorks). Dielectric loss, loss of energy that goes into heating a dielectric material in a varying electric field. Hence using the coordinated below the … More precisely, a straight line is said to be a tangent of a curve y = f(x) at a point x = c if the line passes through the point (c, f(c)) on the curve and has slope f'(c), where f' is the derivative of f. A similar definition applies to space curves and curves in n-dimensional Euclidean space. We want to find the gradient of the curve at, $$= \frac {-3~-9} {0~-~(-4)}~=~\frac {-12} {4}~=~{-3}$$, The following graph shows the car journey from Chelsea’s house to her mother’s house. Several theorems are related to this because it plays a significant role in geometrical constructionsand proofs. Here we will study about the Tangent Law and Tangent Galvanometer Experiment with Construction & Working. Learning objective: Calculate the speed of an object from the gradient of a tangent on a distance-time graph. The tangent has been drawn for you. $\frac {140~-~20} {9~-~4}~=~\frac {120} {5}~=~24~ \text{m/s}^{2}$. Standard position diagram Sine Cosine Tangent Reciprocal functions Cosecant Secant Cotangent This topic will explain the tangent formula with examples. Definition of tangent (Entry 2 of 2) 1 a : meeting a curve or surface in a single point if a sufficiently small interval is considered straight line tangent to a curve. THREE DIMENSIONS GEOMETRY. are any two points on the tangent to the curve. where $$({x_1,~y_1})$$ and $$({x_2,~y_2})$$ are any two points on the tangent to the curve. Except where noted, content and user contributions on this site are licensed under CC BY-SA 4.0 with attribution required. 
It provides the relationships between the lengths and angles of a triangle, especially the right-angled triangle. or. He likes physics, which should tell you all you need to know about him. If ‘ P 1 ‘ be the projection of the point P on the x-axis then TP 1 is called the sub-tangent (projection of line segment PT on the x-axis) and NP 1 is called the sub normal (projection of line segment … An easy way to remember them is: SOH CAH TOA opposite sinθ = hypotenuse adjacent cosθ = hypotenuse opposite tanθ = adjacent The Pythagorean theorem is another formula that you will use frequently in physics. Now that you have drawn a tangent at the point that we want (3,27) you will need to choose any two coordinates on the tangent line. Physics made easy. After drawing the curve (which is the right side of an upward parabola), place a ruler so that it touches the curve only at the data point (0.2sec, 3.0cm). Science > Physics > Magnetic Effect of Electric Current > Tangent Galvanometer In this article, we shall study, the principle, construction, working, sensitivity, and accuracy of the tangent … An example of this can be seen below. Here's how I like to think about it. Leibniz defined it as the line through a pair of infinitely close points on the curve. Tangential acceleration is just like linear acceleration, but it’s specific to the tangential direction, which is relevant to circular motion. The following graph shows the car journey from Chelsea’s house to her mother’s house. Create New Account. Once the tangent is found you can use it to find the gradient of the graph by using the following formula: $\text{Gradient to the curve =}~\frac {y_2-y_1} {x_2-x_1}$. I have included both the PDF and DOC version of the same handout for your ease of use. See more of Physics made easy on Facebook. In physics or mathematics tangent has same concept. Tangential velocity is the component of motion along the edge of a circle measured at any arbitrary instant. 
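The gradient-of-a-tangent estimate is just a rise-over-run calculation between any two points read off the tangent line. A minimal sketch, using the coordinates from the worked examples:

```python
def tangent_gradient(p1, p2):
    """Gradient of the tangent line through points (x1, y1) and (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

# Tangent at x = -2 through (-4, 9) and (0, -3):
print(tangent_gradient((-4, 9), (0, -3)))     # -3.0

# Distance-time graph, tangent through (1, 0) and (2.5, 2000):
print(tangent_gradient((1, 0), (2.5, 2000)))  # 1333.33...
```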
http://book.imt-decal.org/Appendix/B%20Complex%20Numbers.html | Complex Numbers – Appendix
Complex Numbers
To thoroughly introduce complex numbers, we need to talk about imaginary numbers. Imaginary numbers stem from the fact that the square root of a negative number is undefined over the set of real numbers – there is no real number $x$ such that $x = \sqrt{-25}$. In order to accommodate square roots of negative numbers, we introduce the imaginary unit $i$.
Definition: $i$, the imaginary unit
The imaginary unit $i$ is defined by the property $i^2 = -1$.
Often you see $i$ being defined as $i = \sqrt{-1}$; however, this notation invites confusion, as it may suggest that $i^2 = \sqrt{-1} \cdot \sqrt{-1} = \sqrt{(-1) \cdot (-1)} = \sqrt{1} = 1$, which is not true. It turns out that the familiar radical rule $\sqrt{xy} = \sqrt{x} \cdot \sqrt{y}$ only applies for non-negative real values of $x$ and $y$.
There is a cyclical nature of $i$ and its powers: $i^2 = -1$, $i^3 = i^2 \cdot i = -i$, and $i^4 = i^2 \cdot i^2 = (-1)^2 = 1$. This pattern repeats: $i^5 = i$, $i^6 = -1$, and so on. (We can formalize this once we learn about modular arithmetic.)
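Python's built-in complex type, with `1j` playing the role of $i$, makes the four-step cycle easy to see:

```python
# Powers of i cycle with period 4: i, -1, -i, 1, i, -1, ...
for n in range(1, 9):
    print(n, (1j) ** n)

assert (1j) ** 2 == -1
assert (1j) ** 5 == (1j) ** 1   # exponents that differ by 4 agree
```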
Definition: Imaginary numbers
An imaginary number is a number of the form $bi$, where $b$ is a real number.
Imaginary numbers, then, are scalar multiples of $i$. The imaginary numbers $bi$ and $-bi$ are the roots of the polynomial $x^2 + b^2 = 0$, as $(bi)^2 = (-bi)^2 = b^2 \cdot (-1) = -b^2$, and $-b^2 + b^2 = 0$. The idea of polynomial roots is one we will extend further in the polynomials chapter.
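For instance, with $b = 3$ the numbers $3i$ and $-3i$ both satisfy $x^2 + 9 = 0$, which a quick check confirms:

```python
b = 3
for x in (b * 1j, -b * 1j):
    assert x ** 2 + b ** 2 == 0   # (±bi)^2 = -b^2, so the sum vanishes
```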
Definition: Complex numbers
The set of complex numbers, denoted by $\mathbb{C}$, is the set of all numbers of the form $a + bi$, where $a$ and $b$ are real numbers and $i$ is the imaginary unit. We say $a$ is the real component of $a + bi$, whereas $bi$ is the imaginary component.
$\mathbb{C} = \{ a + bi : a, b \in \mathbb{R},\ i^2 = -1 \}$
With regard to the other number sets, $\mathbb{N} \subset \mathbb{N}_0 \subset \mathbb{Z} \subset \mathbb{Q} \subset \mathbb{R} \subset \mathbb{C}$.
This form is known as rectangular form (as opposed to polar form, which is out of the scope of this course). We can easily write any real number as a complex number by setting $b = 0$. We can write any imaginary-only number as a complex number by setting $a = 0$.
The question of whether or not the complex numbers are countable is now simple – since the set of complex numbers is a superset of the set of real numbers, and the real numbers already are not countable, the complex numbers are also not countable.
One issue we run into with complex numbers is plotting them – we could represent any real number as a point on the number line, but how do we represent a complex number? For that, we introduce the complex plane.
Perpendicular to the real number line, also known as the real axis, is the imaginary axis. The number $a + bi$ is represented by the point $(a, b)$ in the complex plane, as the example from above shows. Even though the examples all have integral $a, b$, it doesn’t mean that $\pi + \sqrt{5}i$ isn’t a valid complex number – we can plot it just the same.
Complex Arithmetic
We will now look at basic arithmetic operations on the complex numbers.
Suppose we have two complex numbers $z = a + bi$ and $w = c + di$. Then,
$z + w = (a + bi) + (c + di) = (a+c) + (b+d)i$
$z - w = (a-c) + (b-d)i$
as you might expect. Notice that the sum of two complex numbers is always complex, but is real only when $b + d = 0$, or $b = -d$.
Multiplication
$z \cdot w = (a + bi)(c + di) = ac + adi + bci + bdi^2 = (ac - bd) + (ad + bc)i$
Notice that the product of two complex numbers is always complex, but is real only when $ad + bc = 0$. For example, suppose $z = a + bi$ and $w = a - bi$: then, the product $zw$ is real. It turns out that there is a special relationship between the given examples of $z$ and $w$.
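As a sanity check, the rectangular multiplication formula can be implemented directly and compared against Python's built-in complex product (the example values here are arbitrary):

```python
def cmul(z, w):
    """(a + bi)(c + di) = (ac - bd) + (ad + bc)i, per the formula above."""
    a, b = z.real, z.imag
    c, d = w.real, w.imag
    return complex(a * c - b * d, a * d + b * c)

z, w = 2 + 3j, 1 - 4j
assert cmul(z, w) == z * w         # matches the built-in operator
print(cmul(z, w))                  # (14-5j)

# The real-product condition ad + bc = 0 holds for a conjugate pair:
assert cmul(2 + 3j, 2 - 3j).imag == 0
```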
Definition: Complex conjugate
The complex conjugate of the number $z = a + bi$ is defined by $\bar{z} = a - bi$.
Complex conjugates are defined this way for the following reason:
$z\bar{z} = (a + bi)(a - bi) = a^2 - abi + abi - b^2i^2 = a^2 - b^2i^2 = a^2 + b^2$
When multiplying a complex number by its conjugate, the imaginary parts fall out and we’re left with a real number. In particular, this real number $z\bar{z} = |z|^2$ is the square of the magnitude of the complex number, i.e. the squared distance from 0.
The product of two complex numbers may just look like gibberish, but it turns out that an interesting property holds: the magnitude of the product of two complex numbers is the product of the magnitudes of the two numbers – that is, $|z \cdot w| = |z| \cdot |w|$. Let’s show that this is true:
$|z \cdot w|^2 = (ac - bd)^2 + (ad + bc)^2 \\ = a^2c^2 - 2abcd + b^2d^2 + a^2d^2 + 2abcd + b^2c^2 \\ = a^2(c^2 + d^2) + b^2(c^2 + d^2) \\ = (a^2 + b^2)(c^2 + d^2) \\ = |z|^2 \cdot |w|^2$, and taking square roots of both sides gives $|z \cdot w| = |z| \cdot |w|$.
This result is part of a theorem known as De Moivre’s Theorem, which we won’t talk about here, but you may research for your own pleasure.
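The magnitude identity $|z \cdot w| = |z| \cdot |w|$ and the conjugate identity $z\bar{z} = |z|^2$ are both easy to spot-check in Python (floating-point rounding aside):

```python
import math

z, w = 3 + 4j, 1 - 2j

# |z * w| == |z| * |w|, up to floating-point rounding.
assert math.isclose(abs(z * w), abs(z) * abs(w))

# z * conj(z) is real and equals |z|^2.
prod = z * z.conjugate()
assert prod.imag == 0
assert math.isclose(prod.real, abs(z) ** 2)

print(abs(z))   # 5.0, since |3 + 4i| = sqrt(9 + 16)
```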
Division
Conjugates allow us to perform division with complex numbers. Again, consider $z = a + bi$ and $w = c + di$. Then:
$\frac{z}{w} = \frac{a + bi}{c + di} = \frac{a + bi}{c + di} \cdot \frac{c-di}{c-di} \\ = \frac{(ac + bd) + (bc - ad)i}{c^2 + d^2} \\ = \frac{ac + bd}{c^2 + d^2} + \frac{bc - ad}{c^2 + d^2}i$
We used the fact that $w \cdot \bar{w}$ simplifies to a real number in order to simplify the quotient.
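The same conjugate trick translates directly into code; this sketch divides two complex numbers by hand and checks the result against Python's `/` operator:

```python
def cdiv(z, w):
    """Divide z by w using the conjugate: z/w = z*conj(w) / (w*conj(w))."""
    a, b = z.real, z.imag
    c, d = w.real, w.imag
    denom = c * c + d * d              # w * conj(w), a positive real number
    return complex((a * c + b * d) / denom, (b * c - a * d) / denom)

z, w = 4 + 2j, 1 + 1j
print(cdiv(z, w))                      # (3-1j)
assert cdiv(z, w) == z / w             # agrees with the built-in division
```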
Connection to Set Theory
Here is a more complete diagram depicting the sets we’ve learned about in this course:
There is a lot more to complex numbers than we described here; this introduction is far from complete. However, for the purposes of our course, this will suffice. | 2019-01-22 00:28:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9208317995071411, "perplexity": 121.70248816480867}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583822341.72/warc/CC-MAIN-20190121233709-20190122015709-00268.warc.gz"} |
https://brilliant.org/problems/storm-in-a-bottle/ | # Storm in a bottle
The contraption shown above forms an airtight seal between two soda bottles, allowing water from one to flow into the other. The flow can start in one of two ways:
• Flip the system over at $$t=0$$ and let gravity pull the water down.
• Flip the system over at $$t=0$$, and give the system a slight twirl, forming a vortex. The center of a vortex is an empty hole through which air can pass.
Which of the two flows will let the water reach the bottom first?
× | 2017-09-19 20:59:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5528847575187683, "perplexity": 1261.838939906883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686034.31/warc/CC-MAIN-20170919202211-20170919222211-00594.warc.gz"} |
https://tex.stackexchange.com/questions/361947/how-to-make-subequations | # How to make subequations
I need equation (1) followed by equation (1.a), (1.b), etc. But I am getting equation (1), then (2.a) (2.b), etc.
\documentclass{llncs}
\usepackage{amsmath}
\usepackage{placeins}
\begin{document}
$$Result = X + \sum\limits_{i=0}^{n}Y$$
\begin{subequations}
\begin{align}
X &= ab \\ Y &= cd
\end{align}
\end{subequations}
\end{document}
• @Mico it worked. Why not adding this as answer? – user6875880 Apr 3 '17 at 17:18
• Because the method in egreg's answer is far more general. \addtocounter{equation}{-1} is quite hackish in comparison. – Mico Apr 3 '17 at 17:22
You can use \ref:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{subequations}\label{whatever}
\begin{gather}\tag{\ref{whatever}}
\mathrm{Result} = X + \sum\limits_{i=0}^{n}Y
\\
\begin{align}
X &= ab \\ Y &= cd
\end{align}
\end{gather}
\end{subequations}
\end{document}
Note how you can get the right vertical spacing instead of using equation and align with align nested in gather.
It's the same with llncs. | 2020-09-20 04:16:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 1.0000096559524536, "perplexity": 3832.194614974975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400193391.9/warc/CC-MAIN-20200920031425-20200920061425-00488.warc.gz"} |
https://peeterjoot.wordpress.com/tag/connection-formulas/ |
# Posts Tagged ‘connection formulas’
## An updated compilation of notes, for ‘PHY452H1S Basic Statistical Mechanics’, Taught by Prof. Arun Paramekanti
Posted by peeterjoot on March 3, 2013
That compilation now includes all of the following too (no further updates will be made to any of these):
February 28, 2013 Rotation of diatomic molecules
February 28, 2013 Helmholtz free energy
February 26, 2013 Statistical and thermodynamic connection
February 24, 2013 Ideal gas
February 16, 2013 One dimensional well problem from Pathria chapter II
February 15, 2013 1D pendulum problem in phase space
February 14, 2013 Continuing review of thermodynamics
February 13, 2013 Lightning review of thermodynamics
February 11, 2013 Cartesian to spherical change of variables in 3d phase space
February 10, 2013 n SHO particle phase space volume
February 10, 2013 Change of variables in 2d phase space
February 10, 2013 Some problems from Kittel chapter 3
February 07, 2013 Midterm review, thermodynamics
February 06, 2013 Limit of unfair coin distribution, the hard way
February 05, 2013 Ideal gas and SHO phase space volume calculations
February 03, 2013 One dimensional random walk
February 02, 2013 1D SHO phase space
February 02, 2013 Application of the central limit theorem to a product of random vars
January 31, 2013 Liouville’s theorem questions on density and current
January 30, 2013 State counting
## One dimensional well problem from Pathria chapter II
Posted by peeterjoot on February 16, 2013
[Click here for a PDF of this post with nicer formatting]
Problem 2.5 [2] asks to show that
\begin{aligned}\oint p dq = \left( { n + \frac{1}{{2}} } \right) h,\end{aligned} \hspace{\stretch{1}}(1.0.1)
provided the particle’s potential is such that
\begin{aligned}m \hbar \left\lvert { \frac{dV}{dq} } \right\rvert \ll \left( { m ( E - V ) } \right)^{3/2}.\end{aligned} \hspace{\stretch{1}}(1.0.2)
I took a guess that this was actually the WKB condition
\begin{aligned}\frac{k'}{k^2} \ll 1,\end{aligned} \hspace{\stretch{1}}(1.0.3)
where the WKB solution was of the form
\begin{aligned}k^2(q) = 2 m (E - V(q))/\hbar^2\end{aligned} \hspace{\stretch{1}}(1.0.4a)
\begin{aligned}\psi(q) = \frac{1}{{\sqrt{k}}} e^{\pm i \int k(q) dq}.\end{aligned} \hspace{\stretch{1}}(1.0.4b)
The WKB validity condition is
\begin{aligned}1 \gg \frac{-2 m V'}{\hbar} \frac{1}{{2}} \frac{1}{{\sqrt{2 m (E - V)}}} \frac{\hbar^2}{2 m(E - V)}\end{aligned} \hspace{\stretch{1}}(1.0.5)
or
\begin{aligned}m \hbar \left\lvert {V'} \right\rvert \ll \left( {2 m (E - V)} \right)^{3/2}.\end{aligned} \hspace{\stretch{1}}(1.0.6)
This differs by a factor of $2 \sqrt{2}$ from the constraint specified in the problem, but I’m guessing that constant factors of that sort have just been dropped.
Even after figuring out that this question was referring to WKB, I didn’t know what to make of the oriented integral $\int p dq$. With $p$ being an operator in the QM context, what did this even mean? I found the answer in [1] section 12.12. Here $p$ just means
\begin{aligned}p(q) = \hbar k(q),\end{aligned} \hspace{\stretch{1}}(1.0.7)
where $k(q)$ is given by eq. 1.0.4a. The rest of the problem can also be found there and relies on the WKB connection formulas, which aren’t derived in any text that I own. Quoting results based on other results whose origin I don’t know isn’t worthwhile, so that’s as far as I’ll attempt this question (but I do plan to eventually look up and understand those WKB connection formulas, and then see how they can be applied in a problem like this).
# References
[1] D. Bohm. Quantum Theory. Courier Dover Publications, 1989.
[2] RK Pathria. Statistical mechanics. Butterworth Heinemann, Oxford, UK, 1996. | 2020-02-25 10:35:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7268203496932983, "perplexity": 3344.7431926894187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146064.76/warc/CC-MAIN-20200225080028-20200225110028-00478.warc.gz"} |
https://www.mathstoon.com/sn-dey-class-11-set-theory-multiple-choice-questions-solutions/ | # SN Dey Class 11 Set Theory Multiple Choice Questions Solutions
SN Dey Class 11 Solutions | SN Dey Class 11 Set Theory Solutions | Class 11 Set Theory MCQ | SN Dey Solutions | S N Dey Math Solutions | Set Theory S N Dey Solutions | WBCHSE Class 11 Math Solutions
## SN Dey Class 11 Set Theory MCQ Solutions
Choose the correct option:
Ex 1: The number of subsets in a set consisting of four distinct elements is-
(A) 4 (B) 8 (C) 16 (D) 64
We know that the number of subsets in a set consisting of n distinct elements is equal to 2^n.
Here, 2^4 = 16.
∴ option (C) is correct.
Ex 2: The number of proper subsets in a set consisting of five distinct elements is-
(A) 5 (B) 10 (C) 32 (D) 31
We know that the number of proper subsets in a set consisting of n distinct elements is equal to 2^n - 1.
Here, 2^5 - 1 = 32 - 1 = 31.
∴ option (D) is correct.
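Both counting formulas can be verified by brute-force enumeration; a short Python sketch using `itertools`:

```python
from itertools import combinations

def all_subsets(s):
    """List every subset of a finite set, including the empty set and s."""
    items = list(s)
    return [set(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

s = {1, 2, 3, 4, 5}
subsets = all_subsets(s)
assert len(subsets) == 2 ** 5                    # 32 subsets (Ex 1's rule)
proper = [t for t in subsets if t != s]
assert len(proper) == 2 ** 5 - 1                 # 31 proper subsets (Ex 2)
```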
Ex 3: If x ∈ A ⇒ x ∈ B then –
(A) A=B (B) A⊂B (C) A⊆B (D) B⊆A
As x ∈ A ⇒ x ∈ B, then we must have that either A is a proper subset of B or they are equal. So the answer is A ⊆ B.
∴ option (C) is correct.
Ex 4: If A⊆B and B⊆A then –
(A) A=Φ (B) A∩B=Φ (C) A=B (D) None of these
As A⊆B and B⊆A, then it follows that A=B.
So option (C) is correct.
Ex 5: For two sets A and B, if A∪B = A∩B then –
(A) A⊆B (B) B⊆A (C) A=B (D) None of these
Let x ∈ A. Then x ∈ A∪B, and since A∪B = A∩B, it follows that x ∈ A∩B, so x ∈ B.
Thus A ⊆ B …..(i)
Similarly, if x ∈ B, then x ∈ A∪B = A∩B, so x ∈ A.
Thus B ⊆ A …..(ii)
So from (i) and (ii), we obtain that A = B.
∴ option (C) is correct.
Ex 6: A – B = Φ iff – [Council Sample Question ’13]
(A) A≠B (B) A⊂B (C) B⊂A (D) A∩B=Φ
We know that A-B ={x: x ∈ A but x ∉B}. Thus A-B=Φ means that A is fully contained in B, that is, A⊂B.
∴ option (B) is correct.
Ex 7: If A∩B = B then –
(A) A⊆B (B) B⊆A (C) A=B (D) A=Φ
As B=A∩B, we have B ⊆ A∩B.
⇒ B ⊆ A and B ⊆ B.
⇒ B ⊆ A is true.
∴ option (B) is correct.
Ex 8: If A and B are two disjoint sets then n(A∪B)=
(A) n(A)+n(B) (B) n(A)-n(B) (C) 0 (D) None of these
As A and B are disjoint sets, so we have A∩B = Φ. Thus n(A∩B)=0.
Now, we know that
n(A∪B) = n(A)+n(B)-n(A∩B)
= n(A)+n(B)-0
= n(A)+n(B)
∴ option (A) is correct.
Ex 9: For any two sets A and B, n(A)+n(B)-n(A∩B)=
(A) n(A∪B) (B) n(A)-n(B) (C) Φ (D) None of these
As n(A∪B) = n(A)+n(B)-n(A∩B), the option (A) is correct.
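The cardinality identities of Ex 8 and Ex 9 can be sanity-checked with Python's built-in `set` type (the sample sets are arbitrary):

```python
# Inclusion-exclusion: n(A ∪ B) = n(A) + n(B) - n(A ∩ B).
A = {1, 2, 3, 4}
B = {4, 5, 6}
assert len(A | B) == len(A) + len(B) - len(A & B)

# Disjoint case (Ex 8): the intersection is empty, so the sizes just add.
C = {7, 8}
assert (A & C) == set()
assert len(A | C) == len(A) + len(C)
```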
Ex 10: The dual of A∪U = U is-
(A) A∪U=U (B) A∪Φ=Φ (C) A∪Φ=A (D) A∩Φ=Φ
The dual of A∪U = U is A∩Φ=Φ.
∴ option (D) is correct.
Ex 14: State which of the following is the set of factors of the number 12-
(A) {2, 3, 4, 6} (B) {2, 3, 4, 6, 12}
(C) {2, 3, 4, 8, 6} (D) {1, 2, 3, 4, 6, 12} | 2022-05-19 08:43:45 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8606395125389099, "perplexity": 6641.638937168394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662526009.35/warc/CC-MAIN-20220519074217-20220519104217-00210.warc.gz"} |
https://socratic.org/questions/mass-of-candle-0-72-g-before-burning-what-is-the-mass-of-the-products-of-this-re#418063 | # Mass of candle 0.72 g before burning. What is the mass of the products of this reaction? Would that be the same thing as weighing the candle again (the mass is 0.27 g after burning)?
May 5, 2017
Refer to explanation
#### Explanation:
Reweighing the candle before and after burning just shows how much fuel was burnt.
Typical candle wax is an alkane such as hentriacontane, so write a chemical formula for the combustion of this alkane.
${C}_{31} {H}_{64} + 47 {O}_{2} \to 31 C {O}_{2} + 32 {H}_{2} O$
Work out the moles for this alkane, using its molar mass: $31 \cdot 12 + 64 \cdot 1 = 436$ g/mol.
$\frac{0.72 g}{436 g/mol}$= 0.00165 moles
The mole ratio of the alkane to $C {O}_{2}$ is 1:31 so multiply the moles of the alkane by 31 to get the number of moles for $C {O}_{2}$.
$0.00165 \cdot 31$ =0.0511 moles
Multiply the moles of $C {O}_{2}$ by $24 {\mathrm{dm}}^{3}$ and then by 1000 to get it in $c {m}^{3}$.
$1000 \left(0.0511 \cdot 24\right) = 1226.4 c {m}^{3}$
Hence for water, the mole ratio of the alkane to ${H}_{2} O$ is 1:32, so the moles of water are $0.00165 \cdot 32 = 0.0528$. Multiplying by water's formula mass, 18:
$0.0528 \cdot 18 = 0.95 g$ of water.
(I may have overcomplicated it a bit) | 2022-05-25 01:19:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.804372251033783, "perplexity": 2368.7021114851736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577757.82/warc/CC-MAIN-20220524233716-20220525023716-00140.warc.gz"} |
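The arithmetic above can be replayed in a few lines of Python; the 24 dm³/mol molar volume and integer atomic masses are the same assumptions the answer uses, and keeping unrounded intermediates shifts the volume slightly (≈1229 cm³ instead of the rounded 1226 cm³):

```python
# Combustion of hentriacontane: C31H64 + 47 O2 -> 31 CO2 + 32 H2O.
M_ALKANE = 31 * 12 + 64 * 1          # 436 g/mol, integer atomic masses
MOLAR_VOLUME_DM3 = 24.0              # assumed molar gas volume, dm^3/mol

moles_alkane = 0.72 / M_ALKANE       # ~0.00165 mol of wax burnt
moles_co2 = 31 * moles_alkane        # 1 : 31 mole ratio
co2_volume_cm3 = moles_co2 * MOLAR_VOLUME_DM3 * 1000

moles_water = 32 * moles_alkane      # 1 : 32 mole ratio
water_mass_g = moles_water * 18.0    # formula mass of H2O

print(round(co2_volume_cm3, 1), round(water_mass_g, 2))
```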
http://www.exambeat.in/aptitude/clock/723/ | # Aptitude:: Clock
### Exercise
"
#### A watch which gains uniformly is 2 minutes low at noon on Monday and is 4 min. 48 sec fast at 2 p.m. on the following Monday. When was it correct?
A. 2 p.m. on Tuesday B. 2 p.m. on Wednesday C. 3 p.m. on Thursday D. 1 p.m. on Friday
#### How many times in a day, the hands of a clock are straight?
A. 22 B. 24 C. 44 D. 48
#### How many times do the hands of a clock coincide in a day?
A. 20 B. 21 C. 22 D. 24
#### At what time, in minutes, between 3 o'clock and 4 o'clock, both the needles will coincide each other?
A. $5\tfrac{1}{11}$ B. $12\tfrac{4}{11}$ C. $13\tfrac{4}{11}$ D. $16\tfrac{4}{11}$
#### At what time between 9 and 10 o'clock will the hands of a watch be together?
A. 45 min. past 9 B. 50 min. past 9 C. D. $48\tfrac{2}{11}$ min. past 9
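Several of these exercises follow from one fact: the minute hand gains 5.5° per minute on the hour hand, so the hands coincide every 720/11 minutes. A small exact-arithmetic check in Python:

```python
from fractions import Fraction

GAIN = Fraction(11, 2)                 # relative speed: 6 - 0.5 deg/min
interval = Fraction(360) / GAIN        # 720/11 minutes between coincidences

# Coincidences in 24 hours: 1440 / (720/11) = 22.
meetings_per_day = Fraction(24 * 60) / interval
assert meetings_per_day == 22

# Between 9 and 10 o'clock the hands start 270 degrees apart, so they meet
# 270 / (11/2) = 540/11 = 49 1/11 minutes past 9.
past_nine = Fraction(270) / GAIN
assert past_nine == Fraction(540, 11)
```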
Submit* | 2018-07-16 14:07:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4972519278526306, "perplexity": 1797.5058230023687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589350.19/warc/CC-MAIN-20180716135037-20180716155037-00584.warc.gz"} |
https://socratic.org/questions/how-do-you-simplify-2y-3-3xy-3-3x-2y-4-and-write-it-using-only-positive-exponent | # How do you simplify (2y^3*3xy^3)/(3x^2y^4) and write it using only positive exponents?
May 4, 2017
$\frac{2 {y}^{3} \cdot 3 x {y}^{3}}{3 {x}^{2} {y}^{4}} = \textcolor{b l u e}{\frac{2 {y}^{2}}{x}}$
#### Explanation:
Simplify:
$\frac{2 {y}^{3} \cdot 3 x {y}^{3}}{3 {x}^{2} {y}^{4}}$
Gather like terms.
$\frac{2 \cdot 3 x {y}^{3} {y}^{3}}{3 {x}^{2} {y}^{4}}$
Divide whole numbers.
$\frac{6 x {y}^{3} {y}^{3}}{3 {x}^{2} {y}^{4}}$
Simplify.
$\frac{2 x {y}^{3} {y}^{3}}{{x}^{2} {y}^{4}}$
Apply the product rule of exponents: ${a}^{m} \cdot {a}^{n} = {a}^{m + n}$
$\frac{2 x {y}^{3 + 3}}{{x}^{2} {y}^{4}}$
Simplify.
$\frac{2 x {y}^{6}}{{x}^{2} {y}^{4}}$
Apply the quotient rule of exponents: ${a}^{m} / {a}^{n} = {a}^{m - n}$. Recall that a variable without an exponent is understood to be raised to the 1st power.
$2 {x}^{1 - 2} {y}^{6 - 4}$
Simplify.
$2 {x}^{- 1} {y}^{2}$
Apply the negative exponent rule: ${a}^{- m} = \frac{1}{a} ^ m$.
$\frac{2 {y}^{2}}{x}$ | 2019-10-17 02:38:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 13, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9349787831306458, "perplexity": 2917.230292605284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672548.33/warc/CC-MAIN-20191017022259-20191017045759-00003.warc.gz"} |
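A numerical spot-check of the simplification, using exact rational arithmetic so no rounding can hide a mistake (the sample points are arbitrary nonzero values):

```python
from fractions import Fraction

def original(x, y):
    return (2 * y**3 * 3 * x * y**3) / (3 * x**2 * y**4)

def simplified(x, y):
    return 2 * y**2 / x

for x, y in [(Fraction(2), Fraction(3)),
             (Fraction(-1), Fraction(5)),
             (Fraction(7, 2), Fraction(1, 3))]:
    assert original(x, y) == simplified(x, y)

print(simplified(Fraction(2), Fraction(3)))   # 9
```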
http://quantumfinancier.wordpress.com/2010/09/14/basic-introduction-to-garch-and-egarch-part-2/ | # Basic Introduction to GARCH and EGARCH (part 2)
As promised in the last post, we will look at a popular implementation of the GARCH(1,1) model: the value-at-risk. I chose this implementation because it is used quite often in academic literature and for its educational purpose. The value-at-risk, also abbreviated VaR, is a measure of the risk for a portfolio. To recap, the 1 percent value-at-risk is defined as the number of dollars that one can statistically be 99 percent certain exceeds any losses for the next day, or alternatively the loss suffered 1 percent of the time for the portfolio. A quick word of caution: it is only yet another risk measure for you to take into consideration when making investment decisions.
For this example, let us revisit the good ol’ days (not that good) of the recent crisis and test the concept using GARCH and VaR step by step. We will look at the VaR of a $1,000,000 portfolio of 70% stocks (SPY) and 30% bonds (AGG) for the sample period ranging from AGG’s inception 2003-09-27 to the day before the Black week 2008-10-05. This date is arbitrarily chosen by yours truly since I know it was a volatile period of great fear in the market. Then we will look at the VaR’s performance using the constructed GARCH model in the midst of the crisis for our out-of-sample period from 2008-10-06 to the end of 2009.

First we construct the portfolio; see below the numbers for each individual component and the portfolio in the last column. From the standard deviation in the table, we see that SPY is far more volatile than AGG. Also note the very fat tail of SPY (normal value is 3). Finally, the negative skewness indicates that the left tail (negative returns) is longer, translating into more extreme losses.

Next, we look for the presence of an ARCH effect. We fit an ARCH model of order 1 to our sample of portfolio returns, then we compute the squared residuals. From the 15 lags autocorrelation values, we see that there seems to be a significant autocorrelation effect in our squared residuals (the autocorrelations are, but for the first one, positive and fairly high). But disciplined investors that we are, we don’t always believe the first numbers we get. Thus, we will test the significance of the ARCH effect using a Ljung-Box test, resulting in a $\chi^2$ statistic of 179.4636, significant at the .05 confidence level. We are now statistically confident in the presence of an ARCH effect in our data.

At this point we are ready to fit our GARCH(1,1) model. Once we are done, we get the following coefficients: $\omega$ = 4.604e-06, $\alpha$ = 3.090e-01 and finally $\beta$ = 6.485e-01. Now that we have the model, we can forecast our standard deviation (volatility).
After this step is completed, we want to find the 1 percent quantile for our VaR. Our volatility forecast is 0.003995719. To find our VaR we have a choice on the distribution assumption. We can assume it is normally distributed and multiply this by 2.327, because 1 percent of a normal random variable lies 2.327 standard deviations below the mean. Now I don’t like that, since I usually prefer to steer clear of the normality assumption when dealing with financial data. I would rather use the empirical distribution of the error observed in my model: simply standardize the model residuals and observe their empirical distribution to find the 1 percent quantile; we obtain a result of 2.619797. Using this data, we can now estimate our VaR. We simply multiply our volatility forecast (0.003995719), the 1 percent quantile of the standardized residuals (2.619797), and the portfolio capital. We obtain a VaR of $10,467.97; compare it to the $9,298.04 obtained when assuming a normal distribution. Now we could rinse and repeat every day, predicting for tomorrow. Just to complete the analysis, take a look at the graph below where the daily loss is plotted with the daily VaR. Note that the axes are reversed (positive numbers are a loss, negative a gain).
Now following the same methodology, we use the model (not recalculated, only updated with the new data) on our out-of-sample data, starting from Black week on Oct. 6, until the end of 2009. The same principle applies for the axes.
Note how the value-at-risk for the portfolio is above the suffered loss for almost all data points for the period ($\approx$ 97%). It looks as if the VaR measure was mostly conservative for the period. There you have it; I hope that this step by step application post was useful and clear and that it sheds a bit of light on an at times obscure topic. Stay tuned for the last post in this series on EGARCH.
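For readers who want to experiment, here is a minimal Python sketch of the GARCH(1,1) variance recursion and the resulting one-day VaR. The coefficients are the ones reported above, but the returns are synthetic pseudo-random numbers, so the output is illustrative only, not a reproduction of the post's R results.

```python
import random

# GARCH(1,1): sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2.
# Coefficients as fitted in the post; everything else here is made up.
OMEGA, ALPHA, BETA = 4.604e-06, 0.3090, 0.6485

def one_step_variance_forecast(returns, var0):
    """Iterate the GARCH(1,1) recursion and return tomorrow's variance."""
    var = var0
    for r in returns:
        var = OMEGA + ALPHA * r * r + BETA * var
    return var

random.seed(42)
rets = [random.gauss(0.0, 0.01) for _ in range(500)]   # synthetic daily returns
sigma = one_step_variance_forecast(rets, var0=0.01**2) ** 0.5

# 1 percent VaR under the normal assumption (2.327 standard deviations).
capital = 1_000_000
var_99 = 2.327 * sigma * capital
print(round(sigma, 5), round(var_99, 2))
```

Swapping the normal multiplier 2.327 for an empirical residual quantile, as the post prefers, only changes the last two lines.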
QF
## 3 responses to “Basic Introduction to GARCH and EGARCH (part 2)”
1. Mark says:
Hi QF,
I tried to follow along but stumbled on your first results (the components/portfolio stats). For instance I get different values for the mean:
> getSymbols(c(“SPY”, “AGG”), from=”2003-09-27″, to=”2008-10-05″)
[1] “SPY” “AGG”
> mean(dailyReturn(Cl(SPY), type=”log”))
[1] 7.05216e-05
I tried with different returns(arithmetic) and different periods (including out-of-sample) but I don’t get at your numbers.
How do you calculate these?
Thank you,
-Mark-
• Hi Mark,
I think we get the same results, but mine are rounded. Try rounding yours to four decimal places in percentage and see. Let me know after if it still doesn’t work.
Cheers,
QF
2. lulu says:
Hi,
I really enjoyed your post about GARCH implementation. Is it possible for you to show us your R code? Hope I’m not asking too much.
Thank you, | 2014-10-25 19:40:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5270200967788696, "perplexity": 1391.1062094900276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119649807.55/warc/CC-MAIN-20141024030049-00176-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://www.ifae.es/cat/seminars-events/item/503-seminar-bsm-primary-effects.html | Seminars
## SEMINAR: BSM Primary effects
May 8 2014, 2:30pm - Seminar room
Speaker: Sandeepan Gupta
Abstract: While the predictions of the SM Lagrangian have by now all been tested, the experimental verification of the predictions of its dimension 6 extension is only beginning, with the arrival of Higgs data. We first identify the BSM primary effects, the set of physical quantities that gives us the best way to constrain new physics.
To generate these deformations at the dimension 6 level, other deformations are also generated in a correlated way because of the accidental symmetries at the dimension 6 level. We derive these correlations and thus the predictions of the dimension 6 Lagrangian. We show that there are 8 primary effects related to Higgs physics and 2 related to the $\kappa_\gamma$ and $\lambda_\gamma$ triple gauge couplings (TGC). Barring four fermion deformations, the rest of the important primary effects are related to breaking of gauge coupling universality in the couplings of the Z-boson to fermions, Higgs and the W boson, and these can be constrained by the $g^Z_1$ TGC and precision measurements at LEP. All other deformations are correlated to these BSM primary effects. We also discuss the RG evolution of the couplings related to these BSM primaries and the RG-induced constraints that can be derived when a weakly constrai
"s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156901.47/warc/CC-MAIN-20180921072647-20180921093047-00464.warc.gz"} |
https://solvedlib.com/if-a-game-board-is-a-quadrilateral-with-diagonals,4902 | # If a game board is a quadrilateral with diagonals that bisect each other, the game board will be a square
###### Question:
If a game board is a quadrilateral with diagonals that bisect each other, the game board will be
a square
a parallelogram
a rectangle
of no guaranteed classification
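The property can be checked with vectors: if the diagonals of quadrilateral ABCD bisect each other, their midpoints coincide, which forces opposite sides to be equal and parallel — a parallelogram, but not necessarily a rectangle or a square. A quick numeric sketch (the coordinates are my own illustrative example):

```python
# Quadrilateral with vertices in order A, B, C, D; diagonals are AC and BD.
A, B, C, D = (0.0, 0.0), (4.0, 1.0), (5.0, 4.0), (1.0, 3.0)

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def vec(p, q):
    return (q[0] - p[0], q[1] - p[1])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Diagonals bisect each other: midpoints of AC and BD coincide.
assert midpoint(A, C) == midpoint(B, D)

# Consequence: AB is equal and parallel to DC -> a parallelogram.
assert vec(A, B) == vec(D, C)

# But adjacent sides need not be perpendicular, so no rectangle/square is guaranteed.
print(dot(vec(A, B), vec(A, D)))  # nonzero here: the angle at A is not 90 degrees
```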
| 2022-09-25 17:29:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47886842489242554, "perplexity": 2797.961236506715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334591.19/warc/CC-MAIN-20220925162915-20220925192915-00747.warc.gz"} |
http://mathhelpforum.com/discrete-math/98995-discrete-math-cardinals.html | # Math Help - Discrete Math - Cardinals
1. ## Discrete Math - Cardinals
Let 'A' be a set of open lines (intervals) on the real axis, such that every two lines in this set are disjoint. Prove that |A| is less than or equal to 'Aleph 0' (the cardinal number of the natural numbers - |N| = 'Aleph 0')
Clue: Think of a finite line - how many lines belonging to A can have length greater than or equal to (1/n) inside this line? (the length of the line (a,b) is a-b *)
*I believe this is a mistake, and they should have written b-a (considering that b>a)
*In case my way of explaining it in English was not quite clear - an open line can be, for instance (0,3) meaning that 0<x<3 for every x in the line. Every two lines in this group are disjoint, means that they have no common elements.
Thank you very much!
2. This is really hard to answer not knowing what you have to work with.
This is a really simple proof if you know two facts about the real numbers: the rational numbers form a countable set, and the rational numbers are dense in the reals.
Here is the proof. In each open interval there is a rational number.
Because these open intervals are pairwise disjoint, the collection is at most countable.
3. Thank you very much Plato, but I'm afraid I'll have to find a full proof for that. We also haven't learned 'density', so I surely can't use it.
There's something else I didn't understand - you said that there is a rational number in each interval, and that makes it at most countable, but what if I find an irrational number in one of the intervals?
Oh, I think I understand you - it doesn't matter if there is an irrational number - you meant that IF we can find a rational number K in each interval, then we'll name the interval '1' or '2', . . ., and since K1 and K2 are located in different intervals, K1 is surely not equal to K2. That means that if K1, K2, . . . are rational numbers, then the collection is at most countable.
Aha! very nice proof! I really like it... but it's a little weird for me, because in the class we always proved such things with functions...
4. Okay, now I completely understood it !
I am also able to show this with a function (the more common and 'formal' way where I study) - I say that ri is the smallest rational number between ai and bi,.
Then, I create a function from the group A={(ai,bi) | iEI} (and another phrase to show that every two sets in this group are different)
I create a function from A to Q, like this:
g((ai,bi))=ri .
Then, all I need to show is that if
g(k)=g(j) then k=j, which is easy, because there can't be two different ri, rj that are equal while i and j aren't.
THANK YOU SO MUCH!!!
I am also able to show this with a function (the more common and 'formal' way where I study) - I say that ri is the smallest rational number between ai and bi,.
That is a big mistake.
There is no smallest rational in $(a,b)$.
But you can use a choice function to pick exactly one.
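A fully explicit picking rule sidesteps both problems: for each interval, scan denominators in increasing order and take the first rational that fits. This is my own illustration (the interval endpoints are made up), not code from the thread:

```python
from fractions import Fraction

def rational_in(a, b):
    """First rational in (a, b) found by scanning denominators 1, 2, 3, ... (assumes 0 <= a < b)."""
    q = 1
    while True:
        p = int(a * q) + 1  # smallest numerator with p/q > a (for a >= 0)
        if a < Fraction(p, q) < b:
            return Fraction(p, q)
        q += 1

# Pairwise disjoint open intervals (made-up endpoints).
intervals = [(0.0, 0.5), (0.5, 1.25), (2.0, 3.0)]
picks = [rational_in(a, b) for a, b in intervals]
print(picks)  # one rational per interval

# Disjointness forces the picks to be pairwise distinct: an injection into Q.
assert len(set(picks)) == len(picks)
```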
6. Hi Adam and Plato, I am new here, just wanted to say Hi and I hope we can avoid axiom of choice if we define our function for example like this: "for every (a,b) from A, decimal expansion of number r is the shortest initial segment of the decimal expansion of number (a+b)/2, such that a<r".
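That rule is computable; here is a sketch of it in exact arithmetic (my own rendering, assuming 0 <= a < b): truncate the decimal expansion of (a+b)/2 at more and more places until the truncation r exceeds a; since r <= (a+b)/2 < b, r then lies in (a, b):

```python
from fractions import Fraction

def shortest_decimal_in(a, b):
    """Shortest decimal truncation r of (a+b)/2 with a < r (assumes 0 <= a < b)."""
    m = (Fraction(a) + Fraction(b)) / 2
    k = 0
    while True:
        r = Fraction(int(m * 10**k), 10**k)  # truncate m to k decimal places
        if a < r:
            return r  # r <= m < b, so r lies in (a, b)
        k += 1

print(shortest_decimal_in(Fraction(1, 3), Fraction(1, 2)))  # -> 2/5
```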
7. Thanks !!
8. Originally Posted by Liwuinan
Hi Adam and Plato, I am new here, just wanted to say Hi and I hope we can avoid axiom of choice if we define our function for example like this: "for every (a,b) from A, decimal expansion of number r is the shortest initial segment of the decimal expansion of number (a+b)/2, such that a<r".
I hope you don't think that $\frac{a+b}{2}$ has to be rational.
Anyway, we can avoid choice (what is wrong with choice?).
If the collection $\{(a,b)\in A\}$ is not countable then the set of rationals is not countable as first said. You don't need a function.
9. Originally Posted by Plato
You don't need a function.
Yes, I too think that your argument (without function) is clear enough.
Originally Posted by Plato
I hope you don't think that $\frac{a+b}{2}$ has to be rational.
I never said so. | 2015-03-05 18:57:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8100208044052124, "perplexity": 495.63931788360577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936464809.62/warc/CC-MAIN-20150226074104-00030-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://en.wiktionary.org/wiki/Boolean_ring | # Boolean ring
## English
### Noun
Boolean ring (plural Boolean rings)
1. (algebra) A ring whose multiplicative operation is idempotent.
Let $\mathbb{Z}$ be the ring of integers and let $2\mathbb{Z}$ be its ideal of even integers. Then the quotient ring $\mathbb{Z} / 2\mathbb{Z}$ is a Boolean ring.
By Stone's Representation Theorem, the elements of a Boolean ring can be modeled as sets, with the additive operation corresponding to symmetric difference, and the multiplicative operation to set intersection. | 2015-09-04 04:23:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8710741400718689, "perplexity": 633.7633235602299}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645335509.77/warc/CC-MAIN-20150827031535-00084-ip-10-171-96-226.ec2.internal.warc.gz"} |
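Stone's picture is easy to verify on a small example: take all subsets of a three-element set, with symmetric difference as addition and intersection as multiplication (a quick sketch, not from the entry):

```python
from itertools import chain, combinations

U = frozenset({1, 2, 3})
# All subsets of U: the Boolean ring P(U), with 2**3 = 8 elements.
elements = [frozenset(s) for s in chain.from_iterable(
    combinations(U, r) for r in range(len(U) + 1))]

def add(x, y):
    return x ^ y  # symmetric difference plays the role of +

def mul(x, y):
    return x & y  # intersection plays the role of *

for x in elements:
    assert mul(x, x) == x            # multiplicative idempotence
    assert add(x, x) == frozenset()  # hence x + x = 0 (characteristic 2)
    for y in elements:
        assert mul(x, y) == mul(y, x)  # multiplication is commutative (true in any Boolean ring)
```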
https://repository.upenn.edu/edissertations/3867/ | ## Publicly Accessible Penn Dissertations
2020
Dissertation
#### Degree Name
Doctor of Philosophy (PhD)
Materials Science & Engineering
The grain boundary (GB) mobility relates the GB velocity to the driving force. While the GB velocity is normally associated with motion of the GB normal to the GB plane, there is often a tangential motion of one grain with respect to the other across a GB; i.e., the GB velocity is a vector. Grain boundary motion can be driven by a chemical potential that jumps across a GB or by shear applied parallel to the GB plane; the driving force has three components. Hence, the GB mobility must be a tensor (the off-diagonal components indicate shear coupling). Recent molecular dynamics (MD) and experimental studies show that the GB mobility may abruptly jump, smoothly increase, decrease, remain constant or show multiple peaks with increasing temperature. Performing MD simulations on symmetric tilt GBs in copper, we demonstrate that all six components of the GB mobility tensor are non-zero (the mobility tensor is symmetric, as required by Onsager). We demonstrate that some of these mobility components increase with temperature while, surprisingly, others decrease. We develop a disconnection dynamics-based statistical model that suggests that GB mobilities follow an Arrhenius relation with respect to temperature $T$ below a critical temperature $T_\text{c}$ and decrease as $1/T$ above it. $T_\text{c}$ is related to the operative disconnection modes and their energetics. We implement this model in a kinetic Monte Carlo (kMC); the results capture all of these observed temperature dependencies and are shown to be in quantitative agreement with each other and direct MD simulations of GB migration for a set of specific GBs. We demonstrate that the abrupt change in GB mobility results from a Kosterlitz-Thouless (KT) topological phase transition. This phase transition corresponds to the screening of the long-range interactions between (and unbinding of) disconnections. This phase transition also leads to abrupt change in GB sliding and roughening. 
We analyze this KT transition through mean-field theory, renormalization group methods, and kMC simulation. Finally, we examine the impact of the generalization of the mobility and KT transition for grain growth and superplasticity. | 2021-12-04 01:36:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6420673131942749, "perplexity": 1406.822517965813}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362923.11/warc/CC-MAIN-20211204003045-20211204033045-00518.warc.gz"} |
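The described temperature dependence can be sketched as a toy model: Arrhenius growth below $T_\text{c}$ and a $1/T$ fall-off above it, matched continuously at $T_\text{c}$. All parameter values here are illustrative, not taken from the dissertation:

```python
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
Q = 0.2          # activation energy, eV (illustrative)
M0 = 1.0         # Arrhenius prefactor, arbitrary units (illustrative)
T_C = 800.0      # critical temperature, K (illustrative)

def mobility(T):
    """Toy GB mobility: Arrhenius below T_C, ~1/T above, continuous at T_C."""
    if T <= T_C:
        return M0 * math.exp(-Q / (K_B * T))
    c = T_C * M0 * math.exp(-Q / (K_B * T_C))  # match the value at T_C
    return c / T

# Mobility increases with temperature below T_C and decreases above it.
assert mobility(400) < mobility(600) < mobility(800)
assert mobility(900) < mobility(800)
```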
https://www.physicsforums.com/threads/topics-for-hunting-which-part-of-physics-is-the-most-accurate-description-of-the-nature.908031/ | # Studying Topics for hunting -- which part of physics is the most accurate description of the nature
1. Mar 17, 2017
### rahaverhma
In today's world, I want to know which part of physics is the most accurate description of nature. I know that in the coming time I may get interested in some other topic. But toward which topic should I set my foot forward?
2. Mar 17, 2017
### Choppy
Hi Rahaverhma - there might be a bit of a language barrier to people understanding your question. Could you expand on what you mean?
3. Mar 17, 2017
### rahaverhma
In today's world, I want to know which part of physics is the most accurate description of nature. I mean, for example: classical mechanics is a handsome description of mechanisms at the macroscopic level, but general relativity is a better description than that. So I want to move forward with the world, not just limit myself to the classical level. And GR was only an example - even if something is better than that, you can tell me about it.
But toward which topic should I set my foot forward?
4. Mar 18, 2017
### Choppy
Usually in physics people gravitate toward a specific sub-field because of the problems they are interested in solving and the opportunities in that sub-field rather than how accurately it models nature. All models break down at some point. Some are useful for solving particular types of problems. And sometimes worrying about accuracy sets you up for diminishing returns.
"Nature" is pretty broad. Perhaps you're enquiring about grand unified theories or the "theory of everything?"
One thing about learning classical models first is that most of the more complex models are built on these. You need to know classical mechanics to understand quantum mechanics, for example. And just because there are conditions under which classical mechanics breaks down doesn't mean that it isn't useful for solving even some very modern problems. @Dr. Courtney, for example, has written about research work that he's done in ballistics, which I would imagine draws quite heavily on classical mechanics.
5. Mar 18, 2017
### Dr. Courtney
Classical mechanics is THE tool for modern ballistics, with few exceptions. Interior ballistics uses lots of important results from thermodynamics and chemistry also.
Quantum mechanics is THE tool for most of atomic physics, either non-relativistic (easier to apply) or relativistic (harder to apply, sometimes needed for accuracy).
The fundamental tradeoff is between theoretical applicability and practical issues - like being able to actually make a prediction with the model before the sun goes cold.
6. Mar 18, 2017
### JoePhysics
A couple of comments. First of all, accuracy is only relevant if your measuring device is capable of distinguishing, say, $3.365854$ from $3.365857$, where one result is obtained with a classical theory and the other with a more modern extension. For example, imagine Mercury's precession were not $574.10\pm 0.65$ arc-seconds per century but something a lot smaller, which we would have had no realistic way of discerning, at least not until telescopic equipment technology improved dramatically. This is why in physics one speaks of domains of validity for specific theories.
Second, even in classical physics, there still remain some outstanding questions: hydrodynamic turbulence comes to mind, as does ball lightning. The Painlevé paradox of rigid body dynamics, a topic that rests squarely within the confines of classical mechanics, was only resolved at the end of last century!
It is humbling to think that we have managed to peer back into the past and see what the Universe was like a brief moment after the Big Bang, yet we still do not fully understand how water behaves as it leaves the faucet of our bathroom sink. | 2017-08-19 05:06:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3902345597743988, "perplexity": 844.1819570672186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105297.38/warc/CC-MAIN-20170819031734-20170819051734-00603.warc.gz"} |
https://sensebox.github.io/books-v2/edu/en/grundlagen/Bees.html | Bees
A Bee is a pluggable component that allows the senseBoxMCU to transmit or store data. You can choose between the WiFi Bee and the mSD Bee.
WiFi Bee
The WiFi Bee is the plug-in module that connects the senseBox to the Internet: your readings are transferred over the existing WLAN (WiFi) network. The WiFi Bee is based on the ATWINC1500 microchip from Atmel, which combines very low power consumption with long range.
Configure the WiFi Bee & Upload on the openSenseMap
Make sure you have the latest board support package installed because you need the correct software libraries. How to do that was explained to you in Step 2!
Declaration of the objects
First, create an instance of Bee and openSenseMap.
Bee* b = new Bee(); // instance of Bee
OpenSenseMap osem("senseBox ID",b); // instance of openSenseMap
float temp = 24.3; // Test value that we later upload to openSenseMap
Make sure that you replace the parameter "senseBox ID" with your own Box-ID!
Once we have done this, the Bee can be referenced throughout the program code by the short name b. In the setup() function, we now connect to our desired WiFi network and upload a first test value to the openSenseMap.
setup()
void setup(){
b->connectToWifi("SSID","PW"); // Connect to WiFi
delay(1000);
https://www.nature.com/articles/s41598-022-08935-1
# Census-independent population estimation using representation learning
## Abstract
Knowledge of population distribution is critical for building infrastructure, distributing resources, and monitoring the progress of sustainable development goals. Although censuses can provide this information, they are typically conducted every 10 years with some countries having forgone the process for several decades. Population can change in the intercensal period due to rapid migration, development, urbanisation, natural disasters, and conflicts. Census-independent population estimation approaches using alternative data sources, such as satellite imagery, have shown promise in providing frequent and reliable population estimates locally. Existing approaches, however, require significant human supervision, for example annotating buildings and accessing various public datasets, and therefore, are not easily reproducible. We explore recent representation learning approaches, and assess the transferability of representations to population estimation in Mozambique. Using representation learning reduces required human supervision, since features are extracted automatically, making the process of population estimation more sustainable and likely to be transferable to other regions or countries. We compare the resulting population estimates to existing population products from GRID3, Facebook (HRSL) and WorldPop. We observe that our approach matches the most accurate of these maps, and is interpretable in the sense that it recognises built-up areas to be an informative indicator of population.
## Introduction
Accurate population maps are critical for planning infrastructure and distributing resources, including public health interventions and disaster relief, as well as monitoring well-being e.g. through the fulfillment of sustainable development goals1. Censuses provide a ‘gold standard’ population map by surveying every individual household nationally. However, this is an extremely expensive endeavor, and therefore they are typically conducted every 10 years2. During the intercensal period, projections of the population are available at a coarser enumeration level that is estimated either using birth, death, and migration rates3,4, or through combining sample household surveys using small area estimation2. Estimating finer-resolution intercensal population, e.g., over a 100 m grid, has received significant interest in the last decade, and several population maps, e.g. WorldPop5, High-Resolution Settlement Layer (HRSL)6,7 and GRID38, have been made available publicly to aid humanitarian initiatives. These approaches primarily use satellite imagery as a predictor, and can be broadly categorized as census-dependent and census-independent based on their use of census as response variable9.
Census-independent population estimation approaches (section “Census-independent population estimation”) using microcensus (survey data) are gaining prominence since they can improve the spatial and temporal resolution of census-dependent approaches (section “Deep learning for intercensal population estimation”). However, finding census-independent methods that are sustainable and transferable, and informative data sources that are reliable and easy to procure remain active areas of research. Existing approaches rely heavily on hand-crafted features that often require manual curation, and the features used in modelling vary significantly between publications and countries where population is being estimated, making them less sustainable for repeated use, and less transferable to other regions and countries. For example, these approaches often use objects in satellite imagery, e.g., buildings and cars10, and distribution of services, e.g., school density11 and road networks9, as indicators of population. Detecting building footprints usually requires manual annotation and curation while information on road networks and school density can be incomplete, unreliable, and difficult to procure.
Recent advances in representation learning have demonstrated that features, or representations, automatically extracted from images through end-to-end training of a deep neural network can significantly improve the performance in many computer vision tasks in a sustainable manner by removing the need for hand-crafted features12. Additionally, transfer learning can leverage features learned from a vast amount of annotated data from a separate task to improve performance on tasks lacking sufficient annotated data13. Furthermore, explainable AI has provided meaningful insight from these, so called ‘black box’, models to explain the decisions made by them, enhancing their transparency14. Representation learning can vastly simplify the problem of estimating population from satellite imagery by removing the need for handcrafted features, manual data procurement, and human supervision, thus improving the sustainability and transferability of the process. Additionally, transfer learning removes the need for large scale training data, allowing fine-tuning on limited microcensus with minimal computational resources. Finally, these methods provide interpretation of model outcome, promoting trust around the estimated population among the end-users.
We assess the utility of representation learning in census-independent population estimation from very-high-resolution ($$\le$$ 5 m) satellite imagery using a retrospective microcensus in two districts of Mozambique. To the best of our knowledge, we are the first to explore the potential of such an approach, and in using both very-high (50 cm spatial) resolution satellite imagery and microcensus in this manner. We observe that the proposed approach is able to produce a reliable medium-resolution (100 m) population density map with minimal human supervision and computational resources. We find that this approach performs similarly to the more elaborate approach of using building footprints to estimate population, and outperforms techniques using only public datasets to estimate population in our ROIs (more details in Table 3 in section “Results”). It also completely avoids manual annotation of satellite images and manual access to public datasets, making it potentially more transferable to other countries using only very-high-resolution satellite imagery and gridded microcensus. Additionally, we observe that this approach learns to predict population in a reasonable manner by using built-up area as an indicator of population. We refer to this approach as Sustainable Census-Independent Population Estimation or SCIPE (see Fig. 1 for an illustration), with our core motivation being developing population estimation methods that are easy to use, computationally efficient, and that can produce frequent and explainable population maps with associated uncertainty values. This will help humanitarian organizations extrapolate local microcensus information to the regional level with ease, and provide more confidence in using the estimated population map in conjunction with existing ones.
## Background
The traditional method for mapping population is the use of census enumeration. The time and cost of such surveys means that most are conducted once a decade. National Statistics Offices (NSOs) can provide updated population counts regularly through registration of births and deaths. However, many NSOs are under-funded and poorly resourced, limiting the ability to provide regular, fine-spatial resolution population counts9,17,18. There have been several approaches in recent years to provide more frequent and higher-spatial resolution population estimations using blended datasets such as census, household surveys, earth observation (EO) satellite data and mobile phone records9. Further, there is an increasing interest in using EO data to estimate socioeconomic changes in time periods between census and surveys19,20,21. These approaches seek to identify relationships between socioeconomic conditions and metrics extracted from geospatial data such as particular land cover characteristics related with local livelihoods22. However, these approaches all rely on a priori knowledge of population location and counts. This information is not updated frequently enough using traditional statistical approaches17. In this section, we provide a detailed overview of the existing literature on census-independent population estimation (section “Census-independent population estimation” and Table 1) and the application of deep neural networks in intercensal census-dependent population estimation (section “Deep learning for intercensal population estimation” and Table 2).
### Census-independent population estimation
Census-dependent population estimation, also known as population disaggregation or top-down estimation, either uses census data to train a predictive model that can estimate population of a grid tile directly1, or to train a model to estimate a weighted surface that can be used to distribute coarse resolution projected data across a finer resolution grid23. Census-independent population estimation, also known as bottom-up estimation, instead relies on locally conducted microcensuses to learn a predictive model that can estimate population at non-surveyed grid tiles.
Weber et al.15 used very-high-resolution satellite imagery to estimate the number of under-5s in two states in northern Nigeria in three stages: first, by building a binary settlement layer at 8 m resolution using a support vector machine (SVM) with “various low-level contextual image features”; second, by classifying “blocks” constructed from OpenStreetMap data using “a combination of supervised image segmentation and manual correction of errors” into 8 residential types (6 urban, 1 rural and 1 non-residential); and finally, by modelling the population count of each residential type with separate log-normal distributions using microcensus. The predictions were validated against a separate survey from the same region and were found to be highly correlated with this data.
Engstrom et al.10 used LASSO regularized Poisson regression and Random Forest models to predict village level population in Sri Lanka. The authors used a variety of remote sensing indicators at various resolutions as predictors, both coarser-resolution publicly available ones such as night-time lights, elevation, slope, and tree cover, and finer-resolution proprietary ones such as built-up area metrics, car and shadow shapefiles, and land type classifications. The authors observed that publicly available data can explain a large amount of variation in population density for these regions, particularly in rural areas, and the addition of proprietary object footprints further improved performance. Their population estimates were highly correlated with census counts at the village level.
Hillson et al.16 explored the use of 30 m resolution Landsat 5 thematic mapper (TM) imagery to estimate population densities and counts for 20 neighborhoods in the city of Bo, Sierra Leone. The authors started with 379 candidate Landsat features generated manually, reduced these to 159 covariates through “trial-and-error” and removal of highly correlated (Pearson's $$\rho > 0.99$$) pairs, and finally learned an optimal regression model using only 6 of these covariates. These estimates were then validated through leave-one-out cross-validation on the districts surveyed. The approach estimated population density at the coarse neighborhood level with low relative error for most neighborhoods.
Leasure et al.11 used a hierarchical Bayesian model to estimate population at 100 m resolution grid cells nationally in Nigeria, and focused on “provid[ing] robust probabilistic estimates of uncertainty at any spatial scale”. The authors used the same settlement map as Weber et al.15 to remove unsettled grid cells prior to population density estimation, and used additional geospatial covariates, including school density, average household size, and WorldPop gridded population estimates. WorldPop population estimates were generated using a census-dependent approach, so the proposed method in some sense integrates information from census into otherwise census-independent population predictions. The predicted population estimates, however, were not highly correlated with the true population counts.
### Deep learning for intercensal population estimation
There are several recent approaches that apply deep learning methods to intercensal population estimation using free and readily available high-resolution satellite imagery as opposed to relatively expensive very-high resolution imagery, and census as opposed to microcensus, potentially due to the prohibitive cost of collecting sufficient microcensus for training a custom deep neural network from scratch. HRSL uses very-high resolution imagery to focus on building footprint identification using a feedforward neural network and weakly supervised learning, and redistributes the census proportionally to the fraction of built-up area7, but does not use census as the response variable.
Doupe et al.23 used an adapted VGG25 convolutional neural network (CNN) trained on a concatenation of low resolution Landsat-7 satellite images (7 channels) and night-time light images (1 channel). The VGG network was trained on observations generated from 18,421 population labeled enumeration areas from the 2002 Tanzanian census, and validated on observations generated from 7,150 labeled areas from the 2009 Kenyan census. The authors proposed using the output of the VGG network as a weighted surface for population disaggregation from regional population totals. This approach significantly outperformed AsiaPop (a precursor to WorldPop) in RMSE, %RMSE, and MAE evaluation metrics.
Robinson et al.1 trained an adapted VGG25 CNN on Landsat-7 imagery from the year 2000 and US data from the year 2004, and validated it on Landsat imagery and data from the year 2010. The authors split the US into 15 regions, and trained a model for each with $$\sim$$ 800,000 training samples in total. Instead of predicting population count directly, the authors classified image patches into log-scale population bands, and determined the final population count by the network-output-weighted average of band centroids. Existing approaches for projecting data outperformed the final network when validated against the 2010 US census; however, the fitted model was interpretable, as evidenced by examples of images confidently assigned to a particular population band, with sparsely populated areas assigned to a lower population band and more urbanized areas assigned to progressively higher population bands.
Hu et al.24 generated population density maps at the village level in rural India using a custom VGG CNN based end-to-end learning. The authors used freely available high-resolution Landsat-8 imagery (30 m resolution, RGB channels only) and Sentinel-1 radar imagery (10 m resolution, converted to RGB) images of villages as predictor, and respective population from the 2011 Indian census as response. The training set included 350,000 villages and validation set included 150,000 villages across 32 states, and the resulting model outperformed two previous deep learning based approaches1,23. The authors observed that the approach performed better at a coarser district level resolution than a finer village level resolution.
Both census-dependent and census-independent approaches have their advantages and drawbacks. While census-dependent estimation is cheaper to perform using existing data, the results can be misleading if the projected intercensal population count is inaccurate, and due to the limited resolution of both data and publicly available satellite imagery, these approaches exclusively predict population at a coarser spatial resolution. Census-independent estimation uses microcensus, which can be collected more frequently and is available at a finer scale, and although this data can be relatively expensive to collect in large enough quantities, it provides ‘ground truth’ information at a finer scale which is not available for census-dependent approaches.
## Methods and data
In this section we discuss some recent advancements in the principles and tools for self-supervised learning, partly in the context of remote sensing (section “Representation and transfer learning”), provide details of SCIPE, and the datasets used, i.e., satellite imagery and microcensus.
### Representation and transfer learning
Representation Learning learns a vector representation of an image by transforming the image, for example using a deep neural network, such that the corresponding representation can be used for other tasks such as regression or classification using existing tools13. The learned representation can be used for transfer learning, i.e., using the transformation learned from a separate task, e.g., ImageNet classification, for a different one, e.g., population estimation12. Intuitively, this works because a pre-trained network, although designed for a separate task, can extract meaningful features that are informative for population estimation (see for example Fig. 2a).
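As a concrete illustration of what a "learned representation" is here, the sketch below (not the authors' code; pure NumPy with random placeholder feature maps) shows how a 2048-dimensional vector arises from global average pooling of a ResNet-50's final convolutional feature maps for a 224×224 input:

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse H x W x C conv feature maps to a C-dimensional vector
    by averaging each channel over its spatial grid."""
    return feature_maps.mean(axis=(0, 1))

# Stand-in for the final conv-stage output of a ResNet-50 on a 224x224 tile:
# a 7 x 7 spatial grid with 2048 channels (values are random placeholders).
rng = np.random.default_rng(0)
maps = rng.normal(size=(7, 7, 2048))

representation = global_average_pool(maps)
print(representation.shape)  # (2048,)
```

The resulting vector is what gets fed to a downstream regressor; the convolution weights that produce the feature maps are what pre-training provides.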
Supervised pre-training is a common approach for representation learning where a network is trained in a supervised learning context with a vast amount of annotated training data such as ImageNet. Once the network has been trained on this task, the output of the penultimate layer (or a different layer) of this pre-trained network can be used as a vector representation of the respective input image, and can be used as a predictor for further downstream analysis12. This approach works well in practice but its performance is inherently limited by the size of the dataset used for supervised learning which can be ‘small’26. To mitigate this issue, representation learning using unsupervised methods such as Tile2Vec27, and in particular, self-supervised approaches have become popular. Compared to supervised learning which maximizes the objective function of a pre-defined task such as classification, self-supervised learning generates pseudolabels from a pretext task in the absence of ground truth data, and different algorithms differ in the way they define the pretext tasks28.
In the context of population estimation, we focus on methods that either assume that the latent representations form clusters26 or make them invariant to certain classes of distortions29. Our intuition is that grid tiles can be grouped together based on population. This is a common practice in census-independent population estimation, i.e., to split regions into categories and model these categories separately, e.g., see refs. 11 and 23. We also observe this pattern in the representation space, where built-up area separates well from uninhabited regions (see Fig. 5b). Additionally, we expect the population count of a grid tile to remain unchanged even if, for example, it is rotated, converted to grayscale, or resized.
DeepCluster26 (and DeepClusterV2) jointly learns parameters of a deep neural network and the cluster assignments of its representations. It works by iteratively generating representations using an encoder model that transforms the image to a vector, clustering these representations using a k-means clustering algorithm, and using these cluster assignments as pseudo-labels to retrain the encoder. The intuition behind this being that CNNs, even without training, can find partially useful representations, and therefore clusters, due to their convolutional structure. This weak signal can be exploited to bootstrap a process of iterative improvements to the representation and cluster quality.
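One DeepCluster-style round can be sketched on toy data; the hand-rolled k-means below is illustrative only (the real method clusters actual encoder outputs and then re-trains the encoder on the pseudo-labels, iterating the two steps):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: returns cluster assignments used as pseudo-labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each representation to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # recompute centers (keep the old center if a cluster empties)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

# Toy 'representations': two well-separated blobs of 20 tiles each.
rng = np.random.default_rng(1)
reps = np.vstack([rng.normal(0, 0.1, (20, 8)), rng.normal(5, 0.1, (20, 8))])
pseudo_labels = kmeans(reps, k=2)
# In DeepCluster these pseudo-labels would now supervise encoder retraining.
```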
SwAV30 clusters the representation while simultaneously enforcing consistency between cluster assignments produced for different distortions. This involves applying augmentations to each image, yielding two different views of the image, which are then fed through the model, clustered, and compared to train the model. In particular, to enforce consistent cluster assignments between the views of the image, the cluster assignment of one view is predicted from the representation of the other view of the same image. SwAV applies horizontal flips, color distortion and Gaussian blur after randomly cropping and resizing the image. Cropping affects the population of a grid tile; however, since satellite imagery alone can estimate population only with some uncertainty, we assume that cropping changes the population within this level of uncertainty. Although cropping is used as a data augmentation step in the existing pre-trained network, we avoid cropping as data augmentation when fine-tuning the network to predict population in section “Models and training”.
Barlow Twins29 also works by applying augmentations to each image, yielding two views of the image, which are then fed through the model to produce two representations of the original image. To avoid trivial constant solutions of existing self-supervised learning approaches aiming to achieve invariance to distortions, Barlow Twins considers a redundancy-reduction approach, i.e., the network is optimized by maximizing the correlation along the main diagonal of the cross correlation matrix in the representation space to achieve invariance, while minimizing the correlation in all other positions to reduce redundancy of representations. Barlow Twins applies cropping, resizing, horizontal flipping, color jittering, converting to grayscale, Gaussian blurring, and solarization as distortions.
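The redundancy-reduction objective described above can be written down compactly. The NumPy sketch below is a simplified reading of the published loss (not the reference implementation): it standardises two batches of view representations, forms their cross-correlation matrix, and penalises deviation from the identity:

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins objective on two batches of representations
    (two augmented views of the same images): push the
    cross-correlation matrix towards the identity."""
    n = z1.shape[0]
    # standardise each feature over the batch
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = z1.T @ z2 / n                                    # D x D cross-correlation
    on_diag = ((1 - np.diag(c)) ** 2).sum()              # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy term
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(64, 16))
# Identical views give perfectly correlated features, so the
# invariance (on-diagonal) term vanishes.
print(barlow_twins_loss(z, z.copy()))
```

With two identical views the diagonal of the cross-correlation matrix is exactly 1, so only the small redundancy term remains; with unrelated views the diagonal collapses towards 0 and the loss grows.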
### Satellite imagery and microcensus
#### Satellite imagery
We used proprietary 50 cm resolution satellite imagery (Vivid 2.0 from Maxar, WorldView-2 satellite) covering $${7773}\, \hbox {km}^{2}$$ across two districts in Mozambique: Boane (BOA) and Magude (MGD). The Vivid 2.0 data product is intended as a base map for analysis. It has worldwide coverage and is updated annually with priority given to low-cloud-coverage images, and hence images can be from different time periods and different sensors in the Maxar constellation. The product is provided already mosaicked and colour-balanced, increasing the transferability of any methods/algorithms developed using this data. The data are provided in a three-band combination of red, green and blue. The NIR band is not provided as part of the Vivid 2.0 data product. The procured data was a mosaic of images, mostly from 2018 and 2019 (83% and 17% for BOA and 43% and 33% for MGD, remainder from 2011 to 2020).
#### Microcensus
We used microcensus from 2019 conducted by SpaceSUR and GroundWork in these two districts, funded by UNICEF. The survey was conducted at a household level (with respective GPS locations available), and households were exhaustively sampled over several primary sampling units (PSUs) where PSUs were defined using natural boundaries, such as roads, rivers, trails etc. A total of 3011 buildings were visited in the two districts with 1334 of the buildings being inhabited, housing 4901 people. Survey data was collected in accordance with experimental protocol which was approved by UNICEF. Ethical approval was obtained from the Ministry of Health in Mozambique. Oral consent was obtained from the household head, spouse or other adult household member. We aggregated the household survey data to a 100 m grid to generate population counts producing 474 labelled grid tiles.
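The aggregation of surveyed households to a 100 m grid can be sketched as simple spatial binning. The function below is an illustrative assumption, not the surveyors' pipeline: household coordinates are taken to be in a projected, metre-based CRS and the grid is axis-aligned:

```python
import numpy as np

def grid_population(x, y, household_size, cell=100.0):
    """Aggregate per-household counts onto a square grid.
    x, y: projected coordinates in metres; returns {(col, row): population}."""
    cols = np.floor(np.asarray(x) / cell).astype(int)
    rows = np.floor(np.asarray(y) / cell).astype(int)
    tiles = {}
    for c, r, n in zip(cols, rows, household_size):
        tiles[(c, r)] = tiles.get((c, r), 0) + n
    return tiles

# Three households: two in the same 100 m tile, one in the next tile east.
tiles = grid_population(x=[10.0, 95.0, 110.0], y=[20.0, 80.0, 20.0],
                        household_size=[4, 3, 5])
print(tiles)  # {(0, 0): 7, (1, 0): 5}
```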
#### Non-representative tiles
Since the imagery and microcensus were not perfectly aligned temporally, and the PSUs had natural boundaries, many tiles contained either unsurveyed buildings or surveyed buildings absent in the imagery. Thus, the dataset contained both developed tiles (i.e. with many buildings) labeled as low population, or undeveloped tiles labeled as high population. Although such ‘outlier’ tiles can be addressed with robust training, they cannot be used for validation. We, therefore, manually examined each grid tile by comparing the GPS location of surveyed buildings with those appearing in the imagery, and excluded those with a mismatch, leaving 199 curated tiles (CT).
#### Zero-population tiles
Since the microcensus was conducted in settled areas, we had no labels for uninhabited tiles. Although this does not pose a problem when comparing the performance of different models on the available microcensus (Table 2), the models do not learn to predict zero population when applied to an entire district, which will include many uninhabited areas. To resolve this, we identified 75 random tiles (50 from BOA, 25 from MGD) with zero population (ZT) guided by HRSL, i.e., from regions where HRSL showed zero population. We selected more ZTs from BOA to improve regional population estimates (see Fig. 3b). Thus, we had 274 grid tiles in total.
### Models and training
We use a ResNet-5031 CNN architecture to estimate population from grid tiles. The model architecture is shown in Fig. 2b with $$224\times 224\times 3$$ dimensional input and 49 convolutional layers followed by a Global Average Pooling layer which results in a 2048 dimensional latent representation. We used the pre-trained ResNet-50 models trained on ImageNet using methods described in section “Representation and transfer learning” after resizing the grid tiles of size $$200\times 200\times 3$$ (100 m RGB) to $$224\times 224\times 3$$, and used these (representation, population) pairs to train a prediction model using Random Forest. The hyperparameters of the model were chosen using a grid search over num_estimators $$\in \{100,200,\ldots ,500\}$$, min_samples_split $$\in \{2,5\}$$ and min_samples_leaf $$\in \{1,2\}$$. A linear regression head can also be trained to predict population in an end-to-end manner. This yields several advantages: rapid inference on GPUs, a simple pipeline, and a simple method for determining uncertainty. However, we observed that the Random Forest model outperformed the linear regression head.
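A hedged scikit-learn sketch of the Random-Forest-on-representations step follows, using synthetic (representation, population) pairs and a reduced version of the stated grid for brevity; note that scikit-learn names the tree-count hyperparameter n_estimators:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# Stand-ins for (representation, population) pairs: 80 tiles, 32-dim features.
X = rng.normal(size=(80, 32))
y = np.abs(X[:, 0] * 10 + rng.normal(size=80))  # toy population counts

# Reduced version of the paper's grid (full grid: n_estimators 100..500).
grid = {"n_estimators": [100, 300],
        "min_samples_split": [2, 5],
        "min_samples_leaf": [1, 2]}
search = GridSearchCV(RandomForestRegressor(random_state=0), grid, cv=3)
search.fit(X, y)
preds = search.predict(X)
```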
#### Pre-trained model
We used pre-trained ResNet50 models described in section “Representation and transfer learning” to extract representations, and also fine-tuned these models with microcensus.
#### Fine-tuning
We fine-tuned the pretrained models using a combination of curated and zero grid tiles after attaching a linear regression head following the global average pooling layer and minimizing the $$\ell _2$$ loss between observed and predicted population. Given the labelled grid tiles (the number of grid tiles varies depending on the experimental set-up), we randomly split them into training and validation sets (80–20%). Due to the limited number of tiles in the dataset, we apply random dihedral transformations (i.e. reflections and rotations) to tiles to augment the training set, avoiding transformations that could affect the validity of the population count, e.g. crops that could remove buildings. We use the Adam optimizer to minimize the loss function, which takes about 1 minute with a batch size of 32 on a single Nvidia GTX 1070 with 8GB of VRAM. During training, first, the network was frozen (i.e., the weights were kept fixed) and only the regression head was trained for 5 epochs with a learning rate of $$2\times 10^{-3}$$; second, the entire network was trained using a discriminative learning rate32, where the learning rate is large at the top of the network and is reduced in the earlier layers, avoiding large changes to the earlier layers of the model, which typically extract more general features, and focusing training on the domain-specific later layers. The base learning rate at the top of the network was $$1\times 10^{-3}$$, and it was decreased in the preceding stages to a minimum of $$1\times 10^{-5}$$. We used early stopping to halt training when validation loss plateaued (i.e., no improvement for 2 or more epochs) to avoid overfitting.
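The dihedral augmentation mentioned above is easy to make concrete: the sketch below (NumPy, with a small stand-in tile rather than a 224×224 image) enumerates all 8 rotation/reflection combinations, none of which can add or remove buildings:

```python
import numpy as np

def dihedral_transforms(tile):
    """All 8 dihedral transforms (4 rotations x optional horizontal flip)
    of a tile. Unlike cropping, these cannot add or remove buildings,
    so the population label stays valid."""
    views = []
    for k in range(4):
        rotated = np.rot90(tile, k)
        views.append(rotated)
        views.append(np.fliplr(rotated))
    return views

tile = np.arange(16).reshape(4, 4)  # asymmetric stand-in for an image tile
views = dihedral_transforms(tile)
unique = {v.tobytes() for v in views}
print(len(views), len(unique))  # 8 8
```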
### Evaluation metrics and cross-validation
#### Evaluation metrics
We compare the different methods against several evaluation metrics, i.e., R-squared, median absolute error (MeAE), median absolute percentage error (MeAPE), and aggregated percentage error (AggPE) (to capture average error at regional levels characterised by A), as follows,
$$
\begin{aligned}
R^2 &= 1 - \frac{\sum _i (y_i - \hat{y}_i)^2}{\sum _i (y_i - \bar{y})^2} & {\textsc {MeAE}} &= \operatorname*{median}_i \, |y_i - \hat{y}_i| \\
{\textsc {MeAPE}} &= \operatorname*{median}_i \, \frac{|y_i - \hat{y}_i|}{y_i} & {\textsc {AggPE}} &= \operatorname*{median}_A \, \frac{\left| \sum _{i \in A} y_i - \sum _{i \in A} \hat{y}_i \right|}{\sum _{i \in A} y_i}
\end{aligned}
$$
These evaluation metrics capture different aspects of the prediction, and each has different significance. For example, $$R^2$$ may be dominated by large population counts while MeAPE may be dominated by small population counts.
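These metrics are straightforward to implement; a NumPy sketch with a small worked example (toy values, not the paper's data) follows:

```python
import numpy as np

def r2(y, yhat):
    """Coefficient of determination."""
    return 1 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

def meae(y, yhat):
    """Median absolute error."""
    return np.median(np.abs(y - yhat))

def meape(y, yhat):
    """Median absolute percentage error."""
    return np.median(np.abs(y - yhat) / y)

def aggpe(y, yhat, regions):
    """Median over regions of the percentage error of regional totals."""
    errs = [abs(y[idx].sum() - yhat[idx].sum()) / y[idx].sum()
            for idx in regions.values()]
    return np.median(errs)

y = np.array([1.0, 2.0, 3.0, 4.0])
yhat = np.array([1.0, 2.0, 3.0, 6.0])
regions = {"BOA": [0, 1], "MGD": [2, 3]}
print(r2(y, yhat), meae(y, yhat), meape(y, yhat), aggpe(y, yhat, regions))
```

On this toy example only one tile is mispredicted, so both median-based per-tile metrics are 0 even though $$R^2$$ and AggPE register the error, illustrating how the metrics emphasise different aspects of the prediction.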
#### Null model
As ‘null model’ we predict population as the mean of the training set irrespective of feature values. We used this as an initial baseline to ensure any perceived performance when transferring features from ImageNet is not trivial.
#### Baseline
To properly assess the performance of automatic feature extraction, we compared to results when using hand-crafted features and public datasets to predict population, as is more common for census-independent population estimation. We took a variety of public features (Landsat imagery, land cover classification, OSM road data, night-time lights), along with building footprints automatically extracted from each image tile using a U-Net model pre-trained on SpaceNet and fine-tuned with ‘dot-annotation’ from non-surveyed buildings, and used these features to train a Random Forest model.
#### Cross-validation
We compare the different approaches to population estimation using cross-validation. For each region, we partitioned the data into four subsets spatially, and formed validation folds by taking the union of these subsets across the two regions. We reported the evaluation metrics over pooled predictions from the four validation folds covering the entire microcensus. When fine-tuning, we trained one network for each fold separately (to avoid data leakage) resulting in four networks.
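The fold construction can be sketched as follows, assuming each region's tiles have already been partitioned spatially into four index lists (the region labels here are placeholders):

```python
def make_validation_folds(region_subsets, n_folds=4):
    """Validation folds as the union of the k-th spatial subset from
    every region; pooled together the folds cover the whole microcensus."""
    folds = []
    for k in range(n_folds):
        fold = []
        for subsets in region_subsets.values():
            fold.extend(subsets[k])
        folds.append(fold)
    return folds

folds = make_validation_folds({
    "region_A": [[0, 1], [2, 3], [4, 5], [6, 7]],
    "region_B": [[8, 9], [10, 11], [12, 13], [14, 15]],
})
```

One fine-tuned network is then trained per fold, holding that fold out, so no validation tile ever leaks into its own training set.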
## Results
In this section, we compare the performance of several self-supervised learning frameworks using cross-validation, apply the best performing model to predict a regional population map of Boane, and compare it against existing maps from GRID3, HRSL and WorldPop. All of these methods take a census-dependent approach to population estimation within our ROIs, and so cannot be regarded as ‘ground-truth’. We also show the interpretability of the framework using uncertainty quantification and activation maps.
### Model selection
Table 3 shows the cross-validation results for population estimation using Random Forest regression on representations extracted using a ResNet-50 model with curated tiles only. We observe that: (1) representations extracted using any pre-trained network outperformed estimation using publicly available features in all but the $${\textsc {MeAE}}$$ metric; (2) fine-tuning any of the representation learning frameworks with microcensus, besides DeepCluster, improved the performance of the framework; (3) although all of the representation learning frameworks (best $${\textsc {MeAE}}=3.91$$) outperformed the null model ($${\textsc {MeAE}}=7.57$$), the baseline models trained with building footprint area (best $${\textsc {MeAE}}=3.75$$) as a feature still outperformed them in $$R^2$$ and $${\textsc {MeAE}}$$; and (4) Barlow Twins overall had lower error metrics and the second largest $$R^2$$ among the fine-tuned models, so we consider this model for further analysis. We did not evaluate the performance of Tile2Vec since the available pre-trained model required an NIR band in the input data, which Vivid 2.0 lacks.
### Regional population estimation
We use representations learned with a ResNet50 architecture pre-trained on ImageNet using Barlow Twins and fine-tuned using curated and zero grid tiles (with an 80–20 random train/validation split of all tiles, no cross-validation) to extract representations for our survey tiles. These representations are used to train a Random Forest model (section “Models and training”), which is used to produce a population map for Boane district. The map is shown in Fig. 3a along with three existing population maps from WorldPop, HRSL and GRID3. We observe that, with respect to our ‘ground truth’ microcensus, (1) GRID3 provides a more accurate population map of Boane than HRSL and WorldPop, but usually underestimates population, (2) WorldPop lacks the finer details of the other population maps, and underestimates population, (3) although the settlement map provided by HRSL matches that of SCIPE and GRID3 well, its similarity with SCIPE is less than that of GRID3, and (4) SCIPE and GRID3 provide visually similar settlement maps, and their population estimates are also more similar in scale compared to HRSL and WorldPop.
### Census
We additionally compare the aggregated population estimate in Boane with the 2019 projection of the 2017 census, and we observe that SCIPE overestimates population by 29%. Although the projected census is not ground truth, the discrepancy in estimated population is potentially due to SCIPE not modelling zero population explicitly, i.e., zero-population grid tiles can be assigned a small population. This issue could be addressed by creating a binary mask that sets small populations to hard zero values, or by training SCIPE with more negative samples, i.e., grid tiles with zero population collected in a more principled manner, to better model these zero values. We leave this as future work.
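The binary-mask fix suggested above could look like the following sketch; the threshold value is illustrative, not taken from the paper:

```python
import numpy as np

def mask_small_populations(pred, threshold=1.0):
    """Set tile predictions below a hard threshold to exactly zero.
    The threshold here is a hypothetical value for illustration."""
    pred = np.asarray(pred, float)
    return np.where(pred < threshold, 0.0, pred)

masked = mask_small_populations([0.3, 0.9, 5.2, 12.0])
```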
Since GRID3 provides a more accurate population map than HRSL and WorldPop, we compare it against SCIPE in more detail. Figure 4 shows the difference in population maps produced by these two approaches. We observe that, (1) the estimated population of these approaches matched well quantitatively (Spearman’s $$\rho$$ 0.70, Pearson’s $$\rho$$ 0.79), (2) there are regions where SCIPE underestimated population compared to GRID3, and these are areas where microcensus was not available, and (3) there are regions where GRID3 underestimated population, and they usually coincided with regions where microcensus was available and SCIPE could potentially provide better estimates. Therefore, there is a high level of agreement between the two products and they provide similar estimates, and discrepancies appear in regions that lack microcensus for training. A more detailed comparison of these two population maps will be valuable, and may lead to both improved population estimation through ensemble learning and better microcensus data collection through resolving model disagreements. However, this is beyond the scope of this work.
### Uncertainty
To further assess the quality of the estimated population map, we quantify the uncertainty of the predictions and qualitatively assess their ‘explanations’. Uncertainty can be assessed in several ways, either at the level of the representation learning or at the level of the Random Forest population model. For the former, we can apply Monte Carlo dropout33 by placing dropout layers ($$p=0.1$$) after each stage of the ResNet model and predicting multiple population values for each grid tile. For the latter, the uncertainty can be quantified from the outputs of the individual decision trees in the Random Forest model, without perturbing the representation. Figure 5c shows Random Forest model predictions on fine-tuned Barlow Twins features and their associated uncertainties. The estimated uncertainty matches the intuition that higher estimated population carries higher uncertainty. We expect this since there are fewer samples with large population, and we expect more variability in population for similar satellite images in populated areas.
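For the Random Forest route, the per-tile uncertainty can be read off the spread of the individual tree predictions, as in this sketch (rows are trees, columns are grid tiles):

```python
import numpy as np

def tree_ensemble_uncertainty(tree_preds):
    """Mean prediction and per-tile standard deviation across the
    individual decision trees of a Random Forest ensemble."""
    tree_preds = np.asarray(tree_preds, float)
    return tree_preds.mean(axis=0), tree_preds.std(axis=0)

mean, std = tree_ensemble_uncertainty([[10.0, 100.0],
                                       [12.0, 140.0],
                                       [11.0, 120.0]])
```

The second tile, with the larger predicted population, also shows the larger spread, matching the pattern described above.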
### Explanation
To assess the outcome of the model, we use regression activation maps (RAMs)34, which show the discriminative regions of the input image that are informative of the model’s output. It is widely reported that building footprint area is an important indicator of population, and we therefore expect SCIPE to focus on buildings when predicting population. We observe that a fine-tuned Barlow Twins model produces RAM plots with a clear focus on built-up area, which agrees with our expectation (see Fig. 5a). Corroborating this, population estimates using SCIPE and those using building footprints (as presented in Table 3) show high correlation (Spearman’s $$\rho$$ 0.68, Pearson’s $$\rho$$ 0.74 over the Boane region) (see Fig. 5d).
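With a linear regression head on top of global average pooling, a RAM reduces to the head's weights contracted against the final convolutional feature maps. A sketch, with assumed shapes:

```python
import numpy as np

def regression_activation_map(feature_maps, head_weights):
    """RAM: weighted sum over channels of the last conv feature maps,
    highlighting spatial regions that drive the population estimate.
    feature_maps: (C, H, W); head_weights: (C,)."""
    feature_maps = np.asarray(feature_maps, float)
    head_weights = np.asarray(head_weights, float)
    return np.tensordot(head_weights, feature_maps, axes=(0, 0))

ram = regression_activation_map(np.ones((8, 4, 4)),
                                np.arange(8, dtype=float))
```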
### Embedding
Finally, we visualize the representations from the fine-tuned Barlow Twins model to assess whether they meaningfully separate in terms of population, and we observe that this is indeed the case (see Fig. 5b).
## Discussion
We find representation learning to be an effective tool for estimating population at a medium resolution from limited local microcensus. Although this approach did not outperform estimation based on building footprint area, it is fast, does not require human supervision, and relies only on very-high-resolution satellite images, making it sustainable and transferable in the sense that users can extrapolate their own local microcensus with relative ease, and also quantify uncertainty and capture explanations.
There is likely a hard limit to the predictive power of satellite imagery alone, owing to the difficulty of distinguishing inhabited and uninhabited areas in some contexts. For example, Robinson et al.1 gave the example of Walt Disney World, which is built to look like a settled area but has zero population. To address this issue, an interesting extension of SCIPE would be to use multiple data sources, such as night-time lights, land-cover data, altitude and slope information, and the location of services, of possibly different resolutions, in the model alongside very-high-resolution imagery, without changing its core focus, i.e., using pre-trained networks and fine-tuning them with a limited amount of microcensus. Additionally, cellphone data could be used to assess where people live as opposed to places people merely visit. This can potentially improve the prediction of population in areas that are uninhabited.
We have also used a Random Forest model, which effectively treats grid tiles as independent and identically distributed samples, which they are not. Considering the spatial arrangement of grid tiles could improve estimation further by using broader contextual information around each tile, for example to establish the socioeconomic status or land use of the surrounding areas. Given that a microcensus is conducted over a small region, it is likely that any census-independent approach will make errors in regions that are far away from the sampling area and potentially have different semantic characteristics, such as different building architecture or different land types. This issue, however, can potentially be mitigated by conducting microcensus in areas where the model makes errors, and using this information to iteratively update the model.
Ideally, we would like to use a self-supervised learning framework directly on satellite images to learn appropriate representations, rather than relying on pre-trained networks and fine-tuning. This would, however, require a vast quantity of training data, and has not been the focus of this work. In this work we have focused on sustainability, both in terms of human annotation and computational resources, which prohibits training from scratch, and have shown that existing tools can be used to produce reliable population estimates. The proposed framework should also be validated externally on a larger scale. We explored the population of a single district in Mozambique, while existing population maps are available over the whole of Africa. Better assessing the utility of SCIPE would require further large-scale validation, both in different regions of Mozambique (our immediate focus) and in other countries (our long-term goal), to make population maps more frequent, accessible, reliable and reproducible.
SCIPE avoids several typical bottlenecks associated with census-independent population estimation. While some methods require tedious manual annotation of built-up area or rely on potentially incomplete public features, SCIPE extracts features automatically using only satellite images. SCIPE is extremely fast, requires negligible GPU time, and provides meaningful population estimates. Microcensus data may not be available in all countries or regions, and can be expensive to gather, but this cost is far lower than that of conducting a census on a regular basis. Very-high-resolution satellite imagery can also be expensive, but has become more accessible in recent years when used for humanitarian purposes10. Given that many development agencies benefit from subsidised access to Maxar very-high-resolution imagery, these population maps could be produced relatively quickly for specific regions of focus, for example, when vaccination programmes are being planned. This approach, therefore, would contribute towards the UN's stated need for a data revolution35 by allowing regularly updated estimates of population between census enumeration periods, supporting a range of humanitarian activities as well as general governmental and NGO planning and allocation of resources.
## Data availability
The satellite data that support the findings of this study are available from Maxar, but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of Maxar. The microcensus data was used to generate gridded population counts at 100 m grid tiles. This data is demographic in nature but is spatially aggregated. The gridded population count at a set of pre-defined grid tiles that support the findings of this study are available upon request from UNICEF but restrictions apply to the availability of these data.
## References
1. Robinson, C., Hohman, F. & Dilkina, B. A deep learning approach for population estimation from satellite imagery. In Proceedings of ACM SIGSPATIAL Workshop on Geospatial Humanities (2017).
2. Shearmur, R. Editorial—A world without data? The unintended consequences of fashion in geography. Urban Geogr. https://doi.org/10.2747/0272-3638.31.8.1009 (2010).
3. United Nations, Department of Economic and Social Affairs, Population Division. World population prospects: Highlights, 2019 revision. OCLC: 1110010089 (2019).
4. Ezeh, A., Kissling, F. & Singer, P. Why sub-Saharan Africa might exceed its projected population size by 2100. Lancet 396, 1131–1133. https://doi.org/10.1016/S0140-6736(20)31522-1 (2020).
5. Linard, C., Gilbert, M., Snow, R. W., Noor, A. M. & Tatem, A. J. Population distribution, settlement patterns and accessibility across Africa in 2010. PLoS One. https://doi.org/10.1371/journal.pone.0031743 (2012).
6. Facebook Connectivity Lab & Center for International Earth Science Information Network (CIESIN), Columbia University. High Resolution Settlement Layer (HRSL) (2016).
7. Tiecke, T. G. et al. Mapping the world population one building at a time. https://doi.org/10.1596/33700. arXiv:1712.05839 (2017).
8. Bondarenko, M., Jones, P., Leasure, D., Lazar, A. & Tatem, A. Gridded population estimates disaggregated from Mozambique’s fourth general population and housing census (2017 census), version 1.1. https://doi.org/10.5258/SOTON/WP00672 (2020).
9. Wardrop, N. et al. Spatially disaggregated population estimates in the absence of national population and housing census data. PNAS. https://doi.org/10.1073/pnas.1715305115 (2018).
10. Engstrom, R., Newhouse, D. & Soundararajan, V. Estimating small-area population density in Sri Lanka using surveys and geo-spatial data. PLoS One 15, e0237063. https://doi.org/10.1371/journal.pone.0237063 (2020).
11. Leasure, D. R., Jochem, W. C., Weber, E. M., Seaman, V. & Tatem, A. J. National population mapping from sparse survey data: a hierarchical Bayesian modeling framework to account for uncertainty. PNAS. https://doi.org/10.1073/pnas.1913050117 (2020).
12. Razavian, A. S., Azizpour, H., Sullivan, J. & Carlsson, S. CNN features off-the-shelf: An astounding baseline for recognition. In Proceedings of CVPR Workshops, 512–519. https://doi.org/10.1109/CVPRW.2014.131 (2014).
13. Bengio, Y., Courville, A. & Vincent, P. Representation learning: A review and new perspectives. https://doi.org/10.1109/TPAMI.2013.50 (2013).
14. Linardatos, P., Papastefanopoulos, V. & Kotsiantis, S. Explainable AI: A review of machine learning interpretability methods. Entropy. https://doi.org/10.3390/e23010018 (2020).
15. Weber, E. M. et al. Census-independent population mapping in northern Nigeria. Remote Sens. Environ. https://doi.org/10.1016/j.rse.2017.09.024 (2018).
16. Hillson, R. et al. Estimating the size of urban populations using Landsat images: a case study of Bo, Sierra Leone, West Africa. Int. J. Health Geogr. https://doi.org/10.1186/s12942-019-0180-1 (2019).
17. MacFeely, S. & Nastav, B. You say you want a [data] revolution: A proposal to use unofficial statistics for the SDG Global Indicator Framework. Stat. J. IAOS 35, 309–327. https://doi.org/10.3233/SJI-180486 (2019).
18. Ye, Y., Wamukoya, M., Ezeh, A., Emina, J. B. O. & Sankoh, O. Health and demographic surveillance systems: a step towards full civil registration and vital statistics system in sub-Sahara Africa?. BMC Public Health 12, 741. https://doi.org/10.1186/1471-2458-12-741 (2012).
19. Hargreaves, P. K. & Watmough, G. R. Satellite Earth observation to support sustainable rural development. Int. J. Appl. Earth Observ. Geoinf. 103, 102466. https://doi.org/10.1016/j.jag.2021.102466 (2021).
20. Watmough, G. R. et al. Socioecologically informed use of remote sensing data to predict rural household poverty. Proc. Natl. Acad. Sci. 116, 1213–1218. https://doi.org/10.1073/pnas.1812969116 (2019).
21. Steele, J. E. et al. Mapping poverty using mobile phone and satellite data. J. R. Soc. Interface 14, 20160690. https://doi.org/10.1098/rsif.2016.0690 (2017).
22. Watmough, G. R., Atkinson, P. M. & Hutton, C. W. Exploring the links between census and environment using remotely sensed satellite sensor imagery. J. Land Use Sci. 8, 284–303. https://doi.org/10.1080/1747423X.2012.667447 (2013).
23. Doupe, P., Bruzelius, E., Faghmous, J. & Ruchman, S. G. Equitable development through deep learning: The case of sub-national population density estimation. In Proceedings of ACM DEV, 1–10. https://doi.org/10.1145/3001913.3001921 (2016).
24. Hu, W. et al. Mapping missing population in rural India: A deep learning approach with satellite imagery. In Proceedings of AAAI. https://doi.org/10.1145/3306618.3314263 (2019).
25. Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of ICLR (2015).
26. Caron, M., Bojanowski, P., Joulin, A. & Douze, M. Deep clustering for unsupervised learning of visual features. In Proceedings of the ECCV. https://doi.org/10.1007/978-3-030-01264-9_9 (2018).
27. Jean, N. et al. Tile2vec: Unsupervised representation learning for spatially distributed data. In Proceedings of the AAAI. https://doi.org/10.1609/aaai.v33i01.33013967 (2019).
28. Jing, L. & Tian, Y. Self-supervised visual feature learning with deep neural networks: A survey. IEEE Trans. Pattern Anal. Mach. Intell. https://doi.org/10.1109/TPAMI.2020.2992393 (2020).
29. Zbontar, J., Jing, L., Misra, I., LeCun, Y. & Deny, S. Barlow twins: Self-supervised learning via redundancy reduction. arXiv:2103.03230 (2021).
30. Caron, M. et al. Unsupervised learning of visual features by contrasting cluster assignments. In Proceedings of NeurIPS (2020).
31. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proceedings of CVPR. https://doi.org/10.1109/CVPR.2016.90 (2016).
32. Howard, J. & Gugger, S. Deep Learning for Coders with fastai and PyTorch (O’Reilly Media, 2020).
33. Gal, Y. & Ghahramani, Z. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of ICML, 1050–1059. https://doi.org/10.5555/3045390.3045502 (PMLR, 2016).
34. Wang, Z. & Yang, J. Diabetic retinopathy detection via deep convolutional networks for discriminative localization and visual explanation. In Proceedings of AAAI Workshops. https://doi.org/10.1109/ICVRV.2018.00016 (2018).
35. Independent expert advisory group on a data revolution for sustainable development. A world that counts: Mobilising the data revolution for sustainable development. https://doi.org/10.7551/mitpress/12439.003.0018 (2014).
## Acknowledgements
The project was funded by the Data for Children Collaborative with UNICEF.
## Author information
### Contributions
S.S., G.W. and M.S.D. conceived the study. I.N. and S.S. designed the study. I.N., S.S. and G.W. acquired the remote sensing data. M.S.D. initiated the microcensus data collection. I.N. implemented the methods. I.N. and S.S. analysed the data. I.N., S.S., G.W. and M.S.D. interpreted the results. I.N. and S.S. wrote the manuscript. G.W. revised the manuscript. I.N., S.S., G.W. and M.S.D. approved the manuscript.
### Corresponding author
Correspondence to Sohan Seth.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Neal, I., Seth, S., Watmough, G. et al. Census-independent population estimation using representation learning. Sci Rep 12, 5185 (2022). https://doi.org/10.1038/s41598-022-08935-1
• DOI: https://doi.org/10.1038/s41598-022-08935-1
https://mathematica.stackexchange.com/questions/87998/mathematica-save-directory/88004

# Mathematica Save Directory [closed]
Mathematica saves everything to Documents. I want it to save it into a folder Documents/Mathematica.
• I think $UserDocumentsDirectory holds the location where it is storing your documents. You can, with great care and caution, change your init.m file for Mathematica to give a new value for that, which will then be used each time you start Mathematica. Be very careful when changing init.m; you don't want to make a mistake and somehow corrupt Mathematica. – Bill Jul 11 '15 at 19:01
• What is the deal with the phantom $\LaTeX$!? It's breaking the rendering of the question! – MarcoB Jul 11 '15 at 19:08
• I don't like having to bandy about this diamond, but: unless you give a reasonable explanation for the useless $\LaTeX$, *kindly stop*. – J. M. is in limbo Jul 11 '15 at 19:28
• Stack Exchange does not allow short questions. So I artificially make it longer. Why doesn't latex work for you? – grdgfgr Jul 11 '15 at 19:33
• …and you just didn't consider that you could try writing a better version of your question. – J. M. is in limbo Jul 11 '15 at 19:43
Use SetDirectory:
SetDirectory["yourdir"]
• @grdgfgr, that's what you use init.m for. – J. M. is in limbo Jul 11 '15 at 19:41
https://ictqt.ug.edu.pl/pages/publications/

Peer-reviewed publications
2021
• T. Miller, M. Eckstein, P. Horodecki, and R. Horodecki, “Generally covariant N -particle dynamics,” Journal of geometry and physics, vol. 160, p. 103990, 2021. doi:10.1016/j.geomphys.2020.103990
@article{miller_generally_2021,
title = {Generally covariant {N} -particle dynamics},
volume = {160},
issn = {03930440},
doi = {10.1016/j.geomphys.2020.103990},
language = {en},
urldate = {2021-05-10},
journal = {Journal of Geometry and Physics},
author = {Miller, Tomasz and Eckstein, Michał and Horodecki, Paweł and Horodecki, Ryszard},
month = feb,
year = {2021},
pages = {103990},
}
• B. Ahmadi, S. Salimi, and A. S. Khorashad, “Irreversible work and Maxwell demon in terms of quantum thermodynamic force,” Scientific reports, vol. 11, iss. 1, p. 2301, 2021. doi:10.1038/s41598-021-81737-z
The second law of classical equilibrium thermodynamics, based on the positivity of entropy production, asserts that any process occurs only in a direction that some information may be lost (flow out of the system) due to the irreversibility inside the system. However, any thermodynamic system can exhibit fluctuations in which negative entropy production may be observed. In particular, in stochastic quantum processes due to quantum correlations and also memory effects we may see the reversal energy flow (heat flow from the cold system to the hot system) and the backflow of information into the system that leads to the negativity of the entropy production which is an apparent violation of the Second Law. In order to resolve this apparent violation, we will try to properly extend the Second Law to quantum processes by incorporating information explicitly into the Second Law. We will also provide a thermodynamic operational meaning for the flow and backflow of information. Finally, it is shown that negative and positive entropy production can be described by a quantum thermodynamic force.
@article{ahmadi_irreversible_2021,
title = {Irreversible work and {Maxwell} demon in terms of quantum thermodynamic force},
volume = {11},
issn = {2045-2322},
url = {http://www.nature.com/articles/s41598-021-81737-z},
doi = {10.1038/s41598-021-81737-z},
abstract = {The second law of classical equilibrium thermodynamics, based on the positivity of entropy production, asserts that any process occurs only in a direction that some information may be lost (flow out of the system) due to the irreversibility inside the system. However, any thermodynamic system can exhibit fluctuations in which negative entropy production may be observed. In particular, in stochastic quantum processes due to quantum correlations and also memory effects we may see the reversal energy flow (heat flow from the cold system to the hot system) and the backflow of information into the system that leads to the negativity of the entropy production which is an apparent violation of the Second Law. In order to resolve this apparent violation, we will try to properly extend the Second Law to quantum processes by incorporating information explicitly into the Second Law. We will also provide a thermodynamic operational meaning for the flow and backflow of information. Finally, it is shown that negative and positive entropy production can be described by a quantum thermodynamic force.},
language = {en},
number = {1},
urldate = {2021-05-10},
journal = {Scientific Reports},
author = {Ahmadi, B. and Salimi, S. and Khorashad, A. S.},
month = dec,
year = {2021},
pages = {2301},
}
• Q. Guo, Y. Zhao, M. Grassl, X. Nie, G. Xiang, T. Xin, Z. Yin, and B. Zeng, “Testing a quantum error-correcting code on various platforms,” Science bulletin, vol. 66, iss. 1, p. 29–35, 2021. doi:10.1016/j.scib.2020.07.033
@article{guo_testing_2021,
title = {Testing a quantum error-correcting code on various platforms},
volume = {66},
issn = {20959273},
doi = {10.1016/j.scib.2020.07.033},
language = {en},
number = {1},
urldate = {2021-05-10},
journal = {Science Bulletin},
author = {Guo, Qihao and Zhao, Yuan-Yuan and Grassl, Markus and Nie, Xinfang and Xiang, Guo-Yong and Xin, Tao and Yin, Zhang-Qi and Zeng, Bei},
month = jan,
year = {2021},
pages = {29--35},
}
• P. Mironowicz, G. Cañas, J. Cariñe, E. S. Gómez, J. F. Barra, A. Cabello, G. B. Xavier, G. Lima, and M. Pawłowski, “Quantum randomness protected against detection loophole attacks,” Quantum information processing, vol. 20, iss. 1, p. 39, 2021. doi:10.1007/s11128-020-02948-3
@article{mironowicz_quantum_2021,
title = {Quantum randomness protected against detection loophole attacks},
volume = {20},
issn = {1570-0755, 1573-1332},
doi = {10.1007/s11128-020-02948-3},
language = {en},
number = {1},
urldate = {2021-05-10},
journal = {Quantum Information Processing},
author = {Mironowicz, Piotr and Cañas, Gustavo and Cariñe, Jaime and Gómez, Esteban S. and Barra, Johanna F. and Cabello, Adán and Xavier, Guilherme B. and Lima, Gustavo and Pawłowski, Marcin},
month = jan,
year = {2021},
pages = {39},
}
• M. Christandl, R. Ferrara, and K. Horodecki, “Upper Bounds on Device-Independent Quantum Key Distribution,” Physical review letters, vol. 126, iss. 16, p. 160501, 2021. doi:10.1103/PhysRevLett.126.160501
@article{christandl_upper_2021,
title = {Upper {Bounds} on {Device}-{Independent} {Quantum} {Key} {Distribution}},
volume = {126},
issn = {0031-9007, 1079-7114},
doi = {10.1103/PhysRevLett.126.160501},
language = {en},
number = {16},
urldate = {2021-05-10},
journal = {Physical Review Letters},
author = {Christandl, Matthias and Ferrara, Roberto and Horodecki, Karol},
month = apr,
year = {2021},
pages = {160501},
}
• R. Alicki, D. Gelbwaser-Klimovsky, A. Jenkins, and E. von Hauff, “Dynamical theory for the battery’s electromotive force,” Physical chemistry chemical physics, vol. 23, iss. 15, p. 9428–9439, 2021. doi:10.1039/D1CP00196E
We propose a dynamical theory of how the chemical energy stored in a battery generates the electromotive force (emf). In this picture, the battery’s half-cell acts as an engine, cyclically extracting work from its underlying chemical disequilibrium. We show that the double layer at the electrode–electrolyte interface can exhibit a rapid self-oscillation that pumps an electric current, thus accounting for the persistent conversion of chemical energy into electrical work equal to the emf times the separated charge. We suggest a connection between this mechanism and the slow self-oscillations observed in various electrochemical cells, including batteries, as well as the enhancement of the current observed when ultrasound is applied to the half-cell. Finally, we propose more direct experimental tests of the predictions of this dynamical theory.
@article{alicki_dynamical_2021,
title = {Dynamical theory for the battery's electromotive force},
volume = {23},
issn = {1463-9076, 1463-9084},
doi = {10.1039/D1CP00196E},
abstract = {We propose a dynamical theory of how the chemical energy stored in a battery generates the electromotive force (emf). In this picture, the battery's half-cell acts as an engine, cyclically extracting work from its underlying chemical disequilibrium. We show that the double layer at the electrode–electrolyte interface can exhibit a rapid self-oscillation that pumps an electric current, thus accounting for the persistent conversion of chemical energy into electrical work equal to the emf times the separated charge. We suggest a connection between this mechanism and the slow self-oscillations observed in various electrochemical cells, including batteries, as well as the enhancement of the current observed when ultrasound is applied to the half-cell. Finally, we propose more direct experimental tests of the predictions of this dynamical theory.},
language = {en},
number = {15},
urldate = {2021-05-10},
journal = {Physical Chemistry Chemical Physics},
author = {Alicki, Robert and Gelbwaser-Klimovsky, David and Jenkins, Alejandro and von Hauff, Elizabeth},
year = {2021},
pages = {9428--9439},
}
• M. Markiewicz and J. Przewocki, “On construction of finite averaging sets for SL(2, C) via its Cartan decomposition,” Journal of physics a: mathematical and theoretical, 2021. doi:10.1088/1751-8121/abfa44
@article{markiewicz_construction_2021,
title = {On construction of finite averaging sets for \textit{{SL}} (2, {C}) via its {Cartan} decomposition},
issn = {1751-8113, 1751-8121},
url = {https://iopscience.iop.org/article/10.1088/1751-8121/abfa44},
doi = {10.1088/1751-8121/abfa44},
urldate = {2021-05-10},
journal = {Journal of Physics A: Mathematical and Theoretical},
author = {Markiewicz, Marcin and Przewocki, Janusz},
month = apr,
year = {2021},
}
• M. Żukowski and M. Markiewicz, “Physics and Metaphysics of Wigner’s Friends: Even Performed Premeasurements Have No Results,” Physical review letters, vol. 126, iss. 13, p. 130402, 2021. doi:10.1103/PhysRevLett.126.130402
@article{zukowski_physics_2021,
title = {Physics and {Metaphysics} of {Wigner}’s {Friends}: {Even} {Performed} {Premeasurements} {Have} {No} {Results}},
volume = {126},
issn = {0031-9007, 1079-7114},
shorttitle = {Physics and {Metaphysics} of {Wigner}’s {Friends}},
doi = {10.1103/PhysRevLett.126.130402},
language = {en},
number = {13},
urldate = {2021-05-10},
journal = {Physical Review Letters},
author = {Żukowski, Marek and Markiewicz, Marcin},
month = apr,
year = {2021},
pages = {130402},
}
• J. H. Selby, C. M. Scandolo, and B. Coecke, “Reconstructing quantum theory from diagrammatic postulates,” Quantum, vol. 5, p. 445, 2021. doi:10.22331/q-2021-04-28-445
A reconstruction of quantum theory refers to both a mathematical and a conceptual paradigm that allows one to derive the usual formulation of quantum theory from a set of primitive assumptions. The motivation for doing so is a discomfort with the usual formulation of quantum theory, a discomfort that started with its originator John von Neumann. We present a reconstruction of finite-dimensional quantum theory where all of the postulates are stated in diagrammatic terms, making them intuitive. Equivalently, they are stated in category-theoretic terms, making them mathematically appealing. Again equivalently, they are stated in process-theoretic terms, establishing that the conceptual backbone of quantum theory concerns the manner in which systems and processes compose. Aside from the diagrammatic form, the key novel aspect of this reconstruction is the introduction of a new postulate, symmetric purification. Unlike the ordinary purification postulate, symmetric purification applies equally well to classical theory as well as quantum theory. Therefore we first reconstruct the full process theoretic description of quantum theory, consisting of composite classical-quantum systems and their interactions, before restricting ourselves to just the ‘fully quantum’ systems as the final step. We propose two novel alternative manners of doing so, ‘no-leaking’ (roughly that information gain causes disturbance) and ‘purity of cups’ (roughly the existence of entangled states). Interestingly, these turn out to be equivalent in any process theory with cups & caps. Additionally, we show how the standard purification postulate can be seen as an immediate consequence of the symmetric purification postulate and purity of cups. Other tangential results concern the specific frameworks of generalised probabilistic theories (GPTs) and process theories (a.k.a. CQM). Firstly, we provide a diagrammatic presentation of GPTs, which, henceforth, can be subsumed under process theories. 
Secondly, we argue that the ‘sharp dagger’ is indeed the right choice of a dagger structure as this sharpness is vital to the reconstruction.
@article{selby_reconstructing_2021,
title = {Reconstructing quantum theory from diagrammatic postulates},
volume = {5},
issn = {2521-327X},
url = {https://quantum-journal.org/papers/q-2021-04-28-445/},
doi = {10.22331/q-2021-04-28-445},
abstract = {A reconstruction of quantum theory refers to both a mathematical and a conceptual paradigm that allows one to derive the usual formulation of quantum theory from a set of primitive assumptions. The motivation for doing so is a discomfort with the usual formulation of quantum theory, a discomfort that started with its originator John von Neumann.
We present a reconstruction of finite-dimensional quantum theory where all of the postulates are stated in diagrammatic terms, making them intuitive. Equivalently, they are stated in category-theoretic terms, making them mathematically appealing. Again equivalently, they are stated in process-theoretic terms, establishing that the conceptual backbone of quantum theory concerns the manner in which systems and processes compose.
Aside from the diagrammatic form, the key novel aspect of this reconstruction is the introduction of a new postulate, symmetric purification. Unlike the ordinary purification postulate, symmetric purification applies equally well to classical theory as well as quantum theory. Therefore we first reconstruct the full process theoretic description of quantum theory, consisting of composite classical-quantum systems and their interactions, before restricting ourselves to just the ‘fully quantum’ systems as the final step.
We propose two novel alternative manners of doing so, ‘no-leaking’ (roughly that information gain causes disturbance) and ‘purity of cups’ (roughly the existence of entangled states). Interestingly, these turn out to be equivalent in any process theory with cups \& caps. Additionally, we show how the standard purification postulate can be seen as an immediate consequence of the symmetric purification postulate and purity of cups.
Other tangential results concern the specific frameworks of generalised probabilistic theories (GPTs) and process theories (a.k.a. CQM). Firstly, we provide a diagrammatic presentation of GPTs, which, henceforth, can be subsumed under process theories. Secondly, we argue that the ‘sharp dagger’ is indeed the right choice of a dagger structure as this sharpness is vital to the reconstruction.},
language = {en},
urldate = {2021-05-10},
journal = {Quantum},
author = {Selby, John H. and Scandolo, Carlo Maria and Coecke, Bob},
month = apr,
year = {2021},
pages = {445},
}
• H. S. Karthik, H. Akshata Shenoy, and A. R. U. Devi, “Leggett-Garg inequalities and temporal correlations for a qubit under PT-symmetric dynamics,” Physical review a, vol. 103, iss. 3, p. 032420, 2021. doi:10.1103/PhysRevA.103.032420
@article{karthik_leggett-garg_2021,
title = {Leggett-{Garg} inequalities and temporal correlations for a qubit under {PT} -symmetric dynamics},
volume = {103},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.103.032420},
language = {en},
number = {3},
urldate = {2021-05-10},
journal = {Physical Review A},
author = {Karthik, H. S. and Akshata Shenoy, H. and Devi, A. R. Usha},
month = mar,
year = {2021},
pages = {032420},
}
• N. Miklin and M. Oszmaniec, “A universal scheme for robust self-testing in the prepare-and-measure scenario,” Quantum, vol. 5, p. 424, 2021. doi:10.22331/q-2021-04-06-424
We consider the problem of certification of arbitrary ensembles of pure states and projective measurements solely from the experimental statistics in the prepare-and-measure scenario assuming the upper bound on the dimension of the Hilbert space. To this aim, we propose a universal and intuitive scheme based on establishing perfect correlations between target states and suitably-chosen projective measurements. The method works in all finite dimensions and allows for robust certification of the overlaps between arbitrary preparation states and between the corresponding measurement operators. Finally, we prove that for qubits, our technique can be used to robustly self-test arbitrary configurations of pure quantum states and projective measurements. These results pave the way towards the practical application of the prepare-and-measure paradigm to certification of quantum devices.
@article{miklin_universal_2021,
title = {A universal scheme for robust self-testing in the prepare-and-measure scenario},
volume = {5},
issn = {2521-327X},
url = {https://quantum-journal.org/papers/q-2021-04-06-424/},
doi = {10.22331/q-2021-04-06-424},
abstract = {We consider the problem of certification of arbitrary ensembles of pure states and projective measurements solely from the experimental statistics in the prepare-and-measure scenario assuming the upper bound on the dimension of the Hilbert space. To this aim, we propose a universal and intuitive scheme based on establishing perfect correlations between target states and suitably-chosen projective measurements. The method works in all finite dimensions and allows for robust certification of the overlaps between arbitrary preparation states and between the corresponding measurement operators. Finally, we prove that for qubits, our technique can be used to robustly self-test arbitrary configurations of pure quantum states and projective measurements. These results pave the way towards the practical application of the prepare-and-measure paradigm to certification of quantum devices.},
language = {en},
urldate = {2021-05-10},
journal = {Quantum},
author = {Miklin, Nikolai and Oszmaniec, Michał},
month = apr,
year = {2021},
pages = {424},
}
• D. Schmid, J. H. Selby, E. Wolfe, R. Kunjwal, and R. W. Spekkens, “Characterization of Noncontextuality in the Framework of Generalized Probabilistic Theories,” Prx quantum, vol. 2, iss. 1, p. 010331, 2021. doi:10.1103/PRXQuantum.2.010331
@article{schmid_characterization_2021,
title = {Characterization of {Noncontextuality} in the {Framework} of {Generalized} {Probabilistic} {Theories}},
volume = {2},
issn = {2691-3399},
doi = {10.1103/PRXQuantum.2.010331},
language = {en},
number = {1},
urldate = {2021-05-10},
journal = {PRX Quantum},
author = {Schmid, David and Selby, John H. and Wolfe, Elie and Kunjwal, Ravi and Spekkens, Robert W.},
month = feb,
year = {2021},
pages = {010331},
}
• P. Lipka-Bartosik, P. Mazurek, and M. Horodecki, “Second law of thermodynamics for batteries with vacuum state,” Quantum, vol. 5, p. 408, 2021. doi:10.22331/q-2021-03-10-408
In stochastic thermodynamics work is a random variable whose average is bounded by the change in the free energy of the system. In most treatments, however, the work reservoir that absorbs this change is either tacitly assumed or modelled using unphysical systems with unbounded Hamiltonians (i.e. the ideal weight). In this work we describe the consequences of introducing the ground state of the battery and hence — of breaking its translational symmetry. The most striking consequence of this shift is the fact that the Jarzynski identity is replaced by a family of inequalities. Using these inequalities we obtain corrections to the second law of thermodynamics which vanish exponentially with the distance of the initial state of the battery to the bottom of its spectrum. Finally, we study an exemplary thermal operation which realizes the approximate Landauer erasure and demonstrate the consequences which arise when the ground state of the battery is explicitly introduced. In particular, we show that occupation of the vacuum state of any physical battery sets a lower bound on fluctuations of work, while batteries without vacuum state allow for fluctuation-free erasure.
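As background for the abstract above (standard textbook material, not taken from the paper): the Jarzynski identity referred to here is the usual equality relating the fluctuating work $W$ to the equilibrium free-energy change $\Delta F$ at inverse temperature $\beta$; the paper's point is that this equality is replaced by a family of inequalities once the battery's ground state is taken into account.

```latex
% Standard Jarzynski equality (background, for an ideal-weight work reservoir):
% the exponential average of the work W over process realizations equals the
% exponential of the free-energy change \Delta F at inverse temperature \beta.
\begin{equation}
  \left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F}
\end{equation}
```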
@article{lipka-bartosik_second_2021,
title = {Second law of thermodynamics for batteries with vacuum state},
volume = {5},
issn = {2521-327X},
url = {https://quantum-journal.org/papers/q-2021-03-10-408/},
doi = {10.22331/q-2021-03-10-408},
abstract = {In stochastic thermodynamics work is a random variable whose average is bounded by the change in the free energy of the system. In most treatments, however, the work reservoir that absorbs this change is either tacitly assumed or modelled using unphysical systems with unbounded Hamiltonians (i.e. the ideal weight). In this work we describe the consequences of introducing the ground state of the battery and hence — of breaking its translational symmetry. The most striking consequence of this shift is the fact that the Jarzynski identity is replaced by a family of inequalities. Using these inequalities we obtain corrections to the second law of thermodynamics which vanish exponentially with the distance of the initial state of the battery to the bottom of its spectrum. Finally, we study an exemplary thermal operation which realizes the approximate Landauer erasure and demonstrate the consequences which arise when the ground state of the battery is explicitly introduced. In particular, we show that occupation of the vacuum state of any physical battery sets a lower bound on fluctuations of work, while batteries without vacuum state allow for fluctuation-free erasure.},
language = {en},
urldate = {2021-05-10},
journal = {Quantum},
author = {Lipka-Bartosik, Patryk and Mazurek, Paweł and Horodecki, Michał},
month = mar,
year = {2021},
pages = {408},
}
• A. Z. Goldberg, P. de la Hoz, G. Björk, A. B. Klimov, M. Grassl, G. Leuchs, and L. L. Sánchez-Soto, “Quantum concepts in optical polarization,” Advances in optics and photonics, vol. 13, iss. 1, p. 1, 2021. doi:10.1364/AOP.404175
@article{goldberg_quantum_2021,
title = {Quantum concepts in optical polarization},
volume = {13},
issn = {1943-8206},
url = {https://www.osapublishing.org/abstract.cfm?URI=aop-13-1-1},
doi = {10.1364/AOP.404175},
language = {en},
number = {1},
urldate = {2021-05-10},
journal = {Advances in Optics and Photonics},
author = {Goldberg, Aaron Z. and de la Hoz, Pablo and Björk, Gunnar and Klimov, Andrei B. and Grassl, Markus and Leuchs, Gerd and Sánchez-Soto, Luis L.},
month = mar,
year = {2021},
pages = {1},
}
• R. Uola, T. Kraft, S. Designolle, N. Miklin, A. Tavakoli, J. Pellonpää, O. Gühne, and N. Brunner, “Quantum measurement incompatibility in subspaces,” Physical review a, vol. 103, iss. 2, p. 022203, 2021. doi:10.1103/PhysRevA.103.022203
@article{uola_quantum_2021,
title = {Quantum measurement incompatibility in subspaces},
volume = {103},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.103.022203},
language = {en},
number = {2},
urldate = {2021-05-10},
journal = {Physical Review A},
author = {Uola, Roope and Kraft, Tristan and Designolle, Sébastien and Miklin, Nikolai and Tavakoli, Armin and Pellonpää, Juha-Pekka and Gühne, Otfried and Brunner, Nicolas},
month = feb,
year = {2021},
pages = {022203},
}
• A. Tavakoli, M. Farkas, D. Rosset, J. Bancal, and J. Kaniewski, “Mutually unbiased bases and symmetric informationally complete measurements in Bell experiments,” Science advances, vol. 7, iss. 7, p. eabc3847, 2021. doi:10.1126/sciadv.abc3847
Mutually unbiased bases (MUBs) and symmetric informationally complete projectors (SICs) are crucial to many conceptual and practical aspects of quantum theory. Here, we develop their role in quantum nonlocality by (i) introducing families of Bell inequalities that are maximally violated by d-dimensional MUBs and SICs, respectively, (ii) proving device-independent certification of natural operational notions of MUBs and SICs, and (iii) using MUBs and SICs to develop optimal-rate and nearly optimal-rate protocols for device-independent quantum key distribution and device-independent quantum random number generation, respectively. Moreover, we also present the first example of an extremal point of the quantum set of correlations that admits physically inequivalent quantum realizations. Our results elaborately demonstrate the foundational and practical relevance of the two most important discrete Hilbert space structures to the field of quantum nonlocality.
@Article{tavakoli_mutually_2021,
author = {Tavakoli, Armin and Farkas, Máté and Rosset, Denis and Bancal, Jean-Daniel and Kaniewski, Jedrzej},
title = {Mutually unbiased bases and symmetric informationally complete measurements in {Bell} experiments},
year = {2021},
issn = {2375-2548},
month = feb,
number = {7},
pages = {eabc3847},
volume = {7},
abstract = {Mutually unbiased bases (MUBs) and symmetric informationally complete projectors (SICs) are crucial to many conceptual and practical aspects of quantum theory. Here, we develop their role in quantum nonlocality by (i) introducing families of Bell inequalities that are maximally violated by d-dimensional MUBs and SICs, respectively, (ii) proving device-independent certification of natural operational notions of MUBs and SICs, and (iii) using MUBs and SICs to develop optimal-rate and nearly optimal-rate protocols for device-independent quantum key distribution and device-independent quantum random number generation, respectively. Moreover, we also present the first example of an extremal point of the quantum set of correlations that admits physically inequivalent quantum realizations. Our results elaborately demonstrate the foundational and practical relevance of the two most important discrete Hilbert space structures to the field of quantum nonlocality.},
language = {en},
urldate = {2021-05-10},
}
• M. Farkas, N. Guerrero, J. Cariñe, G. Cañas, and G. Lima, “Self-Testing Mutually Unbiased Bases in Higher Dimensions with Space-Division Multiplexing Optical Fiber Technology,” Physical review applied, vol. 15, iss. 1, p. 014028, 2021. doi:10.1103/PhysRevApplied.15.014028
@article{farkas_self-testing_2021,
title = {Self-{Testing} {Mutually} {Unbiased} {Bases} in {Higher} {Dimensions} with {Space}-{Division} {Multiplexing} {Optical} {Fiber} {Technology}},
volume = {15},
issn = {2331-7019},
doi = {10.1103/PhysRevApplied.15.014028},
language = {en},
number = {1},
urldate = {2021-05-10},
journal = {Physical Review Applied},
author = {Farkas, Máté and Guerrero, Nayda and Cariñe, Jaime and Cañas, Gustavo and Lima, Gustavo},
month = jan,
year = {2021},
pages = {014028},
}
• K. Schlichtholz, B. Woloncewicz, and M. Żukowski, “Nonclassicality of bright Greenberger-Horne-Zeilinger–like radiation of an optical parametric source,” Physical review a, vol. 103, iss. 4, p. 042226, 2021. doi:10.1103/PhysRevA.103.042226
@article{schlichtholz_nonclassicality_2021,
title = {Nonclassicality of bright {Greenberger}-{Horne}-{Zeilinger}–like radiation of an optical parametric source},
volume = {103},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.103.042226},
language = {en},
number = {4},
urldate = {2021-07-28},
journal = {Physical Review A},
author = {Schlichtholz, Konrad and Woloncewicz, Bianka and Żukowski, Marek},
month = apr,
year = {2021},
pages = {042226},
}
• R. Salazar, T. Biswas, J. Czartowski, K. Życzkowski, and P. Horodecki, “Optimal allocation of quantum resources,” Quantum, vol. 5, p. 407, 2021. doi:10.22331/q-2021-03-10-407
The optimal allocation of resources is a crucial task for their efficient use in a wide range of practical applications in science and engineering. This paper investigates the optimal allocation of resources in multipartite quantum systems. In particular, we show the relevance of proportional fairness and optimal reliability criteria for the application of quantum resources. Moreover, we present optimal allocation solutions for an arbitrary number of qudits using measurement incompatibility as an exemplary resource theory. Besides, we study the criterion of optimal equitability and demonstrate its relevance to scenarios involving several resource theories such as nonlocality vs local contextuality. Finally, we highlight the potential impact of our results for quantum networks and other multi-party quantum information processing, in particular to the future Quantum Internet.
@article{salazar_optimal_2021,
title = {Optimal allocation of quantum resources},
volume = {5},
issn = {2521-327X},
url = {https://quantum-journal.org/papers/q-2021-03-10-407/},
doi = {10.22331/q-2021-03-10-407},
abstract = {The optimal allocation of resources is a crucial task for their efficient use in a wide range of practical applications in science and engineering. This paper investigates the optimal allocation of resources in multipartite quantum systems. In particular, we show the relevance of proportional fairness and optimal reliability criteria for the application of quantum resources. Moreover, we present optimal allocation solutions for an arbitrary number of qudits using measurement incompatibility as an exemplary resource theory. Besides, we study the criterion of optimal equitability and demonstrate its relevance to scenarios involving several resource theories such as nonlocality vs local contextuality. Finally, we highlight the potential impact of our results for quantum networks and other multi-party quantum information processing, in particular to the future Quantum Internet.},
language = {en},
urldate = {2021-07-28},
journal = {Quantum},
author = {Salazar, Roberto and Biswas, Tanmoy and Czartowski, Jakub and Życzkowski, Karol and Horodecki, Paweł},
month = mar,
year = {2021},
pages = {407},
}
• C. Datta, T. Biswas, D. Saha, and R. Augusiak, “Perfect discrimination of quantum measurements using entangled systems,” New journal of physics, vol. 23, iss. 4, p. 043021, 2021. doi:10.1088/1367-2630/abecaf
@article{datta_perfect_2021,
title = {Perfect discrimination of quantum measurements using entangled systems},
volume = {23},
issn = {1367-2630},
url = {https://iopscience.iop.org/article/10.1088/1367-2630/abecaf},
doi = {10.1088/1367-2630/abecaf},
number = {4},
urldate = {2021-07-28},
journal = {New Journal of Physics},
author = {Datta, Chandan and Biswas, Tanmoy and Saha, Debashis and Augusiak, Remigiusz},
month = apr,
year = {2021},
pages = {043021},
}
• R. Horodecki, “Quantum Information,” Acta physica polonica a, vol. 139, iss. 3, p. 197–218, 2021. doi:10.12693/APhysPolA.139.197
@article{horodecki_quantum_2021,
title = {Quantum {Information}},
volume = {139},
issn = {1898-794X, 0587-4246},
url = {http://przyrbwn.icm.edu.pl/APP/PDF/139/app139z3p01.pdf},
doi = {10.12693/APhysPolA.139.197},
number = {3},
urldate = {2021-07-28},
journal = {Acta Physica Polonica A},
author = {Horodecki, R.},
month = mar,
year = {2021},
pages = {197--218},
}
• T. Miller, M. Eckstein, P. Horodecki, and R. Horodecki, “Generally covariant N-particle dynamics,” Journal of geometry and physics, vol. 160, p. 103990, 2021. doi:10.1016/j.geomphys.2020.103990
@article{miller_generally_2021-1,
title = {Generally covariant {N}-particle dynamics},
volume = {160},
issn = {03930440},
doi = {10.1016/j.geomphys.2020.103990},
language = {en},
urldate = {2021-07-28},
journal = {Journal of Geometry and Physics},
author = {Miller, Tomasz and Eckstein, Michał and Horodecki, Paweł and Horodecki, Ryszard},
month = feb,
year = {2021},
pages = {103990},
}
• M. Wieśniak, “Symmetrized persistency of Bell correlations for Dicke states and GHZ-based mixtures,” Scientific reports, vol. 11, iss. 1, p. 14333, 2021. doi:10.1038/s41598-021-93786-5
Quantum correlations, in particular those which enable the violation of a Bell inequality, open a way to advantage in certain communication tasks. However, the main difficulty in harnessing quantumness is its fragility to, e.g., noise or loss of particles. We study the persistency of Bell correlations of GHZ-based mixtures and Dicke states. For the former, we consider the quantum communication complexity reduction (QCCR) scheme, and propose new Bell inequalities (BIs), which can be used in that scheme for higher persistency in the limit of a large number of particles N. In the case of Dicke states, we show that persistency can reach 0.482N, significantly more than reported in previous studies.
@article{wiesniak_symmetrized_2021,
title = {Symmetrized persistency of {Bell} correlations for {Dicke} states and {GHZ}-based mixtures},
volume = {11},
issn = {2045-2322},
url = {http://www.nature.com/articles/s41598-021-93786-5},
doi = {10.1038/s41598-021-93786-5},
abstract = {Quantum correlations, in particular those which enable the violation of a Bell inequality, open a way to advantage in certain communication tasks. However, the main difficulty in harnessing quantumness is its fragility to, e.g., noise or loss of particles. We study the persistency of Bell correlations of GHZ-based mixtures and Dicke states. For the former, we consider the quantum communication complexity reduction (QCCR) scheme, and propose new Bell inequalities (BIs), which can be used in that scheme for higher persistency in the limit of a large number of particles N. In the case of Dicke states, we show that persistency can reach 0.482N, significantly more than reported in previous studies.},
language = {en},
number = {1},
urldate = {2021-07-28},
journal = {Scientific Reports},
author = {Wieśniak, Marcin},
month = dec,
year = {2021},
pages = {14333},
}
• K. Anjali, A. S. Hejamadi, H. S. Karthik, S. Sahu, Sudha, and A. R. U. Devi, “Characterizing nonlocality of pure symmetric three-qubit states,” Quantum information processing, vol. 20, iss. 5, p. 187, 2021. doi:10.1007/s11128-021-03124-x
@article{anjali_characterizing_2021,
title = {Characterizing nonlocality of pure symmetric three-qubit states},
volume = {20},
issn = {1570-0755, 1573-1332},
doi = {10.1007/s11128-021-03124-x},
language = {en},
number = {5},
urldate = {2021-07-28},
journal = {Quantum Information Processing},
author = {Anjali, K. and Hejamadi, Akshata Shenoy and Karthik, H. S. and Sahu, S. and {Sudha} and Devi, A. R. Usha},
month = may,
year = {2021},
pages = {187},
}
• M. Banacki, R. R. Rodríguez, and P. Horodecki, “On the edge of the set of no-signaling assemblages,” Physical review a, vol. 103, iss. 5, p. 052434, 2021. doi:10.1103/PhysRevA.103.052434
Following recent advancements, we consider a scenario of multipartite postquantum steering and general no-signaling assemblages. We introduce the notion of the edge of the set of no-signaling assemblages and we present its characterization. Next, we use this concept to construct witnesses for no-signaling assemblages without an LHS model. Finally, in the simplest nontrivial case of steering with two untrusted subsystems, we discuss the possibility of quantum realization of assemblages on the edge. In particular, for three-qubit states, we obtain a no-go type result, which states that it is impossible to produce assemblage on the edge using measurements described by POVMs as long as the rank of a given state is greater than or equal to 3.
@article{banacki_edge_2021,
title = {On the edge of the set of no-signaling assemblages},
volume = {103},
issn = {2469-9926, 2469-9934},
url = {http://arxiv.org/abs/2008.12325},
doi = {10.1103/PhysRevA.103.052434},
abstract = {Following recent advancements, we consider a scenario of multipartite postquantum steering and general no-signaling assemblages. We introduce the notion of the edge of the set of no-signaling assemblages and we present its characterization. Next, we use this concept to construct witnesses for no-signaling assemblages without an LHS model. Finally, in the simplest nontrivial case of steering with two untrusted subsystems, we discuss the possibility of quantum realization of assemblages on the edge. In particular, for three-qubit states, we obtain a no-go type result, which states that it is impossible to produce assemblage on the edge using measurements described by POVMs as long as the rank of a given state is greater than or equal to 3.},
number = {5},
urldate = {2021-07-28},
journal = {Physical Review A},
author = {Banacki, Michał and Rodríguez, Ricard Ravell and Horodecki, Paweł},
month = may,
year = {2021},
note = {arXiv: 2008.12325},
keywords = {Quantum Physics},
pages = {052434},
}
• S. Cusumano and Ł. Rudnicki, “Comment on “Fluctuations in Extractable Work Bound the Charging Power of Quantum Batteries”,” Physical review letters, vol. 127, iss. 2, p. 028901, 2021. doi:10.1103/PhysRevLett.127.028901
In the abstract of [Phys. Rev. Lett. 125, 040601 (2020)] one can read that: [...] to have a nonzero rate of change of the extractable work, the state $\rho_\mathcal{W}$ of the battery cannot be an eigenstate of a “free energy operator”, defined by $\mathcal{F} = H_\mathcal{W} + \beta^{-1} \log \rho_\mathcal{W}$, where $H_\mathcal{W}$ is the Hamiltonian of the battery and $\beta$ is the inverse temperature [...]. Contrarily to what is presented below Eq. (17) of the paper, we observe that the above conclusion does not hold when the battery is subject to nonunitary dynamics.
@article{cusumano_comment_2021,
title = {Comment on "{Fluctuations} in {Extractable} {Work} {Bound} the {Charging} {Power} of {Quantum} {Batteries}"},
volume = {127},
issn = {0031-9007, 1079-7114},
url = {http://arxiv.org/abs/2102.05627},
doi = {10.1103/PhysRevLett.127.028901},
abstract = {In the abstract of{\textasciitilde}[Phys. Rev. Lett. \{{\textbackslash}bf 125\}, 040601 (2020)] one can read that: [...]\{{\textbackslash}it to have a nonzero rate of change of the extractable work, the state \${\textbackslash}rho\_{\textbackslash}mathcal\{W\}\$ of the battery cannot be an eigenstate of a "free energy operator", defined by \${\textbackslash}mathcal\{F\}=H\_{\textbackslash}mathcal\{W\}+{\textbackslash}beta{\textasciicircum}\{-1\}{\textbackslash}log {\textbackslash}rho\_{\textbackslash}mathcal\{W\}\$, where \$H\_{\textbackslash}mathcal\{W\}\$ is the Hamiltonian of the battery and \${\textbackslash}beta\$ is the inverse temperature\} [...]. Contrarily to what is presented below Eq.{\textasciitilde}(17) of the paper, we observe that the above conclusion does not hold when the battery is subject to nonunitary dynamics.},
number = {2},
urldate = {2021-07-28},
journal = {Physical Review Letters},
author = {Cusumano, Stefano and Rudnicki, Łukasz},
month = jul,
year = {2021},
note = {arXiv: 2102.05627},
keywords = {Quantum Physics},
pages = {028901},
}
• N. Miklin and M. Pawłowski, “Information Causality without concatenation,” Physical review letters, vol. 126, iss. 22, p. 220403, 2021. doi:10.1103/PhysRevLett.126.220403
Information Causality is a physical principle which states that the amount of randomly accessible data over a classical communication channel cannot exceed its capacity, even if the sender and the receiver have access to a source of nonlocal correlations. This principle can be used to bound the nonlocality of quantum mechanics without resorting to its full formalism, with a notable example of reproducing the Tsirelson’s bound of the Clauser-Horne-Shimony-Holt inequality. Despite being promising, the latter result found little generalization to other Bell inequalities because of the limitations imposed by the process of concatenation, in which several nonsignaling resources are put together to produce tighter bounds. In this work, we show that concatenation can be successfully replaced by limits on the communication channel capacity. It allows us to re-derive and, in some cases, significantly improve all the previously known results in a simpler manner and apply the Information Causality principle to previously unapproachable Bell scenarios.
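For context on the abstract above (standard material, not taken from the paper): the Clauser-Horne-Shimony-Holt (CHSH) inequality and its Tsirelson bound are:

```latex
% CHSH Bell expression for dichotomic (+/-1-valued) observables
% A_1, A_2 on one side and B_1, B_2 on the other:
\begin{equation}
  S = \langle A_1 B_1 \rangle + \langle A_1 B_2 \rangle
    + \langle A_2 B_1 \rangle - \langle A_2 B_2 \rangle
\end{equation}
% Local hidden-variable models satisfy |S| <= 2, while quantum
% mechanics allows at most the Tsirelson bound:
\begin{equation}
  |S| \le 2\sqrt{2}
\end{equation}
```

Recovering the $2\sqrt{2}$ bound from Information Causality alone, without the full quantum formalism, is the "notable example" the abstract refers to.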
@article{miklin_information_2021,
title = {Information {Causality} without concatenation},
volume = {126},
issn = {0031-9007, 1079-7114},
url = {http://arxiv.org/abs/2101.12710},
doi = {10.1103/PhysRevLett.126.220403},
abstract = {Information Causality is a physical principle which states that the amount of randomly accessible data over a classical communication channel cannot exceed its capacity, even if the sender and the receiver have access to a source of nonlocal correlations. This principle can be used to bound the nonlocality of quantum mechanics without resorting to its full formalism, with a notable example of reproducing the Tsirelson's bound of the Clauser-Horne-Shimony-Holt inequality. Despite being promising, the latter result found little generalization to other Bell inequalities because of the limitations imposed by the process of concatenation, in which several nonsignaling resources are put together to produce tighter bounds. In this work, we show that concatenation can be successfully replaced by limits on the communication channel capacity. It allows us to re-derive and, in some cases, significantly improve all the previously known results in a simpler manner and apply the Information Causality principle to previously unapproachable Bell scenarios.},
number = {22},
urldate = {2021-07-28},
journal = {Physical Review Letters},
author = {Miklin, Nikolai and Pawłowski, Marcin},
month = jun,
year = {2021},
note = {arXiv: 2101.12710},
keywords = {Quantum Physics},
pages = {220403},
}
• S. Chen, N. Miklin, C. Budroni, and Y. Chen, “Device-independent quantification of measurement incompatibility,” Physical Review Research, vol. 3, iss. 2, p. 023143, 2021. doi:10.1103/PhysRevResearch.3.023143
Incompatible measurements, i.e., measurements that cannot be simultaneously performed, are necessary to observe nonlocal correlations. It is natural to ask, e.g., how incompatible the measurements have to be to achieve a certain violation of a Bell inequality. In this work, we provide the direct link between Bell nonlocality and the quantification of measurement incompatibility. This includes quantifiers for both incompatible and genuine-multipartite incompatible measurements. Our method straightforwardly generalizes to include constraints on the system’s dimension (semi-device-independent approach) and on projective measurements, providing improved bounds on incompatibility quantifiers, and to include the prepare-and-measure scenario.
@article{chen_device-independent_2021,
title = {Device-independent quantification of measurement incompatibility},
volume = {3},
issn = {2643-1564},
url = {http://arxiv.org/abs/2010.08456},
doi = {10.1103/PhysRevResearch.3.023143},
abstract = {Incompatible measurements, i.e., measurements that cannot be simultaneously performed, are necessary to observe nonlocal correlations. It is natural to ask, e.g., how incompatible the measurements have to be to achieve a certain violation of a Bell inequality. In this work, we provide the direct link between Bell nonlocality and the quantification of measurement incompatibility. This includes quantifiers for both incompatible and genuine-multipartite incompatible measurements. Our method straightforwardly generalizes to include constraints on the system's dimension (semi-device-independent approach) and on projective measurements, providing improved bounds on incompatibility quantifiers, and to include the prepare-and-measure scenario.},
number = {2},
urldate = {2021-07-28},
journal = {Physical Review Research},
author = {Chen, Shin-Liang and Miklin, Nikolai and Budroni, Costantino and Chen, Yueh-Nan},
month = may,
year = {2021},
note = {arXiv: 2010.08456},
keywords = {Quantum Physics},
pages = {023143},
}
• M. Grassl, “Entanglement-Assisted Quantum Communication Beating the Quantum Singleton Bound,” Physical Review A, vol. 103, iss. 2, p. L020601, 2021. doi:10.1103/PhysRevA.103.L020601
Brun, Devetak, and Hsieh [Science 314, 436 (2006)] demonstrated that pre-shared entanglement between sender and receiver enables quantum communication protocols that have better parameters than schemes without the assistance of entanglement. Subsequently, the same authors derived a version of the so-called quantum Singleton bound that relates the parameters of the entanglement-assisted quantum-error correcting codes proposed by them. We present a new entanglement-assisted quantum communication scheme with parameters violating this bound in certain ranges.
@article{grassl_entanglement-assisted_2021,
title = {Entanglement-{Assisted} {Quantum} {Communication} {Beating} the {Quantum} {Singleton} {Bound}},
volume = {103},
issn = {2469-9926, 2469-9934},
url = {http://arxiv.org/abs/2007.01249},
doi = {10.1103/PhysRevA.103.L020601},
abstract = {Brun, Devetak, and Hsieh [Science 314, 436 (2006)] demonstrated that pre-shared entanglement between sender and receiver enables quantum communication protocols that have better parameters than schemes without the assistance of entanglement. Subsequently, the same authors derived a version of the so-called quantum Singleton bound that relates the parameters of the entanglement-assisted quantum-error correcting codes proposed by them. We present a new entanglement-assisted quantum communication scheme with parameters violating this bound in certain ranges.},
number = {2},
urldate = {2021-07-28},
journal = {Physical Review A},
author = {Grassl, Markus},
month = feb,
year = {2021},
note = {arXiv: 2007.01249},
keywords = {Quantum Physics, Computer Science - Information Theory},
pages = {L020601},
}
• W. Song, Y. Lim, H. Kwon, G. Adesso, M. Wieśniak, M. Pawłowski, J. Kim, and J. Bang, “Quantum secure learning with classical samples,” Physical Review A, vol. 103, iss. 4, p. 042409, 2021. doi:10.1103/PhysRevA.103.042409
Studies addressing the question “Can a learner complete the learning securely?” have recently been spurred from the standpoints of fundamental theory and potential applications. In the relevant context of this question, we present a classical-quantum hybrid sampling protocol and define a security condition that allows only legitimate learners to prepare a finite set of samples that guarantees the success of the learning; the security condition excludes intruders. We do this by combining our security concept with the bound of the so-called probably approximately correct (PAC) learning. We show that while the lower bound on the learning samples guarantees PAC learning, an upper bound can be derived to rule out adversarial learners. Such a secure learning condition is appealing, because it is defined only by the size of samples required for the successful learning and is independent of the algorithm employed. Notably, the security stems from the fundamental quantum no-broadcasting principle. No such condition can thus occur in any classical regime, where learning samples can be copied. Owing to the hybrid architecture, our scheme also offers a practical advantage for implementation in noisy intermediate-scale quantum devices.
@article{song_quantum_2021,
title = {Quantum secure learning with classical samples},
volume = {103},
issn = {2469-9926, 2469-9934},
url = {http://arxiv.org/abs/1912.10594},
doi = {10.1103/PhysRevA.103.042409},
abstract = {Studies addressing the question "Can a learner complete the learning securely?" have recently been spurred from the standpoints of fundamental theory and potential applications. In the relevant context of this question, we present a classical-quantum hybrid sampling protocol and define a security condition that allows only legitimate learners to prepare a finite set of samples that guarantees the success of the learning; the security condition excludes intruders. We do this by combining our security concept with the bound of the so-called probably approximately correct (PAC) learning. We show that while the lower bound on the learning samples guarantees PAC learning, an upper bound can be derived to rule out adversarial learners. Such a secure learning condition is appealing, because it is defined only by the size of samples required for the successful learning and is independent of the algorithm employed. Notably, the security stems from the fundamental quantum no-broadcasting principle. No such condition can thus occur in any classical regime, where learning samples can be copied. Owing to the hybrid architecture, our scheme also offers a practical advantage for implementation in noisy intermediate-scale quantum devices.},
number = {4},
urldate = {2021-07-28},
journal = {Physical Review A},
author = {Song, Wooyeong and Lim, Youngrong and Kwon, Hyukjoon and Adesso, Gerardo and Wieśniak, Marcin and Pawłowski, Marcin and Kim, Jaewan and Bang, Jeongho},
month = apr,
year = {2021},
note = {arXiv: 1912.10594},
keywords = {Quantum Physics},
pages = {042409},
}
• P. Lipka-Bartosik, P. Mazurek, and M. Horodecki, “Second law of thermodynamics for batteries with vacuum state,” Quantum, vol. 5, p. 408, 2021. doi:10.22331/q-2021-03-10-408
In stochastic thermodynamics work is a random variable whose average is bounded by the change in the free energy of the system. In most treatments, however, the work reservoir that absorbs this change is either tacitly assumed or modelled using unphysical systems with unbounded Hamiltonians (i.e. the ideal weight). In this work we describe the consequences of introducing the ground state of the battery and hence – of breaking its translational symmetry. The most striking consequence of this shift is the fact that the Jarzynski identity is replaced by a family of inequalities. Using these inequalities we obtain corrections to the second law of thermodynamics which vanish exponentially with the distance of the initial state of the battery to the bottom of its spectrum. Finally, we study an exemplary thermal operation which realizes the approximate Landauer erasure and demonstrate the consequences which arise when the ground state of the battery is explicitly introduced. In particular, we show that occupation of the vacuum state of any physical battery sets a lower bound on fluctuations of work, while batteries without vacuum state allow for fluctuation-free erasure.
@article{lipka-bartosik_second_2021-1,
title = {Second law of thermodynamics for batteries with vacuum state},
volume = {5},
issn = {2521-327X},
url = {http://arxiv.org/abs/1905.12072},
doi = {10.22331/q-2021-03-10-408},
abstract = {In stochastic thermodynamics work is a random variable whose average is bounded by the change in the free energy of the system. In most treatments, however, the work reservoir that absorbs this change is either tacitly assumed or modelled using unphysical systems with unbounded Hamiltonians (i.e. the ideal weight). In this work we describe the consequences of introducing the ground state of the battery and hence -- of breaking its translational symmetry. The most striking consequence of this shift is the fact that the Jarzynski identity is replaced by a family of inequalities. Using these inequalities we obtain corrections to the second law of thermodynamics which vanish exponentially with the distance of the initial state of the battery to the bottom of its spectrum. Finally, we study an exemplary thermal operation which realizes the approximate Landauer erasure and demonstrate the consequences which arise when the ground state of the battery is explicitly introduced. In particular, we show that occupation of the vacuum state of any physical battery sets a lower bound on fluctuations of work, while batteries without vacuum state allow for fluctuation-free erasure.},
urldate = {2021-07-28},
journal = {Quantum},
author = {Lipka-Bartosik, Patryk and Mazurek, Paweł and Horodecki, Michał},
month = mar,
year = {2021},
note = {arXiv: 1905.12072},
keywords = {Quantum Physics},
pages = {408},
}
• M. Markiewicz, M. Karczewski, and P. Kurzynski, “Borromean states in discrete-time quantum walks,” Quantum, vol. 5, p. 523, 2021. doi:10.22331/q-2021-08-16-523
@Article{Markiewicz2021borromeanstatesin,
author = {Markiewicz, Marcin and Karczewski, Marcin and Kurzynski, Pawel},
journal = {{Quantum}},
title = {Borromean states in discrete-time quantum walks},
year = {2021},
issn = {2521-327X},
month = aug,
pages = {523},
volume = {5},
doi = {10.22331/q-2021-08-16-523},
publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}},
url = {https://doi.org/10.22331/q-2021-08-16-523},
}
• R. Alicki, D. Gelbwaser-Klimovsky, and A. Jenkins, “The problem of engines in statistical physics,” Entropy, vol. 23, iss. 8, 2021. doi:10.3390/e23081095
Engines are open systems that can generate work cyclically at the expense of an external disequilibrium. They are ubiquitous in nature and technology, but the course of mathematical physics over the last 300 years has tended to make their dynamics in time a theoretical blind spot. This has hampered the usefulness of statistical mechanics applied to active systems, including living matter. We argue that recent advances in the theory of open quantum systems, coupled with renewed interest in understanding how active forces result from positive feedback between different macroscopic degrees of freedom in the presence of dissipation, point to a more realistic description of autonomous engines. We propose a general conceptualization of an engine that helps clarify the distinction between its heat and work outputs. Based on this, we show how the external loading force and the thermal noise may be incorporated into the relevant equations of motion. This modifies the usual Fokker–Planck and Langevin equations, offering a thermodynamically complete formulation of the irreversible dynamics of simple oscillating and rotating engines.
@Article{AlickiAugust2021,
author = {Alicki, Robert and Gelbwaser-Klimovsky, David and Jenkins, Alejandro},
journal = {Entropy},
title = {The Problem of Engines in Statistical Physics},
year = {2021},
issn = {1099-4300},
number = {8},
volume = {23},
abstract = {Engines are open systems that can generate work cyclically at the expense of an external disequilibrium. They are ubiquitous in nature and technology, but the course of mathematical physics over the last 300 years has tended to make their dynamics in time a theoretical blind spot. This has hampered the usefulness of statistical mechanics applied to active systems, including living matter. We argue that recent advances in the theory of open quantum systems, coupled with renewed interest in understanding how active forces result from positive feedback between different macroscopic degrees of freedom in the presence of dissipation, point to a more realistic description of autonomous engines. We propose a general conceptualization of an engine that helps clarify the distinction between its heat and work outputs. Based on this, we show how the external loading force and the thermal noise may be incorporated into the relevant equations of motion. This modifies the usual Fokker–Planck and Langevin equations, offering a thermodynamically complete formulation of the irreversible dynamics of simple oscillating and rotating engines.},
article-number = {1095},
doi = {10.3390/e23081095},
pubmedid = {34441235},
url = {https://www.mdpi.com/1099-4300/23/8/1095},
}
• M. Stankiewicz, K. Horodecki, O. Sakarya, and D. Makowiec, “Private weakly-random sequences from human heart rate for quantum amplification,” Entropy, vol. 23, iss. 9, p. 1182, 2021. doi:10.3390/e23091182
We investigate whether the heart rate can be treated as a semi-random source with the aim of amplification by quantum devices. We use a semi-random source model called $\epsilon$-Santha-Vazirani source, which can be amplified via quantum protocols to obtain fully private random sequence. We analyze time intervals between consecutive heartbeats obtained from Holter electrocardiogram (ECG) recordings of people of different sex and age. We propose several transformations of the original time series into binary sequences. We have performed different statistical randomness tests and estimated quality parameters. We find that the heart can be treated as good enough, and private by its nature, source of randomness, that every human possesses. As such, in principle it can be used as input to quantum device-independent randomness amplification protocols. The properly interpreted $\epsilon$ parameter can potentially serve as a new characteristic of the human’s heart from the perspective of medicine.
@Article{Stankiewicz2021,
author = {Stankiewicz, Maciej and Horodecki, Karol and Sakarya, Omer and Makowiec, Danuta},
journal = {Entropy},
title = {Private Weakly-Random Sequences from Human Heart Rate for Quantum Amplification},
year = {2021},
month = sep,
number = {9},
pages = {1182},
volume = {23},
abstract = {We investigate whether the heart rate can be treated as a semi-random source with the aim of amplification by quantum devices. We use a semi-random source model called $\epsilon$-Santha-Vazirani source, which can be amplified via quantum protocols to obtain fully private random sequence. We analyze time intervals between consecutive heartbeats obtained from Holter electrocardiogram (ECG) recordings of people of different sex and age. We propose several transformations of the original time series into binary sequences. We have performed different statistical randomness tests and estimated quality parameters. We find that the heart can be treated as good enough, and private by its nature, source of randomness, that every human possesses. As such, in principle it can be used as input to quantum device-independent randomness amplification protocols. The properly interpreted $\epsilon$ parameter can potentially serve as a new characteristic of the human's heart from the perspective of medicine.},
archiveprefix = {arXiv},
doi = {10.3390/e23091182},
eprint = {2107.14630},
keywords = {Quantum Physics},
primaryclass = {quant-ph},
}
• M. Wieśniak, “Symmetrized persistency of Bell correlations for Dicke states and GHZ-based mixtures,” Scientific Reports, vol. 11, p. 14333, 2021. doi:10.1038/s41598-021-93786-5
Quantum correlations, in particular those, which enable to violate a Bell inequality, open a way to advantage in certain communication tasks. However, the main difficulty in harnessing quantumness is its fragility to, e.g, noise or loss of particles. We study the persistency of Bell correlations of GHZ based mixtures and Dicke states. For the former, we consider quantum communication complexity reduction (QCCR) scheme, and propose new Bell inequalities (BIs), which can be used in that scheme for higher persistency in the limit of large number of particles N. In case of Dicke states, we show that persistency can reach 0.482N, significantly more than reported in previous studies.
@Article{Wiesniak2021,
author = {Wie{\'s}niak, Marcin},
journal = {Scientific Reports},
title = {Symmetrized persistency of Bell correlations for Dicke states and GHZ-based mixtures},
year = {2021},
month = jan,
pages = {14333},
volume = {11},
abstract = {Quantum correlations, in particular those, which enable to violate a Bell inequality, open a way to advantage in certain communication tasks. However, the main difficulty in harnessing quantumness is its fragility to, e.g, noise or loss of particles. We study the persistency of Bell correlations of GHZ based mixtures and Dicke states. For the former, we consider quantum communication complexity reduction (QCCR) scheme, and propose new Bell inequalities (BIs), which can be used in that scheme for higher persistency in the limit of large number of particles N. In case of Dicke states, we show that persistency can reach 0.482N, significantly more than reported in previous studies.},
archiveprefix = {arXiv},
doi = {10.1038/s41598-021-93786-5},
eid = {14333},
eprint = {2102.08141},
keywords = {Quantum Physics},
primaryclass = {quant-ph},
}
• A. Chaturvedi, M. Farkas, and V. J. Wright, “Characterising and bounding the set of quantum behaviours in contextuality scenarios,” Quantum, vol. 5, p. 484, 2021. doi:10.22331/q-2021-06-29-484
The predictions of quantum theory resist generalised noncontextual explanations. In addition to the foundational relevance of this fact, the particular extent to which quantum theory violates noncontextuality limits available quantum advantage in communication and information processing. In the first part of this work, we formally define contextuality scenarios via prepare-and-measure experiments, along with the polytope of general contextual behaviours containing the set of quantum contextual behaviours. This framework allows us to recover several properties of set of quantum behaviours in these scenarios, including contextuality scenarios and associated noncontextuality inequalities that require for their violation the individual quantum preparation and measurement procedures to be mixed states and unsharp measurements. With the framework in place, we formulate novel semidefinite programming relaxations for bounding these sets of quantum contextual behaviours. Most significantly, to circumvent the inadequacy of pure states and projective measurements in contextuality scenarios, we present a novel unitary operator based semidefinite relaxation technique. We demonstrate the efficacy of these relaxations by obtaining tight upper bounds on the quantum violation of several noncontextuality inequalities and identifying novel maximally contextual quantum strategies. To further illustrate the versatility of these relaxations, we demonstrate $\textit{monogamy of preparation contextuality}$ in a tripartite setting, and present a secure semi-device independent quantum key distribution scheme powered by quantum advantage in parity oblivious random access codes.
@Article{Chaturvedi2021,
author = {Anubhav Chaturvedi and Máté Farkas and Victoria J Wright},
journal = {Quantum},
title = {Characterising and bounding the set of quantum behaviours in contextuality scenarios},
year = {2021},
issn = {2521-327X},
month = jun,
pages = {484},
volume = {5},
abstract = {The predictions of quantum theory resist generalised noncontextual explanations. In addition to the foundational relevance of this fact, the particular extent to which quantum theory violates noncontextuality limits available quantum advantage in communication and information processing. In the first part of this work, we formally define contextuality scenarios via prepare-and-measure experiments, along with the polytope of general contextual behaviours containing the set of quantum contextual behaviours. This framework allows us to recover several properties of set of quantum behaviours in these scenarios, including contextuality scenarios and associated noncontextuality inequalities that require for their violation the individual quantum preparation and measurement procedures to be mixed states and unsharp measurements. With the framework in place, we formulate novel semidefinite programming relaxations for bounding these sets of quantum contextual behaviours. Most significantly, to circumvent the inadequacy of pure states and projective measurements in contextuality scenarios, we present a novel unitary operator based semidefinite relaxation technique. We demonstrate the efficacy of these relaxations by obtaining tight upper bounds on the quantum violation of several noncontextuality inequalities and identifying novel maximally contextual quantum strategies. To further illustrate the versatility of these relaxations, we demonstrate $\textit{monogamy of preparation contextuality}$ in a tripartite setting, and present a secure semi-device independent quantum key distribution scheme powered by quantum advantage in parity oblivious random access codes.},
doi = {10.22331/q-2021-06-29-484},
publisher = {Verein zur Förderung des Open Access Publizierens in den Quantenwissenschaften},
url = {https://quantum-journal.org/papers/q-2021-06-29-484/pdf/},
}
• S. Cusumano and Ł. Rudnicki, “Thermodynamics of reduced state of the field,” Entropy, vol. 23, p. 1198, 2021. doi:10.3390/e23091198
@Article{Cusumano,
author = {Stefano Cusumano and Łukasz Rudnicki},
journal = {Entropy},
title = {Thermodynamics of Reduced State of the Field},
year = {2021},
issn = {1099-4300},
pages = {1198},
volume = {23},
doi = {10.3390/e23091198},
}
• Ł. Rudnicki, “Quantum speed limit and geometric measure of entanglement,” Physical Review A, vol. 104, 2021. doi:10.1103/physreva.104.032417
@Article{Rudnicki,
author = {Łukasz Rudnicki},
journal = {Physical Review A},
title = {Quantum speed limit and geometric measure of entanglement},
year = {2021},
issn = {2469-9926},
volume = {104},
doi = {10.1103/physreva.104.032417},
}
• A. Barasiński, A. Černoch, W. Laskowski, K. Lemr, T. Vértesi, and J. Soubusta, “Experimentally friendly approach towards nonlocal correlations in multisetting N-partite Bell scenarios,” Quantum, vol. 5, p. 430, 2021. doi:10.22331/q-2021-04-14-430
In this work, we study a recently proposed operational measure of nonlocality by Fonseca and Parisio [Phys. Rev. A 92, 030101(R) (2015)] which describes the probability of violation of local realism under randomly sampled observables, and the strength of such violation as described by resistance to white noise admixture. While our knowledge concerning these quantities is well established from a theoretical point of view, the experimental counterpart is a considerably harder task and very little has been done in this field. It is caused by the lack of complete knowledge about the facets of the local polytope required for the analysis. In this paper, we propose a simple procedure towards experimentally determining both quantities for N-qubit pure states, based on the incomplete set of tight Bell inequalities. We show that the imprecision arising from this approach is of similar magnitude as the potential measurement errors. We also show that even with both a randomly chosen N-qubit pure state and randomly chosen measurement bases, a violation of local realism can be detected experimentally almost 100% of the time. Among other applications, our work provides a feasible alternative for the witnessing of genuine multipartite entanglement without aligned reference frames.
@Article{Barasinski2021,
author = {Barasiński, Artur and Černoch, Antonín and Laskowski, Wiesław and Lemr, Karel and Vértesi, Tamás and Soubusta, Jan},
journal = {Quantum},
title = {Experimentally friendly approach towards nonlocal correlations in multisetting {N}-partite {Bell} scenarios},
year = {2021},
issn = {2521-327X},
month = apr,
pages = {430},
volume = {5},
abstract = {In this work, we study a recently proposed operational measure of nonlocality by Fonseca and Parisio [Phys. Rev. A 92, 030101(R) (2015)] which describes the probability of violation of local realism under randomly sampled observables, and the strength of such violation as described by resistance to white noise admixture. While our knowledge concerning these quantities is well established from a theoretical point of view, the experimental counterpart is a considerably harder task and very little has been done in this field. It is caused by the lack of complete knowledge about the facets of the local polytope required for the analysis. In this paper, we propose a simple procedure towards experimentally determining both quantities for N-qubit pure states, based on the incomplete set of tight Bell inequalities. We show that the imprecision arising from this approach is of similar magnitude as the potential measurement errors. We also show that even with both a randomly chosen N-qubit pure state and randomly chosen measurement bases, a violation of local realism can be detected experimentally almost 100\% of the time. Among other applications, our work provides a feasible alternative for the witnessing of genuine multipartite entanglement without aligned reference frames.},
doi = {10.22331/q-2021-04-14-430},
language = {en},
url = {https://quantum-journal.org/papers/q-2021-04-14-430/},
urldate = {2021-10-11},
}
• P. Blasiak, E. Borsuk, M. Markiewicz, and Y. Kim, “Efficient linear-optical generation of a multipartite W state,” Physical Review A, vol. 104, iss. 2, p. 023701, 2021. doi:10.1103/PhysRevA.104.023701
@Article{Blasiak2021,
author = {Blasiak, Pawel and Borsuk, Ewa and Markiewicz, Marcin and Kim, Yong-Su},
journal = {Physical Review A},
title = {Efficient linear-optical generation of a multipartite {W} state},
year = {2021},
issn = {2469-9926, 2469-9934},
month = aug,
number = {2},
pages = {023701},
volume = {104},
doi = {10.1103/PhysRevA.104.023701},
language = {en},
urldate = {2021-10-11},
}
• E. Aurell, M. Eckstein, and P. Horodecki, “Quantum Black Holes as Solvents,” Foundations of Physics, vol. 51, iss. 2, p. 54, 2021. doi:10.1007/s10701-021-00456-7
Almost all of the entropy in the universe is in the form of Bekenstein–Hawking (BH) entropy of super-massive black holes. This entropy, if it satisfies Boltzmann’s equation S = log N, hence represents almost all the accessible phase space of the Universe, somehow associated to objects which themselves fill out a very small fraction of ordinary three-dimensional space. Although time scales are very long, it is believed that black holes will eventually evaporate by emitting Hawking radiation, which is thermal when counted mode by mode. A pure quantum state collapsing to a black hole will hence eventually re-emerge as a state with strictly positive entropy, which constitutes the famous black hole information paradox. Expanding on a remark by Hawking we posit that BH entropy is a thermodynamic entropy, which must be distinguished from information-theoretic entropy. The paradox can then be explained by information return in Hawking radiation. The novel perspective advanced here is that if BH entropy counts the number of accessible physical states in a quantum black hole, then the paradox can be seen as an instance of the fundamental problem of statistical mechanics. We suggest a specific analogy to the increase of the entropy in a solvation process. We further show that the huge phase volume (N), which must be made available to the universe in a gravitational collapse, cannot originate from the entanglement between ordinary matter and/or radiation inside and outside the black hole. We argue that, instead, the quantum degrees of freedom of the gravitational field must get activated near the singularity, resulting in a final state of the ‘entangled entanglement’ form involving both matter and gravity.
@Article{Aurell2021,
author = {Aurell, Erik and Eckstein, Michał and Horodecki, Paweł},
journal = {Foundations of Physics},
title = {Quantum {Black} {Holes} as {Solvents}},
year = {2021},
issn = {0015-9018, 1572-9516},
month = apr,
number = {2},
pages = {54},
volume = {51},
abstract = {Almost all of the entropy in the universe is in the form of Bekenstein--Hawking (BH) entropy of super-massive black holes. This entropy, if it satisfies Boltzmann's equation $S=\log\mathcal{N}$, hence represents almost all the accessible phase space of the Universe, somehow associated to objects which themselves fill out a very small fraction of ordinary three-dimensional space. Although time scales are very long, it is believed that black holes will eventually evaporate by emitting Hawking radiation, which is thermal when counted mode by mode. A pure quantum state collapsing to a black hole will hence eventually re-emerge as a state with strictly positive entropy, which constitutes the famous black hole information paradox. Expanding on a remark by Hawking we posit that BH entropy is a thermodynamic entropy, which must be distinguished from information-theoretic entropy. The paradox can then be explained by information return in Hawking radiation. The novel perspective advanced here is that if BH entropy counts the number of accessible physical states in a quantum black hole, then the paradox can be seen as an instance of the fundamental problem of statistical mechanics. We suggest a specific analogy to the increase of the entropy in a solvation process. We further show that the huge phase volume ($\mathcal{N}$), which must be made available to the universe in a gravitational collapse, cannot originate from the entanglement between ordinary matter and/or radiation inside and outside the black hole. We argue that, instead, the quantum degrees of freedom of the gravitational field must get activated near the singularity, resulting in a final state of the `entangled entanglement' form involving both matter and gravity.},
doi = {10.1007/s10701-021-00456-7},
language = {en},
urldate = {2021-10-11},
}
• C. M. Scandolo, R. Salazar, J. K. Korbicz, and P. Horodecki, “Universal structure of objective states in all fundamental causal theories,” Physical Review Research, vol. 3, 2021. doi:10.1103/physrevresearch.3.033148
@Article{Scandolo,
author = {Carlo Maria Scandolo and Roberto Salazar and Jarosław K. Korbicz and Paweł Horodecki},
journal = {Physical Review Research},
title = {Universal structure of objective states in all fundamental causal theories},
year = {2021},
issn = {2643-1564},
volume = {3},
doi = {10.1103/physrevresearch.3.033148},
}
• M. Markiewicz and J. Przewocki, “On construction of finite averaging sets for SL(2,C) via its Cartan decomposition,” Journal of Physics A: Mathematical and Theoretical, vol. 54, p. 235302, 2021. doi:10.1088/1751-8121/abfa44
[BibTeX]
@Article{Markiewicz2021,
author = {Marcin Markiewicz and Janusz Przewocki},
title = {On construction of finite averaging sets for SL(2,C) via its Cartan decomposition},
year = {2021},
issn = {1751-8113},
pages = {235302},
volume = {54},
doi = {10.1088/1751-8121/abfa44},
}
2020
• P. Mazurek, M. Farkas, A. Grudka, M. Horodecki, and M. Studziński, “Quantum error-correction codes and absolutely maximally entangled states,” Physical Review A, vol. 101, iss. 4, p. 042305, 2020. doi:10.1103/PhysRevA.101.042305
@article{mazurek_quantum_2020,
title = {Quantum error-correction codes and absolutely maximally entangled states},
volume = {101},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.101.042305},
language = {en},
number = {4},
urldate = {2020-04-22},
journal = {Physical Review A},
author = {Mazurek, Paweł and Farkas, Máté and Grudka, Andrzej and Horodecki, Michał and Studziński, Michał},
month = apr,
year = {2020},
pages = {042305},
}
• K. Horodecki and M. Stankiewicz, “Semi-device-independent quantum money,” New Journal of Physics, vol. 22, iss. 2, p. 023007, 2020. doi:10.1088/1367-2630/ab6872
@article{horodecki_semi-device-independent_2020,
title = {Semi-device-independent quantum money},
volume = {22},
issn = {1367-2630},
url = {https://iopscience.iop.org/article/10.1088/1367-2630/ab6872},
doi = {10.1088/1367-2630/ab6872},
number = {2},
urldate = {2020-04-22},
journal = {New Journal of Physics},
author = {Horodecki, Karol and Stankiewicz, Maciej},
month = feb,
year = {2020},
pages = {023007},
}
• T. Linowski, G. Rajchel-Mieldzioć, and K. Życzkowski, “Entangling power of multipartite unitary gates,” Journal of Physics A: Mathematical and Theoretical, vol. 53, iss. 12, p. 125303, 2020. doi:10.1088/1751-8121/ab749a
@article{linowski_entangling_2020,
title = {Entangling power of multipartite unitary gates},
volume = {53},
issn = {1751-8113, 1751-8121},
url = {https://iopscience.iop.org/article/10.1088/1751-8121/ab749a},
doi = {10.1088/1751-8121/ab749a},
number = {12},
urldate = {2020-04-22},
journal = {Journal of Physics A: Mathematical and Theoretical},
author = {Linowski, Tomasz and Rajchel-Mieldzioć, Grzegorz and Życzkowski, Karol},
month = mar,
year = {2020},
pages = {125303},
}
• M. Pawłowski, “Entropy in Foundations of Quantum Physics,” Entropy, vol. 22, iss. 3, p. 371, 2020. doi:10.3390/e22030371
Entropy can be used in studies on foundations of quantum physics in many different ways, each of them using different properties of this mathematical object […]
@article{pawlowski_entropy_2020,
title = {Entropy in {Foundations} of {Quantum} {Physics}},
volume = {22},
issn = {1099-4300},
url = {https://www.mdpi.com/1099-4300/22/3/371},
doi = {10.3390/e22030371},
abstract = {Entropy can be used in studies on foundations of quantum physics in many different ways, each of them using different properties of this mathematical object [...]},
language = {en},
number = {3},
urldate = {2020-04-22},
journal = {Entropy},
author = {Pawłowski, Marcin},
month = mar,
year = {2020},
pages = {371},
}
• M. Smania, P. Mironowicz, M. Nawareg, M. Pawłowski, A. Cabello, and M. Bourennane, “Experimental certification of an informationally complete quantum measurement in a device-independent protocol,” Optica, vol. 7, iss. 2, p. 123, 2020. doi:10.1364/OPTICA.377959
@article{smania_experimental_2020,
title = {Experimental certification of an informationally complete quantum measurement in a device-independent protocol},
volume = {7},
issn = {2334-2536},
url = {https://www.osapublishing.org/abstract.cfm?URI=optica-7-2-123},
doi = {10.1364/OPTICA.377959},
language = {en},
number = {2},
urldate = {2020-04-22},
journal = {Optica},
author = {Smania, Massimiliano and Mironowicz, Piotr and Nawareg, Mohamed and Pawłowski, Marcin and Cabello, Adán and Bourennane, Mohamed},
month = feb,
year = {2020},
pages = {123},
}
• S. Milz, F. Sakuldee, F. A. Pollock, and K. Modi, “Kolmogorov extension theorem for (quantum) causal modelling and general probabilistic theories,” Quantum, vol. 4, p. 255, 2020. doi:10.22331/q-2020-04-20-255
In classical physics, the Kolmogorov extension theorem lays the foundation for the theory of stochastic processes. It has been known for a long time that, in its original form, this theorem does not hold in quantum mechanics. More generally, it does not hold in any theory of stochastic processes – classical, quantum or beyond – that does not just describe passive observations, but allows for active interventions. Such processes form the basis of the study of causal modelling across the sciences, including in the quantum domain. To date, these frameworks have lacked a conceptual underpinning similar to that provided by Kolmogorov’s theorem for classical stochastic processes. We prove a generalized extension theorem that applies to all theories of stochastic processes, putting them on equally firm mathematical ground as their classical counterpart. Additionally, we show that quantum causal modelling and quantum stochastic processes are equivalent. This provides the correct framework for the description of experiments involving continuous control, which play a crucial role in the development of quantum technologies. Furthermore, we show that the original extension theorem follows from the generalized one in the correct limit, and elucidate how a comprehensive understanding of general stochastic processes allows one to unambiguously define the distinction between those that are classical and those that are quantum.
@Article{milz_kolmogorov_2020,
author = {Milz, Simon and Sakuldee, Fattah and Pollock, Felix A. and Modi, Kavan},
journal = {Quantum},
title = {Kolmogorov extension theorem for (quantum) causal modelling and general probabilistic theories},
year = {2020},
issn = {2521-327X},
month = apr,
pages = {255},
volume = {4},
abstract = {In classical physics, the Kolmogorov extension theorem lays the foundation for the theory of stochastic processes. It has been known for a long time that, in its original form, this theorem does not hold in quantum mechanics. More generally, it does not hold in any theory of stochastic processes -- classical, quantum or beyond -- that does not just describe passive observations, but allows for active interventions. Such processes form the basis of the study of causal modelling across the sciences, including in the quantum domain. To date, these frameworks have lacked a conceptual underpinning similar to that provided by Kolmogorov’s theorem for classical stochastic processes. We prove a generalized extension theorem that applies to all theories of stochastic processes, putting them on equally firm mathematical ground as their classical counterpart. Additionally, we show that quantum causal modelling and quantum stochastic processes are equivalent. This provides the correct framework for the description of experiments involving continuous control, which play a crucial role in the development of quantum technologies. Furthermore, we show that the original extension theorem follows from the generalized one in the correct limit, and elucidate how a comprehensive understanding of general stochastic processes allows one to unambiguously define the distinction between those that are classical and those that are quantum.},
doi = {10.22331/q-2020-04-20-255},
language = {en},
url = {https://quantum-journal.org/papers/q-2020-04-20-255/},
urldate = {2020-04-22},
}
• K. Szczygielski and R. Alicki, “On Howland time-independent formulation of CP-divisible quantum evolutions,” Reviews in Mathematical Physics, p. 2050021, 2020. doi:10.1142/S0129055X2050021X
We extend Howland time-independent formalism to the case of completely positive and trace preserving dynamics of finite-dimensional open quantum systems governed by periodic, time-dependent Lindbladian in Weak Coupling Limit, expanding our result from previous papers. We propose the Bochner space of periodic, square integrable matrix-valued functions, as well as its tensor product representation, as the generalized space of states within the time-independent formalism. We examine some densely defined operators on this space, together with their Fourier-like expansions and address some problems related to their convergence by employing general results on Banach space-valued Fourier series, such as the generalized Carleson–Hunt theorem. We formulate Markovian dynamics in the generalized space of states by constructing appropriate time-independent Lindbladian in standard (Lindblad–Gorini–Kossakowski–Sudarshan) form, as well as one-parameter semigroup of bounded evolution maps. We show their similarity with Markovian generators and dynamical maps defined on matrix space, i.e. the generator still possesses a standard form (extended by closed perturbation) and the resulting semigroup is also completely positive, trace preserving and a contraction.
@article{szczygielski_howland_2020,
title = {On {Howland} time-independent formulation of {CP}-divisible quantum evolutions},
issn = {0129-055X, 1793-6659},
url = {https://www.worldscientific.com/doi/abs/10.1142/S0129055X2050021X},
doi = {10.1142/S0129055X2050021X},
abstract = {We extend Howland time-independent formalism to the case of completely positive and trace preserving dynamics of finite-dimensional open quantum systems governed by periodic, time-dependent Lindbladian in Weak Coupling Limit, expanding our result from previous papers. We propose the Bochner space of periodic, square integrable matrix-valued functions, as well as its tensor product representation, as the generalized space of states within the time-independent formalism. We examine some densely defined operators on this space, together with their Fourier-like expansions and address some problems related to their convergence by employing general results on Banach space-valued Fourier series, such as the generalized Carleson–Hunt theorem. We formulate Markovian dynamics in the generalized space of states by constructing appropriate time-independent Lindbladian in standard (Lindblad–Gorini–Kossakowski–Sudarshan) form, as well as one-parameter semigroup of bounded evolution maps. We show their similarity with Markovian generators and dynamical maps defined on matrix space, i.e. the generator still possesses a standard form (extended by closed perturbation) and the resulting semigroup is also completely positive, trace preserving and a contraction.},
language = {en},
urldate = {2020-05-13},
journal = {Reviews in Mathematical Physics},
author = {Szczygielski, Krzysztof and Alicki, Robert},
month = jan,
year = {2020},
pages = {2050021},
}
• M. Rosicka, P. Mazurek, A. Grudka, and M. Horodecki, “Generalized XOR non-locality games with graph description on a square lattice,” Journal of Physics A: Mathematical and Theoretical, vol. 53, iss. 26, p. 265302, 2020. doi:10.1088/1751-8121/ab8f3e
@article{rosicka_generalized_2020,
title = {Generalized {XOR} non-locality games with graph description on a square lattice},
volume = {53},
issn = {1751-8113, 1751-8121},
url = {https://iopscience.iop.org/article/10.1088/1751-8121/ab8f3e},
doi = {10.1088/1751-8121/ab8f3e},
number = {26},
urldate = {2020-06-24},
journal = {Journal of Physics A: Mathematical and Theoretical},
author = {Rosicka, Monika and Mazurek, Paweł and Grudka, Andrzej and Horodecki, Michał},
month = jul,
year = {2020},
pages = {265302},
}
• F. Huber and M. Grassl, “Quantum Codes of Maximal Distance and Highly Entangled Subspaces,” Quantum, vol. 4, p. 284, 2020. doi:10.22331/q-2020-06-18-284
@Article{huber_quantum_2020,
author = {Huber, Felix and Grassl, Markus},
journal = {Quantum},
title = {Quantum {Codes} of {Maximal} {Distance} and {Highly} {Entangled} {Subspaces}},
year = {2020},
issn = {2521-327X},
month = jun,
pages = {284},
volume = {4},
doi = {10.22331/q-2020-06-18-284},
language = {en},
url = {https://quantum-journal.org/papers/q-2020-06-18-284/},
urldate = {2020-06-24},
}
• P. Skrzypczyk, M. J. Hoban, A. B. Sainz, and N. Linden, “Complexity of compatible measurements,” Physical Review Research, vol. 2, iss. 2, p. 023292, 2020. doi:10.1103/PhysRevResearch.2.023292
@article{skrzypczyk_complexity_2020,
title = {Complexity of compatible measurements},
volume = {2},
issn = {2643-1564},
doi = {10.1103/PhysRevResearch.2.023292},
language = {en},
number = {2},
urldate = {2020-06-24},
journal = {Physical Review Research},
author = {Skrzypczyk, Paul and Hoban, Matty J. and Sainz, Ana Belén and Linden, Noah},
month = jun,
year = {2020},
pages = {023292},
}
• A. Tavakoli, M. Żukowski, and Č. Brukner, “Does violation of a Bell inequality always imply quantum advantage in a communication complexity problem?,” Quantum, vol. 4, p. 316, 2020. doi:10.22331/q-2020-09-07-316
Quantum correlations which violate a Bell inequality are presumed to power better-than-classical protocols for solving communication complexity problems (CCPs). How general is this statement? We show that violations of correlation-type Bell inequalities allow advantages in CCPs, when communication protocols are tailored to emulate the Bell no-signaling constraint (by not communicating measurement settings). Abandonment of this restriction on classical models allows us to disprove the main result of, inter alia, \cite{BZ02}; we show that quantum correlations obtained from these communication strategies assisted by a small quantum violation of the CGLMP Bell inequalities do not imply advantages in any CCP in the input/output scenario considered in the reference. More generally, we show that there exist quantum correlations, with nontrivial local marginal probabilities, which violate the I3322 Bell inequality, but do not enable a quantum advantage in any CCP, regardless of the communication strategy employed in the quantum protocol, for a scenario with a fixed number of inputs and outputs.
@Article{tavakoli_does_2020,
author = {Tavakoli, Armin and Żukowski, Marek and Brukner, Časlav},
journal = {Quantum},
title = {Does violation of a {Bell} inequality always imply quantum advantage in a communication complexity problem?},
year = {2020},
issn = {2521-327X},
month = sep,
pages = {316},
volume = {4},
abstract = {Quantum correlations which violate a Bell inequality are presumed to power better-than-classical protocols for solving communication complexity problems (CCPs). How general is this statement? We show that violations of correlation-type Bell inequalities allow advantages in CCPs, when communication protocols are tailored to emulate the Bell no-signaling constraint (by not communicating measurement settings). Abandonment of this restriction on classical models allows us to disprove the main result of, inter alia, {\textbackslash}cite\{BZ02\}; we show that quantum correlations obtained from these communication strategies assisted by a small quantum violation of the CGLMP Bell inequalities do not imply advantages in any CCP in the input/output scenario considered in the reference. More generally, we show that there exist quantum correlations, with nontrivial local marginal probabilities, which violate the I3322 Bell inequality, but do not enable a quantum advantage in any CCP, regardless of the communication strategy employed in the quantum protocol, for a scenario with a fixed number of inputs and outputs},
doi = {10.22331/q-2020-09-07-316},
language = {en},
url = {https://quantum-journal.org/papers/q-2020-09-07-316/},
urldate = {2021-05-10},
}
• Ł. Rudnicki, L. L. Sánchez-Soto, G. Leuchs, and R. W. Boyd, “Fundamental quantum limits in ellipsometry,” Optics Letters, vol. 45, iss. 16, p. 4607, 2020. doi:10.1364/OL.392955
@article{rudnicki_fundamental_2020,
title = {Fundamental quantum limits in ellipsometry},
volume = {45},
issn = {0146-9592, 1539-4794},
url = {https://www.osapublishing.org/abstract.cfm?URI=ol-45-16-4607},
doi = {10.1364/OL.392955},
language = {en},
number = {16},
urldate = {2021-05-10},
journal = {Optics Letters},
author = {Rudnicki, Ł. and Sánchez-Soto, L. L. and Leuchs, G. and Boyd, R. W.},
month = aug,
year = {2020},
pages = {4607},
}
• A. B. Sainz, M. J. Hoban, P. Skrzypczyk, and L. Aolita, “Bipartite Postquantum Steering in Generalized Scenarios,” Physical Review Letters, vol. 125, iss. 5, p. 050404, 2020. doi:10.1103/PhysRevLett.125.050404
@article{sainz_bipartite_2020,
title = {Bipartite {Postquantum} {Steering} in {Generalized} {Scenarios}},
volume = {125},
issn = {0031-9007, 1079-7114},
doi = {10.1103/PhysRevLett.125.050404},
language = {en},
number = {5},
urldate = {2021-05-10},
journal = {Physical Review Letters},
author = {Sainz, Ana Belén and Hoban, Matty J. and Skrzypczyk, Paul and Aolita, Leandro},
month = jul,
year = {2020},
pages = {050404},
}
• S. Popescu, A. B. Sainz, A. J. Short, and A. Winter, “Reference Frames Which Separately Store Noncommuting Conserved Quantities,” Physical Review Letters, vol. 125, iss. 9, p. 090601, 2020. doi:10.1103/PhysRevLett.125.090601
@article{popescu_reference_2020,
title = {Reference {Frames} {Which} {Separately} {Store} {Noncommuting} {Conserved} {Quantities}},
volume = {125},
issn = {0031-9007, 1079-7114},
doi = {10.1103/PhysRevLett.125.090601},
language = {en},
number = {9},
urldate = {2021-05-10},
journal = {Physical Review Letters},
author = {Popescu, Sandu and Sainz, Ana Belén and Short, Anthony J. and Winter, Andreas},
month = aug,
year = {2020},
pages = {090601},
}
• D. Saha, M. Oszmaniec, L. Czekaj, M. Horodecki, and R. Horodecki, “Operational foundations for complementarity and uncertainty relations,” Physical Review A, vol. 101, iss. 5, p. 052104, 2020. doi:10.1103/PhysRevA.101.052104
@article{saha_operational_2020,
title = {Operational foundations for complementarity and uncertainty relations},
volume = {101},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.101.052104},
language = {en},
number = {5},
urldate = {2021-05-10},
journal = {Physical Review A},
author = {Saha, Debashis and Oszmaniec, Michał and Czekaj, Lukasz and Horodecki, Michał and Horodecki, Ryszard},
month = may,
year = {2020},
pages = {052104},
}
• G. Tóth, T. Vértesi, P. Horodecki, and R. Horodecki, “Activating Hidden Metrological Usefulness,” Physical Review Letters, vol. 125, iss. 2, p. 020402, 2020. doi:10.1103/PhysRevLett.125.020402
@article{toth_activating_2020,
title = {Activating {Hidden} {Metrological} {Usefulness}},
volume = {125},
issn = {0031-9007, 1079-7114},
doi = {10.1103/PhysRevLett.125.020402},
language = {en},
number = {2},
urldate = {2021-05-10},
journal = {Physical Review Letters},
author = {Tóth, Géza and Vértesi, Tamás and Horodecki, Paweł and Horodecki, Ryszard},
month = jul,
year = {2020},
pages = {020402},
}
• K. Horodecki, R. P. Kostecki, R. Salazar, and M. Studziński, “Limitations for private randomness repeaters,” Physical Review A, vol. 102, iss. 1, p. 012615, 2020. doi:10.1103/PhysRevA.102.012615
@article{horodecki_limitations_2020,
title = {Limitations for private randomness repeaters},
volume = {102},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.102.012615},
language = {en},
number = {1},
urldate = {2021-05-10},
journal = {Physical Review A},
author = {Horodecki, Karol and Kostecki, Ryszard P. and Salazar, Roberto and Studziński, Michał},
month = jul,
year = {2020},
pages = {012615},
}
• A. de Rosier, J. Gruca, F. Parisio, T. Vértesi, and W. Laskowski, “Strength and typicality of nonlocality in multisetting and multipartite Bell scenarios,” Physical Review A, vol. 101, iss. 1, p. 012116, 2020. doi:10.1103/PhysRevA.101.012116
@article{de_rosier_strength_2020,
title = {Strength and typicality of nonlocality in multisetting and multipartite {Bell} scenarios},
volume = {101},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.101.012116},
language = {en},
number = {1},
urldate = {2021-05-10},
journal = {Physical Review A},
author = {de Rosier, Anna and Gruca, Jacek and Parisio, Fernando and Vértesi, Tamás and Laskowski, Wiesław},
month = jan,
year = {2020},
pages = {012116},
}
• L. Knips, J. Dziewior, W. Kłobus, W. Laskowski, T. Paterek, P. J. Shadbolt, H. Weinfurter, and J. D. A. Meinecke, “Multipartite entanglement analysis from random correlations,” npj Quantum Information, vol. 6, iss. 1, p. 51, 2020. doi:10.1038/s41534-020-0281-5
Quantum entanglement is usually revealed via a well aligned, carefully chosen set of measurements. Yet, under a number of experimental conditions, for example in communication within multiparty quantum networks, noise along the channels or fluctuating orientations of reference frames may ruin the quality of the distributed states. Here, we show that even for strong fluctuations one can still gain detailed information about the state and its entanglement using random measurements. Correlations between all or subsets of the measurement outcomes and especially their distributions provide information about the entanglement structure of a state. We analytically derive an entanglement criterion for two-qubit states and provide strong numerical evidence for witnessing genuine multipartite entanglement of three and four qubits. Our methods take the purity of the states into account and are based on only the second moments of measured correlations. Extended features of this theory are demonstrated experimentally with four photonic qubits. As long as the rate of entanglement generation is sufficiently high compared to the speed of the fluctuations, this method overcomes any type and strength of localized unitary noise.
@article{knips_multipartite_2020,
title = {Multipartite entanglement analysis from random correlations},
volume = {6},
issn = {2056-6387},
url = {http://www.nature.com/articles/s41534-020-0281-5},
doi = {10.1038/s41534-020-0281-5},
abstract = {Quantum entanglement is usually revealed via a well aligned, carefully chosen set of measurements. Yet, under a number of experimental conditions, for example in communication within multiparty quantum networks, noise along the channels or fluctuating orientations of reference frames may ruin the quality of the distributed states. Here, we show that even for strong fluctuations one can still gain detailed information about the state and its entanglement using random measurements. Correlations between all or subsets of the measurement outcomes and especially their distributions provide information about the entanglement structure of a state. We analytically derive an entanglement criterion for two-qubit states and provide strong numerical evidence for witnessing genuine multipartite entanglement of three and four qubits. Our methods take the purity of the states into account and are based on only the second moments of measured correlations. Extended features of this theory are demonstrated experimentally with four photonic qubits. As long as the rate of entanglement generation is sufficiently high compared to the speed of the fluctuations, this method overcomes any type and strength of localized unitary noise.},
language = {en},
number = {1},
urldate = {2021-05-10},
journal = {npj Quantum Information},
author = {Knips, Lukas and Dziewior, Jan and Kłobus, Waldemar and Laskowski, Wiesław and Paterek, Tomasz and Shadbolt, Peter J. and Weinfurter, Harald and Meinecke, Jasmin D. A.},
month = dec,
year = {2020},
pages = {51},
}
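The second-moment idea in the abstract above can be illustrated with a small Monte-Carlo sketch. This is not the authors' code; it only uses two standard two-qubit facts: the singlet's correlation function along local directions u, v is exactly -u·v (so its correlation matrix T satisfies ||T||_F² = 3), and every separable two-qubit state has ||T||_F² ≤ 1. Averaging squared correlations over uniformly random local directions estimates ||T||_F²/9, so an estimate above 1 witnesses entanglement without any shared reference frame.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    """Sample n directions uniformly on the Bloch sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n = 20000
u, v = random_unit_vectors(n), random_unit_vectors(n)

# For the singlet, the correlation E(u, v) = <(u.sigma) x (v.sigma)> is -u.v;
# here we use that closed form instead of simulating measurement shots.
corr = -np.sum(u * v, axis=1)

# E[(u^T T v)^2] over random u, v equals ||T||_F^2 / 9, so rescale by 9.
estimate = 9 * np.mean(corr ** 2)   # ~3 for the singlet
is_entangled = estimate > 1         # separable states give at most 1
```

The same estimator applies to measured data: replace the closed-form `corr` with empirical correlation values obtained from randomly oriented local measurements.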
• S. Roy, T. Das, and A. Sen(De), “Computable genuine multimode entanglement measure: Gaussian versus non-Gaussian,” Physical Review A, vol. 102, iss. 1, p. 012421, 2020. doi:10.1103/PhysRevA.102.012421
@article{roy_computable_2020,
title = {Computable genuine multimode entanglement measure: {Gaussian} versus non-{Gaussian}},
volume = {102},
issn = {2469-9926, 2469-9934},
shorttitle = {Computable genuine multimode entanglement measure},
doi = {10.1103/PhysRevA.102.012421},
language = {en},
number = {1},
urldate = {2021-05-10},
journal = {Physical Review A},
author = {Roy, Saptarshi and Das, Tamoghna and Sen(De), Aditi},
month = jul,
year = {2020},
pages = {012421},
}
• E. Wolfe, D. Schmid, A. B. Sainz, R. Kunjwal, and R. W. Spekkens, “Quantifying Bell: the Resource Theory of Nonclassicality of Common-Cause Boxes,” Quantum, vol. 4, p. 280, 2020. doi:10.22331/q-2020-06-08-280
We take a resource-theoretic approach to the problem of quantifying nonclassicality in Bell scenarios. The resources are conceptualized as probabilistic processes from the setting variables to the outcome variables having a particular causal structure, namely, one wherein the wings are only connected by a common cause. We term them “common-cause boxes”. We define the distinction between classical and nonclassical resources in terms of whether or not a classical causal model can explain the correlations. One can then quantify the relative nonclassicality of resources by considering their interconvertibility relative to the set of operations that can be implemented using a classical common cause (which correspond to local operations and shared randomness). We prove that the set of free operations forms a polytope, which in turn allows us to derive an efficient algorithm for deciding whether one resource can be converted to another. We moreover define two distinct monotones with simple closed-form expressions in the two-party binary-setting binary-outcome scenario, and use these to reveal various properties of the pre-order of resources, including a lower bound on the cardinality of any complete set of monotones. In particular, we show that the information contained in the degrees of violation of facet-defining Bell inequalities is not sufficient for quantifying nonclassicality, even though it is sufficient for witnessing nonclassicality. Finally, we show that the continuous set of convexly extremal quantumly realizable correlations are all at the top of the pre-order of quantumly realizable correlations. In addition to providing new insights on Bell nonclassicality, our work also sets the stage for quantifying nonclassicality in more general causal networks.
@article{wolfe_quantifying_2020,
title = {Quantifying {Bell}: the {Resource} {Theory} of {Nonclassicality} of {Common}-{Cause} {Boxes}},
volume = {4},
issn = {2521-327X},
shorttitle = {Quantifying {Bell}},
url = {https://quantum-journal.org/papers/q-2020-06-08-280/},
doi = {10.22331/q-2020-06-08-280},
abstract = {We take a resource-theoretic approach to the problem of quantifying nonclassicality in Bell scenarios. The resources are conceptualized as probabilistic processes from the setting variables to the outcome variables having a particular causal structure, namely, one wherein the wings are only connected by a common cause. We term them "common-cause boxes". We define the distinction between classical and nonclassical resources in terms of whether or not a classical causal model can explain the correlations. One can then quantify the relative nonclassicality of resources by considering their interconvertibility relative to the set of operations that can be implemented using a classical common cause (which correspond to local operations and shared randomness). We prove that the set of free operations forms a polytope, which in turn allows us to derive an efficient algorithm for deciding whether one resource can be converted to another. We moreover define two distinct monotones with simple closed-form expressions in the two-party binary-setting binary-outcome scenario, and use these to reveal various properties of the pre-order of resources, including a lower bound on the cardinality of any complete set of monotones. In particular, we show that the information contained in the degrees of violation of facet-defining Bell inequalities is not sufficient for quantifying nonclassicality, even though it is sufficient for witnessing nonclassicality. Finally, we show that the continuous set of convexly extremal quantumly realizable correlations are all at the top of the pre-order of quantumly realizable correlations. In addition to providing new insights on Bell nonclassicality, our work also sets the stage for quantifying nonclassicality in more general causal networks.},
language = {en},
urldate = {2021-05-10},
journal = {Quantum},
author = {Wolfe, Elie and Schmid, David and Sainz, Ana Belén and Kunjwal, Ravi and Spekkens, Robert W.},
month = jun,
year = {2020},
pages = {280},
}
• R. Ramanathan, M. Rosicka, K. Horodecki, S. Pironio, M. Horodecki, and P. Horodecki, “Gadget structures in proofs of the Kochen-Specker theorem,” Quantum, vol. 4, p. 308, 2020. doi:10.22331/q-2020-08-14-308
The Kochen-Specker theorem is a fundamental result in quantum foundations that has spawned massive interest since its inception. We show that within every Kochen-Specker graph, there exist interesting subgraphs which we term 01-gadgets, that capture the essential contradiction necessary to prove the Kochen-Specker theorem, i.e., every Kochen-Specker graph contains a 01-gadget and from every 01-gadget one can construct a proof of the Kochen-Specker theorem. Moreover, we show that the 01-gadgets form a fundamental primitive that can be used to formulate state-independent and state-dependent statistical Kochen-Specker arguments as well as to give simple constructive proofs of an “extended” Kochen-Specker theorem first considered by Pitowsky in \cite{Pitowsky}.
@article{ramanathan_gadget_2020,
title = {Gadget structures in proofs of the {Kochen}-{Specker} theorem},
volume = {4},
issn = {2521-327X},
url = {https://quantum-journal.org/papers/q-2020-08-14-308/},
doi = {10.22331/q-2020-08-14-308},
abstract = {The Kochen-Specker theorem is a fundamental result in quantum foundations that has spawned massive interest since its inception. We show that within every Kochen-Specker graph, there exist interesting subgraphs which we term 01-gadgets, that capture the essential contradiction necessary to prove the Kochen-Specker theorem, i.e., every Kochen-Specker graph contains a 01-gadget and from every 01-gadget one can construct a proof of the Kochen-Specker theorem. Moreover, we show that the 01-gadgets form a fundamental primitive that can be used to formulate state-independent and state-dependent statistical Kochen-Specker arguments as well as to give simple constructive proofs of an ``extended'' Kochen-Specker theorem first considered by Pitowsky in {\textbackslash}cite\{Pitowsky\}.},
language = {en},
urldate = {2021-05-10},
journal = {Quantum},
author = {Ramanathan, Ravishankar and Rosicka, Monika and Horodecki, Karol and Pironio, Stefano and Horodecki, Michał and Horodecki, Paweł},
month = aug,
year = {2020},
pages = {308},
}
• F. B. Maciejewski, Z. Zimborás, and M. Oszmaniec, “Mitigation of readout noise in near-term quantum devices by classical post-processing based on detector tomography,” Quantum, vol. 4, p. 257, 2020. doi:10.22331/q-2020-04-24-257
We propose a simple scheme to reduce readout errors in experiments on quantum systems with finite number of measurement outcomes. Our method relies on performing classical post-processing which is preceded by Quantum Detector Tomography, i.e., the reconstruction of a Positive-Operator Valued Measure (POVM) describing the given quantum measurement device. If the measurement device is affected only by an invertible classical noise, it is possible to correct the outcome statistics of future experiments performed on the same device. To support the practical applicability of this scheme for near-term quantum devices, we characterize measurements implemented in IBM’s and Rigetti’s quantum processors. We find that for these devices, based on superconducting transmon qubits, classical noise is indeed the dominant source of readout errors. Moreover, we analyze the influence of the presence of coherent errors and finite statistics on the performance of our error-mitigation procedure. Applying our scheme on the IBM’s 5-qubit device, we observe a significant improvement of the results of a number of single- and two-qubit tasks including Quantum State Tomography (QST), Quantum Process Tomography (QPT), the implementation of non-projective measurements, and certain quantum algorithms (Grover’s search and the Bernstein-Vazirani algorithm). Finally, we present results showing improvement for the implementation of certain probability distributions in the case of five qubits.
@article{maciejewski_mitigation_2020,
title = {Mitigation of readout noise in near-term quantum devices by classical post-processing based on detector tomography},
volume = {4},
issn = {2521-327X},
url = {https://quantum-journal.org/papers/q-2020-04-24-257/},
doi = {10.22331/q-2020-04-24-257},
abstract = {We propose a simple scheme to reduce readout errors in experiments on quantum systems with finite number of measurement outcomes. Our method relies on performing classical post-processing which is preceded by Quantum Detector Tomography, i.e., the reconstruction of a Positive-Operator Valued Measure (POVM) describing the given quantum measurement device. If the measurement device is affected only by an invertible classical noise, it is possible to correct the outcome statistics of future experiments performed on the same device. To support the practical applicability of this scheme for near-term quantum devices, we characterize measurements implemented in IBM's and Rigetti's quantum processors. We find that for these devices, based on superconducting transmon qubits, classical noise is indeed the dominant source of readout errors. Moreover, we analyze the influence of the presence of coherent errors and finite statistics on the performance of our error-mitigation procedure. Applying our scheme on the IBM's 5-qubit device, we observe a significant improvement of the results of a number of single- and two-qubit tasks including Quantum State Tomography (QST), Quantum Process Tomography (QPT), the implementation of non-projective measurements, and certain quantum algorithms (Grover's search and the Bernstein-Vazirani algorithm). Finally, we present results showing improvement for the implementation of certain probability distributions in the case of five qubits.},
language = {en},
urldate = {2021-05-10},
journal = {Quantum},
author = {Maciejewski, Filip B. and Zimborás, Zoltán and Oszmaniec, Michał},
month = apr,
year = {2020},
pages = {257},
}
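The classical post-processing described in the abstract above admits a compact sketch: assuming readout noise acts as an invertible stochastic (confusion) matrix Lambda, reconstructed beforehand by detector tomography, the outcome statistics of later experiments can be corrected by applying its inverse. The matrix entries and counts below are illustrative toy numbers, not values from the paper.

```python
import numpy as np

# Hypothetical single-qubit response matrix from detector tomography:
# Lambda[i, j] = P(measured outcome i | true outcome j).
Lambda = np.array([[0.97, 0.05],
                   [0.03, 0.95]])

def correct_counts(observed, response):
    """Classical post-processing: invert the stochastic readout-noise model."""
    p_obs = observed / observed.sum()           # observed outcome frequencies
    p_corr = np.linalg.inv(response) @ p_obs    # quasi-probabilities
    # Simple projection back onto the probability simplex (clip + renormalise);
    # a crude stand-in for more careful projections used in practice.
    p_corr = np.clip(p_corr, 0.0, None)
    return p_corr / p_corr.sum()

observed = np.array([9200.0, 800.0])  # toy counts from a noisy readout
print(correct_counts(observed, Lambda))  # corrected frequencies, approx. [0.946, 0.054]
```

Note the caveat from the abstract: this correction is only valid when the dominant readout errors really are classical (purely stochastic); coherent errors and finite statistics degrade it.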
• K. Rosołek, M. Wieśniak, and L. Knips, “Quadratic Entanglement Criteria for Qutrits,” Acta Physica Polonica A, vol. 137, iss. 3, p. 374–378, 2020. doi:10.12693/APhysPolA.137.374
@article{rosolek_quadratic_2020,
title = {Quadratic {Entanglement} {Criteria} for {Qutrits}},
volume = {137},
issn = {1898-794X, 0587-4246},
url = {http://przyrbwn.icm.edu.pl/APP/PDF/137/app137z3p18.pdf},
doi = {10.12693/APhysPolA.137.374},
number = {3},
urldate = {2021-05-10},
journal = {Acta Physica Polonica A},
author = {Rosołek, K. and Wieśniak, M. and Knips, L.},
month = mar,
year = {2020},
pages = {374--378},
}
• M. Wieśniak, P. Pandya, O. Sakarya, and B. Woloncewicz, “Distance between Bound Entangled States from Unextendible Product Bases and Separable States,” Quantum Reports, vol. 2, iss. 1, p. 49–56, 2020. doi:10.3390/quantum2010004
We discuss the use of the Gilbert algorithm to tailor entanglement witnesses for unextendible product basis bound entangled states (UPB BE states). The method relies on the fact that an optimal entanglement witness is given by a plane perpendicular to a line between the reference state, entanglement of which is to be witnessed, and its closest separable state (CSS). The Gilbert algorithm finds an approximation of CSS. In this article, we investigate if this approximation can be good enough to yield a valid entanglement witness. We compare witnesses found with Gilbert algorithm and those given by Bandyopadhyay–Ghosh–Roychowdhury (BGR) construction. This comparison allows us to learn about the amount of entanglement and we find a relationship between it and a feature of the construction of UPB BE states, namely the size of their central tile. We show that in most studied cases, witnesses found with the Gilbert algorithm in this work are more optimal than ones obtained by Bandyopadhyay, Ghosh, and Roychowdhury. This result implies the increased tolerance to experimental imperfections in a realization of the state.
@article{wiesniak_distance_2020,
title = {Distance between {Bound} {Entangled} {States} from {Unextendible} {Product} {Bases} and {Separable} {States}},
volume = {2},
issn = {2624-960X},
url = {https://www.mdpi.com/2624-960X/2/1/4},
doi = {10.3390/quantum2010004},
abstract = {We discuss the use of the Gilbert algorithm to tailor entanglement witnesses for unextendible product basis bound entangled states (UPB BE states). The method relies on the fact that an optimal entanglement witness is given by a plane perpendicular to a line between the reference state, entanglement of which is to be witnessed, and its closest separable state (CSS). The Gilbert algorithm finds an approximation of CSS. In this article, we investigate if this approximation can be good enough to yield a valid entanglement witness. We compare witnesses found with Gilbert algorithm and those given by Bandyopadhyay–Ghosh–Roychowdhury (BGR) construction. This comparison allows us to learn about the amount of entanglement and we find a relationship between it and a feature of the construction of UPBBE states, namely the size of their central tile. We show that in most studied cases, witnesses found with the Gilbert algorithm in this work are more optimal than ones obtained by Bandyopadhyay, Ghosh, and Roychowdhury. This result implies the increased tolerance to experimental imperfections in a realization of the state.},
language = {en},
number = {1},
urldate = {2021-05-10},
journal = {Quantum Reports},
author = {Wieśniak, Marcin and Pandya, Palash and Sakarya, Omer and Woloncewicz, Bianka},
month = jan,
year = {2020},
pages = {49--56},
}
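The Gilbert algorithm referenced in the abstract above can be sketched numerically: approximate the closest separable state (CSS) by repeatedly stepping from the current approximation sigma toward the product state that best aligns with the gradient rho - sigma, with an exact line search at each step. This toy version uses random product-state sampling as the linear-optimization oracle; all parameters and the two-qubit example are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def random_product_state(rng, d=2):
    """Random pure product state |a>|b><a|<b| on a d x d system."""
    def ket(n):
        v = rng.normal(size=n) + 1j * rng.normal(size=n)
        return v / np.linalg.norm(v)
    psi = np.kron(ket(d), ket(d))
    return np.outer(psi, psi.conj())

def gilbert_css(rho, iters=400, samples=40, seed=1):
    """Gilbert-style search for an approximate closest separable state."""
    rng = np.random.default_rng(seed)
    sigma = random_product_state(rng)
    for _ in range(iters):
        grad = rho - sigma
        # Crude oracle: best of `samples` random product states along grad.
        best = max((random_product_state(rng) for _ in range(samples)),
                   key=lambda s: np.real(np.trace(grad @ s)))
        step = best - sigma
        denom = np.real(np.trace(step @ step))
        t = np.real(np.trace(grad @ step)) / denom  # exact line search
        sigma = sigma + np.clip(t, 0.0, 1.0) * step
    return sigma

# Hilbert-Schmidt distance from a two-qubit Bell state to the approximate CSS
phi = np.zeros(4)
phi[[0, 3]] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)
sigma = gilbert_css(rho)
delta = rho - sigma
print(np.sqrt(np.real(np.trace(delta @ delta))))
```

The random-sampling oracle converges slowly; the paper's point is that even an approximate CSS of this kind can define a plane (entanglement witness) perpendicular to the line from rho to sigma.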
• T. Linowski, C. Gneiting, and Ł. Rudnicki, “Stabilizing entanglement in two-mode Gaussian states,” Physical Review A, vol. 102, iss. 4, p. 042405, 2020. doi:10.1103/PhysRevA.102.042405
@article{linowski_stabilizing_2020,
title = {Stabilizing entanglement in two-mode {Gaussian} states},
volume = {102},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.102.042405},
language = {en},
number = {4},
urldate = {2021-05-10},
journal = {Physical Review A},
author = {Linowski, Tomasz and Gneiting, Clemens and Rudnicki, Łukasz},
month = oct,
year = {2020},
pages = {042405},
}
• M. Eckstein and P. Horodecki, “The Experiment Paradox in Physics,” Foundations of Science, 2020. doi:10.1007/s10699-020-09711-y
Modern physics is founded on two mainstays: mathematical modelling and empirical verification. These two assumptions are prerequisite for the objectivity of scientific discourse. Here we show, however, that they are contradictory, leading to the ‘experiment paradox’. We reveal that any experiment performed on a physical system is—by necessity—invasive and thus establishes inevitable limits to the accuracy of any mathematical model. We track its manifestations in both classical and quantum physics and show how it is overcome ‘in practice’ via the concept of environment. We argue that the unravelled paradox induces a new type of ‘ontic’ underdetermination, which has deep consequences for the methodological foundations of physics.
@article{eckstein_experiment_2020,
title = {The {Experiment} {Paradox} in {Physics}},
issn = {1233-1821, 1572-8471},
doi = {10.1007/s10699-020-09711-y},
abstract = {Modern physics is founded on two mainstays: mathematical modelling and empirical verification. These two assumptions are prerequisite for the objectivity of scientific discourse. Here we show, however, that they are contradictory, leading to the ‘experiment paradox’. We reveal that any experiment performed on a physical system is—by necessity—invasive and thus establishes inevitable limits to the accuracy of any mathematical model. We track its manifestations in both classical and quantum physics and show how it is overcome ‘in practice’ via the concept of environment. We argue that the unravelled paradox induces a new type of ‘ontic’ underdetermination, which has deep consequences for the methodological foundations of physics.},
language = {en},
urldate = {2021-05-10},
journal = {Foundations of Science},
author = {Eckstein, Michał and Horodecki, Paweł},
month = oct,
year = {2020},
}
• A. Z. Goldberg, A. B. Klimov, M. Grassl, G. Leuchs, and L. L. Sánchez-Soto, “Extremal quantum states,” AVS Quantum Science, vol. 2, iss. 4, p. 044701, 2020. doi:10.1116/5.0025819
@article{goldberg_extremal_2020,
title = {Extremal quantum states},
volume = {2},
issn = {2639-0213},
url = {http://avs.scitation.org/doi/10.1116/5.0025819},
doi = {10.1116/5.0025819},
language = {en},
number = {4},
urldate = {2021-05-10},
journal = {AVS Quantum Science},
author = {Goldberg, Aaron Z. and Klimov, Andrei B. and Grassl, Markus and Leuchs, Gerd and Sánchez-Soto, Luis L.},
month = dec,
year = {2020},
pages = {044701},
}
• B. Groisman, M. Mc Gettrick, M. Mhalla, and M. Pawlowski, “How Quantum Information Can Improve Social Welfare,” IEEE Journal on Selected Areas in Information Theory, vol. 1, iss. 2, p. 445–453, 2020. doi:10.1109/JSAIT.2020.3012922
@article{groisman_how_2020,
title = {How {Quantum} {Information} {Can} {Improve} {Social} {Welfare}},
volume = {1},
issn = {2641-8770},
url = {https://ieeexplore.ieee.org/document/9173538/},
doi = {10.1109/JSAIT.2020.3012922},
number = {2},
urldate = {2021-05-10},
journal = {IEEE Journal on Selected Areas in Information Theory},
author = {Groisman, Berry and Mc Gettrick, Michael and Mhalla, Mehdi and Pawlowski, Marcin},
month = aug,
year = {2020},
pages = {445--453},
}
• O. Sakarya, M. Winczewski, A. Rutkowski, and K. Horodecki, “Hybrid quantum network design against unauthorized secret-key generation, and its memory cost,” Physical Review Research, vol. 2, iss. 4, p. 043022, 2020. doi:10.1103/PhysRevResearch.2.043022
@article{sakarya_hybrid_2020,
title = {Hybrid quantum network design against unauthorized secret-key generation, and its memory cost},
volume = {2},
issn = {2643-1564},
doi = {10.1103/PhysRevResearch.2.043022},
language = {en},
number = {4},
urldate = {2021-05-10},
journal = {Physical Review Research},
author = {Sakarya, Omer and Winczewski, Marek and Rutkowski, Adam and Horodecki, Karol},
month = oct,
year = {2020},
pages = {043022},
}
• J. H. Selby and C. M. Lee, “Compositional resource theories of coherence,” Quantum, vol. 4, p. 319, 2020. doi:10.22331/q-2020-09-11-319
Quantum coherence is one of the most important resources in quantum information theory. Indeed, preventing the loss of coherence is one of the most important technical challenges obstructing the development of large-scale quantum computers. Recently, there has been substantial progress in developing mathematical resource theories of coherence, paving the way towards its quantification and control. To date however, these resource theories have only been mathematically formalised within the realms of convex-geometry, information theory, and linear algebra. This approach is limited in scope, and makes it difficult to generalise beyond resource theories of coherence for single system quantum states. In this paper we take a complementary perspective, showing that resource theories of coherence can instead be defined purely compositionally, that is, working with the mathematics of process theories, string diagrams and category theory. This new perspective offers several advantages: i) it unifies various existing approaches to the study of coherence, for example, subsuming both speakable and unspeakable coherence; ii) it provides a general treatment of the compositional multi-system setting; iii) it generalises immediately to the case of quantum channels, measurements, instruments, and beyond rather than just states; iv) it can easily be generalised to the setting where there are multiple distinct sources of decoherence; and, v) it directly extends to arbitrary process theories, for example, generalised probabilistic theories and Spekkens toy model, providing the ability to operationally characterise coherence rather than relying on specific mathematical features of quantum theory for its description. More importantly, by providing a new, complementary, perspective on the resource of coherence, this work opens the door to the development of novel tools which would not be accessible from the linear algebraic mind set.
@article{selby_compositional_2020,
title = {Compositional resource theories of coherence},
volume = {4},
issn = {2521-327X},
url = {https://quantum-journal.org/papers/q-2020-09-11-319/},
doi = {10.22331/q-2020-09-11-319},
abstract = {Quantum coherence is one of the most important resources in quantum information theory. Indeed, preventing the loss of coherence is one of the most important technical challenges obstructing the development of large-scale quantum computers. Recently, there has been substantial progress in developing mathematical resource theories of coherence, paving the way towards its quantification and control. To date however, these resource theories have only been mathematically formalised within the realms of convex-geometry, information theory, and linear algebra. This approach is limited in scope, and makes it difficult to generalise beyond resource theories of coherence for single system quantum states. In this paper we take a complementary perspective, showing that resource theories of coherence can instead be defined purely compositionally, that is, working with the mathematics of process theories, string diagrams and category theory. This new perspective offers several advantages: i) it unifies various existing approaches to the study of coherence, for example, subsuming both speakable and unspeakable coherence; ii) it provides a general treatment of the compositional multi-system setting; iii) it generalises immediately to the case of quantum channels, measurements, instruments, and beyond rather than just states; iv) it can easily be generalised to the setting where there are multiple distinct sources of decoherence; and, iv) it directly extends to arbitrary process theories, for example, generalised probabilistic theories and Spekkens toy model---providing the ability to operationally characterise coherence rather than relying on specific mathematical features of quantum theory for its description. More importantly, by providing a new, complementary, perspective on the resource of coherence, this work opens the door to the development of novel tools which would not be accessible from the linear algebraic mind set.},
language = {en},
urldate = {2021-05-10},
journal = {Quantum},
author = {Selby, John H. and Lee, Ciarán M.},
month = sep,
year = {2020},
pages = {319},
}
• P. Pandya, O. Sakarya, and M. Wieśniak, “Hilbert-Schmidt distance and entanglement witnessing,” Physical Review A, vol. 102, iss. 1, p. 012409, 2020. doi:10.1103/PhysRevA.102.012409
@article{pandya_hilbert-schmidt_2020,
title = {Hilbert-{Schmidt} distance and entanglement witnessing},
volume = {102},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.102.012409},
language = {en},
number = {1},
urldate = {2021-05-10},
journal = {Physical Review A},
author = {Pandya, Palash and Sakarya, Omer and Wieśniak, Marcin},
month = jul,
year = {2020},
pages = {012409},
}
• J. Sikora and J. H. Selby, “Impossibility of coin flipping in generalized probabilistic theories via discretizations of semi-infinite programs,” Physical Review Research, vol. 2, iss. 4, p. 043128, 2020. doi:10.1103/PhysRevResearch.2.043128
@article{sikora_impossibility_2020,
title = {Impossibility of coin flipping in generalized probabilistic theories via discretizations of semi-infinite programs},
volume = {2},
issn = {2643-1564},
doi = {10.1103/PhysRevResearch.2.043128},
language = {en},
number = {4},
urldate = {2021-05-10},
journal = {Physical Review Research},
author = {Sikora, Jamie and Selby, John H.},
month = oct,
year = {2020},
pages = {043128},
}
• A. Hameedi, B. Marques, P. Mironowicz, D. Saha, M. Pawłowski, and M. Bourennane, “Experimental test of nonclassicality with arbitrarily low detection efficiency,” Physical Review A, vol. 102, iss. 3, p. 032621, 2020. doi:10.1103/PhysRevA.102.032621
@article{hameedi_experimental_2020,
title = {Experimental test of nonclassicality with arbitrarily low detection efficiency},
volume = {102},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.102.032621},
language = {en},
number = {3},
urldate = {2021-05-10},
journal = {Physical Review A},
author = {Hameedi, Alley and Marques, Breno and Mironowicz, Piotr and Saha, Debashis and Pawłowski, Marcin and Bourennane, Mohamed},
month = sep,
year = {2020},
pages = {032621},
}
• T. P. Le, P. Mironowicz, and P. Horodecki, “Blurred quantum Darwinism across quantum reference frames,” Physical Review A, vol. 102, iss. 6, p. 062420, 2020. doi:10.1103/PhysRevA.102.062420
@article{le_blurred_2020,
title = {Blurred quantum {Darwinism} across quantum reference frames},
volume = {102},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.102.062420},
language = {en},
number = {6},
urldate = {2021-05-10},
journal = {Physical Review A},
author = {Le, Thao P. and Mironowicz, Piotr and Horodecki, Paweł},
month = dec,
year = {2020},
pages = {062420},
}
• Sudha, H. S. Karthik, R. Pal, K. S. Akhilesh, S. Ghosh, K. S. Mallesh, and A. R. Usha Devi, “Canonical forms of two-qubit states under local operations,” Physical Review A, vol. 102, iss. 5, p. 052419, 2020. doi:10.1103/PhysRevA.102.052419
@article{sudha_canonical_2020,
title = {Canonical forms of two-qubit states under local operations},
volume = {102},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.102.052419},
language = {en},
number = {5},
urldate = {2021-05-10},
journal = {Physical Review A},
author = {{Sudha} and Karthik, H. S. and Pal, Rajarshi and Akhilesh, K. S. and Ghosh, Sibasish and Mallesh, K. S. and Usha Devi, A. R.},
month = nov,
year = {2020},
pages = {052419},
}
• A. Chaturvedi and D. Saha, “Quantum prescriptions are more ontologically distinct than they are operationally distinguishable,” Quantum, vol. 4, p. 345, 2020. doi:10.22331/q-2020-10-21-345
Based on an intuitive generalization of the Leibniz principle of the ‘identity of indiscernibles’, we introduce a novel ontological notion of classicality, called bounded ontological distinctness. Formulated as a principle, bounded ontological distinctness equates the distinguishability of a set of operational physical entities to the distinctness of their ontological counterparts. Employing three instances of two-dimensional quantum preparations, we demonstrate the violation of bounded ontological distinctness or excess ontological distinctness of quantum preparations, without invoking any additional assumptions. Moreover, our methodology enables the inference of tight lower bounds on the extent of excess ontological distinctness of quantum preparations. Similarly, we demonstrate excess ontological distinctness of quantum transformations, using three two-dimensional unitary transformations. However, to demonstrate excess ontological distinctness of quantum measurements, an additional assumption such as outcome determinism or bounded ontological distinctness of preparations is required. Moreover, we show that quantum violations of other well-known ontological principles implicate quantum excess ontological distinctness. Finally, to showcase the operational vitality of excess ontological distinctness, we introduce two distinct classes of communication tasks powered by excess ontological distinctness.
@article{chaturvedi_quantum_2020,
title = {Quantum prescriptions are more ontologically distinct than they are operationally distinguishable},
volume = {4},
issn = {2521-327X},
url = {https://quantum-journal.org/papers/q-2020-10-21-345/},
doi = {10.22331/q-2020-10-21-345},
abstract = {Based on an intuitive generalization of the Leibniz principle of the identity of indiscernibles', we introduce a novel ontological notion of classicality, called bounded ontological distinctness. Formulated as a principle, bounded ontological distinctness equates the distinguishability of a set of operational physical entities to the distinctness of their ontological counterparts. Employing three instances of two-dimensional quantum preparations, we demonstrate the violation of bounded ontological distinctness or excess ontological distinctness of quantum preparations, without invoking any additional assumptions. Moreover, our methodology enables the inference of tight lower bounds on the extent of excess ontological distinctness of quantum preparations. Similarly, we demonstrate excess ontological distinctness of quantum transformations, using three two-dimensional unitary transformations. However, to demonstrate excess ontological distinctness of quantum measurements, an additional assumption such as outcome determinism or bounded ontological distinctness of preparations is required. Moreover, we show that quantum violations of other well-known ontological principles implicate quantum excess ontological distinctness. Finally, to showcase the operational vitality of excess ontological distinctness, we introduce two distinct classes of communication tasks powered by excess ontological distinctness.},
language = {en},
urldate = {2021-05-10},
journal = {Quantum},
author = {Chaturvedi, Anubhav and Saha, Debashis},
month = oct,
year = {2020},
pages = {345},
}
• M. Grassl, “Algebraic quantum codes: linking quantum mechanics and discrete mathematics,” International Journal of Computer Mathematics: Computer Systems Theory, p. 1–17, 2020. doi:10.1080/23799927.2020.1850530
@article{grassl_algebraic_2020,
title = {Algebraic quantum codes: linking quantum mechanics and discrete mathematics},
issn = {2379-9927, 2379-9935},
shorttitle = {Algebraic quantum codes},
url = {https://www.tandfonline.com/doi/full/10.1080/23799927.2020.1850530},
doi = {10.1080/23799927.2020.1850530},
language = {en},
urldate = {2021-05-10},
journal = {International Journal of Computer Mathematics: Computer Systems Theory},
author = {Grassl, Markus},
month = dec,
year = {2020},
pages = {1--17},
}
• M. Eckstein, P. Horodecki, R. Horodecki, and T. Miller, “Operational causality in spacetime,” Physical Review A, vol. 101, iss. 4, p. 042128, 2020. doi:10.1103/PhysRevA.101.042128
The no-signalling principle preventing superluminal communication is a limiting paradigm for physical theories. Within the information-theoretic framework it is commonly understood in terms of admissible correlations in composite systems. Here we unveil its complementary incarnation, the ‘dynamical no-signalling principle’, which forbids superluminal signalling via measurements on simple physical objects (e.g. particles) evolving in time. We show that it imposes strong constraints on admissible models of dynamics. The posited principle is universal: it can be applied to any theory (classical, quantum or post-quantum) with well-defined rules of calculating detection statistics in spacetime. As an immediate application we show how one could exploit the Schrödinger equation to establish a fully operational superluminal protocol in the Minkowski spacetime. This example illustrates how the principle can be used to identify the limits of applicability of a given model of quantum or post-quantum dynamics.
@article{eckstein_operational_2020,
title = {Operational causality in spacetime},
volume = {101},
issn = {2469-9926, 2469-9934},
url = {http://arxiv.org/abs/1902.05002},
doi = {10.1103/PhysRevA.101.042128},
abstract = {The no-signalling principle preventing superluminal communication is a limiting paradigm for physical theories. Within the information-theoretic framework it is commonly understood in terms of admissible correlations in composite systems. Here we unveil its complementary incarnation --- the 'dynamical no-signalling principle' ---, which forbids superluminal signalling via measurements on simple physical objects (e.g. particles) evolving in time. We show that it imposes strong constraints on admissible models of dynamics. The posited principle is universal --- it can be applied to any theory (classical, quantum or post-quantum) with well-defined rules of calculating detection statistics in spacetime. As an immediate application we show how one could exploit the Schr{\textbackslash}"odinger equation to establish a fully operational superluminal protocol in the Minkowski spacetime. This example illustrates how the principle can be used to identify the limits of applicability of a given model of quantum or post-quantum dynamics.},
number = {4},
urldate = {2021-05-11},
journal = {Physical Review A},
author = {Eckstein, Michał and Horodecki, Paweł and Horodecki, Ryszard and Miller, Tomasz},
month = apr,
year = {2020},
note = {arXiv: 1902.05002},
keywords = {Quantum Physics, General Relativity and Quantum Cosmology, Mathematical Physics, 81P16 (Primary), 81P15, 28E99, 60B05 (Secondary)},
pages = {042128},
}
• G. Tóth, T. Vértesi, P. Horodecki, and R. Horodecki, “Activating Hidden Metrological Usefulness,” Physical Review Letters, vol. 125, iss. 2, p. 020402, 2020. doi:10.1103/PhysRevLett.125.020402
@article{toth_activating_2020-1,
title = {Activating {Hidden} {Metrological} {Usefulness}},
volume = {125},
issn = {0031-9007, 1079-7114},
doi = {10.1103/PhysRevLett.125.020402},
language = {en},
number = {2},
urldate = {2021-07-28},
journal = {Physical Review Letters},
author = {Tóth, Géza and Vértesi, Tamás and Horodecki, Paweł and Horodecki, Ryszard},
month = jul,
year = {2020},
pages = {020402},
}
• M. Łobejko, P. Mazurek, and M. Horodecki, “Thermodynamics of Minimal Coupling Quantum Heat Engines,” Quantum, vol. 4, p. 375, 2020. doi:10.22331/q-2020-12-23-375
The minimal-coupling quantum heat engine is a thermal machine consisting of an explicit energy storage system, heat baths, and a working body, which alternatively couples to subsystems through discrete strokes – energy-conserving two-body quantum operations. Within this paradigm, we present a general framework of quantum thermodynamics, where a work extraction process is fundamentally limited by a flow of non-passive energy (ergotropy), while energy dissipation is expressed through a flow of passive energy. It turns out that small dimensionality of the working body and a restriction only to two-body operations make the engine fundamentally irreversible. Our main result is finding the optimal efficiency and work production per cycle within the whole class of irreversible minimal-coupling engines composed of three strokes and with the two-level working body, where we take into account all possible quantum correlations between the working body and the battery. One of the key new tools is the introduced “control-marginal state” – one which acts only on a working body Hilbert space, but encapsulates all features regarding work extraction of the total working body-battery system. In addition, we propose a generalization of the many-stroke engine, and we analyze efficiency vs extracted work trade-offs, as well as work fluctuations after many cycles of the running of the engine.
@article{lobejko_thermodynamics_2020,
title = {Thermodynamics of {Minimal} {Coupling} {Quantum} {Heat} {Engines}},
volume = {4},
issn = {2521-327X},
url = {http://arxiv.org/abs/2003.05788},
doi = {10.22331/q-2020-12-23-375},
abstract = {The minimal-coupling quantum heat engine is a thermal machine consisting of an explicit energy storage system, heat baths, and a working body, which alternatively couples to subsystems through discrete strokes -- energy-conserving two-body quantum operations. Within this paradigm, we present a general framework of quantum thermodynamics, where a work extraction process is fundamentally limited by a flow of non-passive energy (ergotropy), while energy dissipation is expressed through a flow of passive energy. It turns out that small dimensionality of the working body and a restriction only to two-body operations make the engine fundamentally irreversible. Our main result is finding the optimal efficiency and work production per cycle within the whole class of irreversible minimal-coupling engines composed of three strokes and with the two-level working body, where we take into account all possible quantum correlations between the working body and the battery. One of the key new tools is the introduced "control-marginal state" -- one which acts only on a working body Hilbert space, but encapsulates all features regarding work extraction of the total working body-battery system. In addition, we propose a generalization of the many-stroke engine, and we analyze efficiency vs extracted work trade-offs, as well as work fluctuations after many cycles of the running of the engine.},
urldate = {2021-07-28},
journal = {Quantum},
author = {Łobejko, Marcin and Mazurek, Paweł and Horodecki, Michał},
month = dec,
year = {2020},
note = {arXiv: 2003.05788},
keywords = {Quantum Physics},
pages = {375},
}
• P. Mazurek, M. Farkas, A. Grudka, M. Horodecki, and M. Studziński, “Quantum error correction codes and absolutely maximally entangled states,” Physical Review A, vol. 101, iss. 4, p. 042305, 2020. doi:10.1103/PhysRevA.101.042305
For every stabiliser N-qudit absolutely maximally entangled state, we present a method for determining the stabiliser generators and logical operators of a corresponding quantum error correction code. These codes encode k qudits into N−k qudits, with k ≤ ⌊N/2⌋, where the local dimension d is prime. We use these methods to analyse the concatenation of such quantum codes and link this procedure to entanglement swapping. Using our techniques, we investigate the spread of quantum information on a tensor network code formerly used as a toy model for the AdS/CFT correspondence. In this network, we show how corrections arise to the Ryu-Takayanagi formula in the case of entangled input state, and that the bound on the entanglement entropy of the boundary state is saturated for absolutely maximally entangled input states.
@article{mazurek_quantum_2020-1,
title = {Quantum error correction codes and absolutely maximally entangled states},
volume = {101},
issn = {2469-9926, 2469-9934},
url = {http://arxiv.org/abs/1910.07427},
doi = {10.1103/PhysRevA.101.042305},
abstract = {For every stabiliser \$N\$-qudit absolutely maximally entangled state, we present a method for determining the stabiliser generators and logical operators of a corresponding quantum error correction code. These codes encode \$k\$ qudits into \$N-k\$ qudits, with \$k{\textbackslash}leq {\textbackslash}left {\textbackslash}lfloor\{N/2\} {\textbackslash}right {\textbackslash}rfloor\$, where the local dimension \$d\$ is prime. We use these methods to analyse the concatenation of such quantum codes and link this procedure to entanglement swapping. Using our techniques, we investigate the spread of quantum information on a tensor network code formerly used as a toy model for the AdS/CFT correspondence. In this network, we show how corrections arise to the Ryu-Takayanagi formula in the case of entangled input state, and that the bound on the entanglement entropy of the boundary state is saturated for absolutely maximally entangled input states.},
number = {4},
urldate = {2021-07-28},
journal = {Physical Review A},
author = {Mazurek, Paweł and Farkas, Máté and Grudka, Andrzej and Horodecki, Michał and Studziński, Michał},
month = apr,
year = {2020},
note = {arXiv: 1910.07427},
keywords = {Quantum Physics},
pages = {042305},
}
• C. Cirstoiu, K. Korzekwa, and D. Jennings, “Robustness of Noether’s principle: Maximal disconnects between conservation laws and symmetries in quantum theory,” Physical Review X, vol. 10, iss. 4, p. 041035, 2020. doi:10.1103/PhysRevX.10.041035
To what extent does Noether’s principle apply to quantum channels? Here, we quantify the degree to which imposing a symmetry constraint on quantum channels implies a conservation law, and show that this relates to physically impossible transformations in quantum theory, such as time-reversal and spin-inversion. In this analysis, the convex structure and extremal points of the set of quantum channels symmetric under the action of a Lie group G becomes essential. It allows us to derive bounds on the deviation from conservation laws under any symmetric quantum channel in terms of the deviation from closed dynamics as measured by the unitarity of the channel. In particular, we investigate in detail the U(1) and SU(2) symmetries related to energy and angular momentum conservation laws. In the latter case, we provide fundamental limits on how much a spin-j_A system can be used to polarise a larger spin-j_B system, and on how much one can invert spin polarisation using a rotationally-symmetric operation. Finally, we also establish novel links between unitarity, complementary channels and purity that are of independent interest.
@article{cirstoiu_robustness_2020,
title = {Robustness of {Noether}'s principle: {Maximal} disconnects between conservation laws and symmetries in quantum theory},
volume = {10},
issn = {2160-3308},
shorttitle = {Robustness of {Noether}'s principle},
url = {http://arxiv.org/abs/1908.04254},
doi = {10.1103/PhysRevX.10.041035},
abstract = {To what extent does Noether's principle apply to quantum channels? Here, we quantify the degree to which imposing a symmetry constraint on quantum channels implies a conservation law, and show that this relates to physically impossible transformations in quantum theory, such as time-reversal and spin-inversion. In this analysis, the convex structure and extremal points of the set of quantum channels symmetric under the action of a Lie group $G$ becomes essential. It allows us to derive bounds on the deviation from conservation laws under any symmetric quantum channel in terms of the deviation from closed dynamics as measured by the unitarity of the channel. In particular, we investigate in detail the $U(1)$ and $SU(2)$ symmetries related to energy and angular momentum conservation laws. In the latter case, we provide fundamental limits on how much a spin-$j_A$ system can be used to polarise a larger spin-$j_B$ system, and on how much one can invert spin polarisation using a rotationally-symmetric operation. Finally, we also establish novel links between unitarity, complementary channels and purity that are of independent interest.},
number = {4},
urldate = {2021-07-28},
journal = {Physical Review X},
author = {Cirstoiu, Cristina and Korzekwa, Kamil and Jennings, David},
month = nov,
year = {2020},
note = {arXiv: 1908.04254},
keywords = {Quantum Physics},
pages = {041035},
}
• A. Tavakoli, M. Żukowski, and Č. Brukner, “Does violation of a Bell inequality always imply quantum advantage in a communication complexity problem?,” Quantum, vol. 4, p. 316, 2020. doi:10.22331/q-2020-09-07-316
Quantum correlations which violate a Bell inequality are presumed to power better-than-classical protocols for solving communication complexity problems (CCPs). How general is this statement? We show that violations of correlation-type Bell inequalities allow advantages in CCPs, when communication protocols are tailored to emulate the Bell no-signaling constraint (by not communicating measurement settings). Abandonment of this restriction on classical models allows us to disprove the main result of, inter alia, [Brukner et al., Phys. Rev. Lett. 89, 197901 (2002)]; we show that quantum correlations obtained from these communication strategies assisted by a small quantum violation of the CGLMP Bell inequalities do not imply advantages in any CCP in the input/output scenario considered in the reference. More generally, we show that there exist quantum correlations, with nontrivial local marginal probabilities, which violate the $I_{3322}$ Bell inequality, but do not enable a quantum advantage in any CCP, regardless of the communication strategy employed in the quantum protocol, for a scenario with a fixed number of inputs and outputs.
@article{tavakoli_does_2020-1,
title = {Does violation of a {Bell} inequality always imply quantum advantage in a communication complexity problem?},
volume = {4},
issn = {2521-327X},
url = {http://arxiv.org/abs/1907.01322},
doi = {10.22331/q-2020-09-07-316},
abstract = {Quantum correlations which violate a Bell inequality are presumed to power better-than-classical protocols for solving communication complexity problems (CCPs). How general is this statement? We show that violations of correlation-type Bell inequalities allow advantages in CCPs, when communication protocols are tailored to emulate the Bell no-signaling constraint (by not communicating measurement settings). Abandonment of this restriction on classical models allows us to disprove the main result of, inter alia, [Brukner et al., Phys. Rev. Lett. 89, 197901 (2002)]; we show that quantum correlations obtained from these communication strategies assisted by a small quantum violation of the CGLMP Bell inequalities do not imply advantages in any CCP in the input/output scenario considered in the reference. More generally, we show that there exist quantum correlations, with nontrivial local marginal probabilities, which violate the $I_{3322}$ Bell inequality, but do not enable a quantum advantage in any CCP, regardless of the communication strategy employed in the quantum protocol, for a scenario with a fixed number of inputs and outputs},
urldate = {2021-07-28},
journal = {Quantum},
author = {Tavakoli, Armin and Żukowski, Marek and Brukner, Časlav},
month = sep,
year = {2020},
note = {arXiv: 1907.01322},
keywords = {Quantum Physics},
pages = {316},
}
• F. B. Maciejewski, Z. Zimborás, and M. Oszmaniec, “Mitigation of readout noise in near-term quantum devices by classical post-processing based on detector tomography,” Quantum, vol. 4, p. 257, 2020. doi:10.22331/q-2020-04-24-257
We propose a simple scheme to reduce readout errors in experiments on quantum systems with finite number of measurement outcomes. Our method relies on performing classical post-processing which is preceded by Quantum Detector Tomography, i.e., the reconstruction of a Positive-Operator Valued Measure (POVM) describing the given quantum measurement device. If the measurement device is affected only by an invertible classical noise, it is possible to correct the outcome statistics of future experiments performed on the same device. To support the practical applicability of this scheme for near-term quantum devices, we characterize measurements implemented in IBM’s and Rigetti’s quantum processors. We find that for these devices, based on superconducting transmon qubits, classical noise is indeed the dominant source of readout errors. Moreover, we analyze the influence of the presence of coherent errors and finite statistics on the performance of our error-mitigation procedure. Applying our scheme on the IBM’s 5-qubit device, we observe a significant improvement of the results of a number of single- and two-qubit tasks including Quantum State Tomography (QST), Quantum Process Tomography (QPT), the implementation of non-projective measurements, and certain quantum algorithms (Grover’s search and the Bernstein-Vazirani algorithm). Finally, we present results showing improvement for the implementation of certain probability distributions in the case of five qubits.
@article{maciejewski_mitigation_2020-1,
title = {Mitigation of readout noise in near-term quantum devices by classical post-processing based on detector tomography},
volume = {4},
issn = {2521-327X},
url = {http://arxiv.org/abs/1907.08518},
doi = {10.22331/q-2020-04-24-257},
abstract = {We propose a simple scheme to reduce readout errors in experiments on quantum systems with finite number of measurement outcomes. Our method relies on performing classical post-processing which is preceded by Quantum Detector Tomography, i.e., the reconstruction of a Positive-Operator Valued Measure (POVM) describing the given quantum measurement device. If the measurement device is affected only by an invertible classical noise, it is possible to correct the outcome statistics of future experiments performed on the same device. To support the practical applicability of this scheme for near-term quantum devices, we characterize measurements implemented in IBM's and Rigetti's quantum processors. We find that for these devices, based on superconducting transmon qubits, classical noise is indeed the dominant source of readout errors. Moreover, we analyze the influence of the presence of coherent errors and finite statistics on the performance of our error-mitigation procedure. Applying our scheme on the IBM's 5-qubit device, we observe a significant improvement of the results of a number of single- and two-qubit tasks including Quantum State Tomography (QST), Quantum Process Tomography (QPT), the implementation of non-projective measurements, and certain quantum algorithms (Grover's search and the Bernstein-Vazirani algorithm). Finally, we present results showing improvement for the implementation of certain probability distributions in the case of five qubits.},
urldate = {2021-07-28},
journal = {Quantum},
author = {Maciejewski, Filip B. and Zimborás, Zoltán and Oszmaniec, Michał},
month = apr,
year = {2020},
note = {arXiv: 1907.08518},
keywords = {Quantum Physics},
pages = {257},
}
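For purely classical readout noise, the post-processing step described in the abstract above amounts to inverting the stochastic matrix reconstructed by detector tomography. A minimal numerical sketch (the noise-matrix values below are made up for illustration, not taken from the paper or from any device):

```python
import numpy as np

# Hypothetical 2-outcome detector: stochastic matrix Lam with
# Lam[i, j] = P(observe i | true outcome j), as estimated by
# quantum detector tomography. Columns sum to 1.
Lam = np.array([[0.95, 0.08],
                [0.05, 0.92]])

p_true = np.array([0.7, 0.3])   # ideal outcome distribution
p_obs = Lam @ p_true            # what the noisy device would report

# If the classical noise is invertible, correct the observed
# statistics by applying the inverse of the noise matrix.
p_corr = np.linalg.inv(Lam) @ p_obs
```

With exact statistics the correction is perfect (`p_corr == p_true`); with finite samples, as the paper analyses, the inversion amplifies statistical fluctuations.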
• D. J. Brod and M. Oszmaniec, “Classical simulation of linear optics subject to nonuniform losses,” Quantum, vol. 4, p. 267, 2020. doi:10.22331/q-2020-05-14-267
We present a comprehensive study of the impact of non-uniform, i.e. path-dependent, photonic losses on the computational complexity of linear-optical processes. Our main result states that, if each beam splitter in a network induces some loss probability, non-uniform network designs cannot circumvent the efficient classical simulations based on losses. To achieve our result we obtain new intermediate results that can be of independent interest. First, we show that, for any network of lossy beam-splitters, it is possible to extract a layer of non-uniform losses that depends on the network geometry. We prove that, for every input mode of the network it is possible to commute $s_i$ layers of losses to the input, where $s_i$ is the length of the shortest path connecting the $i$th input to any output. We then extend a recent classical simulation algorithm due to P. Clifford and R. Clifford to allow for arbitrary $n$-photon input Fock states (i.e. to include collision states). Consequently, we identify two types of input states where boson sampling becomes classically simulable: (A) when $n$ input photons occupy a constant number of input modes; (B) when all but $O(\log n)$ photons are concentrated on a single input mode, while an additional $O(\log n)$ modes contain one photon each.
@article{brod_classical_2020,
title = {Classical simulation of linear optics subject to nonuniform losses},
volume = {4},
issn = {2521-327X},
url = {http://arxiv.org/abs/1906.06696},
doi = {10.22331/q-2020-05-14-267},
abstract = {We present a comprehensive study of the impact of non-uniform, i.e.\ path-dependent, photonic losses on the computational complexity of linear-optical processes. Our main result states that, if each beam splitter in a network induces some loss probability, non-uniform network designs cannot circumvent the efficient classical simulations based on losses. To achieve our result we obtain new intermediate results that can be of independent interest. First, we show that, for any network of lossy beam-splitters, it is possible to extract a layer of non-uniform losses that depends on the network geometry. We prove that, for every input mode of the network it is possible to commute $s_i$ layers of losses to the input, where $s_i$ is the length of the shortest path connecting the $i$th input to any output. We then extend a recent classical simulation algorithm due to P. Clifford and R. Clifford to allow for arbitrary $n$-photon input Fock states (i.e. to include collision states). Consequently, we identify two types of input states where boson sampling becomes classically simulable: (A) when $n$ input photons occupy a constant number of input modes; (B) when all but $O(\log n)$ photons are concentrated on a single input mode, while an additional $O(\log n)$ modes contain one photon each.},
urldate = {2021-07-28},
journal = {Quantum},
author = {Brod, Daniel Jost and Oszmaniec, Michał},
month = may,
year = {2020},
note = {arXiv: 1906.06696},
keywords = {Quantum Physics, Mathematical Physics},
pages = {267},
}
• A. de Rosier, J. Gruca, F. Parisio, T. Vertesi, and W. Laskowski, “Strength and typicality of nonlocality in multisetting and multipartite Bell scenarios,” Physical Review A, vol. 101, iss. 1, p. 012116, 2020. doi:10.1103/PhysRevA.101.012116
In this work we investigate the probability of violation of local realism under random measurements in parallel with the strength of these violations as described by resistance to white noise admixture. We address multisetting Bell scenarios involving up to 7 qubits. As a result, in the first part of this manuscript we report statistical distributions of a quantity reciprocal to the critical visibility for various multipartite quantum states subjected to random measurements. The statistical relevance of different classes of multipartite tight Bell inequalities violated with random measurements is investigated. We also introduce the concept of typicality of quantum correlations for pure states as the probability to generate a nonlocal behaviour with both random state and measurement. Although this typicality is slightly above 5.3% for the CHSH scenario, for a modest increase in the number of involved qubits it quickly surpasses 99.99%.
@article{de_rosier_strength_2020-1,
title = {Strength and typicality of nonlocality in multisetting and multipartite {Bell} scenarios},
volume = {101},
issn = {2469-9926, 2469-9934},
url = {http://arxiv.org/abs/1906.03235},
doi = {10.1103/PhysRevA.101.012116},
abstract = {In this work we investigate the probability of violation of local realism under random measurements in parallel with the strength of these violations as described by resistance to white noise admixture. We address multisetting Bell scenarios involving up to 7 qubits. As a result, in the first part of this manuscript we report statistical distributions of a quantity reciprocal to the critical visibility for various multipartite quantum states subjected to random measurements. The statistical relevance of different classes of multipartite tight Bell inequalities violated with random measurements is investigated. We also introduce the concept of typicality of quantum correlations for pure states as the probability to generate a nonlocal behaviour with both random state and measurement. Although this typicality is slightly above 5.3\% for the CHSH scenario, for a modest increase in the number of involved qubits it quickly surpasses 99.99\%.},
number = {1},
urldate = {2021-07-28},
journal = {Physical Review A},
author = {de Rosier, Anna and Gruca, Jacek and Parisio, Fernando and Vertesi, Tamas and Laskowski, Wieslaw},
month = jan,
year = {2020},
note = {arXiv: 1906.03235},
keywords = {Quantum Physics},
pages = {012116},
}
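The quantity studied in the abstract above, the probability that randomly drawn measurements produce a Bell violation, can be estimated by direct Monte Carlo sampling. A minimal sketch for the CHSH scenario with a fixed two-qubit singlet (the sample size and seed are arbitrary illustration choices, not the paper's method, which also randomises the state):

```python
import numpy as np

rng = np.random.default_rng(0)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)

def obs(n):
    """+/-1-valued qubit observable along the unit vector n."""
    return n[0] * sx + n[1] * sy + n[2] * sz

def rand_dir():
    """Uniformly random direction on the Bloch sphere."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

# Two-qubit singlet, for which the correlator is E(a, b) = -a.b
psi = np.array([0, 1, -1, 0], complex) / np.sqrt(2)

def corr(a, b):
    return np.real(psi.conj() @ np.kron(obs(a), obs(b)) @ psi)

trials, violations = 2000, 0
for _ in range(trials):
    a, ap, b, bp = (rand_dir() for _ in range(4))
    S = corr(a, b) + corr(a, bp) + corr(ap, b) - corr(ap, bp)
    if abs(S) > 2:          # CHSH local-realistic bound
        violations += 1

frac = violations / trials  # estimated probability of violation
```

For the singlet with uniformly random settings this fraction comes out well below 1: most random setting choices do not violate CHSH, which is what motivates studying the full distribution of violation strengths.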
• M. Rosicka, P. Mazurek, A. Grudka, and M. Horodecki, “Generalized XOR non-locality games with graph description on a square lattice,” Journal of Physics A: Mathematical and Theoretical, vol. 53, iss. 26, p. 265302, 2020. doi:10.1088/1751-8121/ab8f3e
We propose a family of non-locality unique games for 2 parties based on a square lattice on an arbitrary surface. We show that, due to structural similarities with error correction codes of Kitaev for fault tolerant quantum computation, the games have classical values computable in polynomial time for $d=2$ measurement outcomes. By representing games in their graph form, for arbitrary $d$ and underlying surface we provide their classification into equivalence classes with respect to relabeling of measurement outcomes, for a selected set of permutations which define the winning conditions. A case study of games with periodic boundary conditions is presented in order to verify their impact on classical and quantum values of the family of games. It suggests that quantum values suffer independently from presence of different winning conditions that can be imposed due to periodicity, as long as no local restrictions are in place.
@article{rosicka_generalized_2020-1,
title = {Generalized {XOR} non-locality games with graph description on a square lattice},
volume = {53},
issn = {1751-8113, 1751-8121},
url = {http://arxiv.org/abs/1902.11053},
doi = {10.1088/1751-8121/ab8f3e},
abstract = {We propose a family of non-locality unique games for 2 parties based on a square lattice on an arbitrary surface. We show that, due to structural similarities with error correction codes of Kitaev for fault tolerant quantum computation, the games have classical values computable in polynomial time for $d=2$ measurement outcomes. By representing games in their graph form, for arbitrary $d$ and underlying surface we provide their classification into equivalence classes with respect to relabeling of measurement outcomes, for a selected set of permutations which define the winning conditions. A case study of games with periodic boundary conditions is presented in order to verify their impact on classical and quantum values of the family of games. It suggests that quantum values suffer independently from presence of different winning conditions that can be imposed due to periodicity, as long as no local restrictions are in place.},
number = {26},
urldate = {2021-07-28},
journal = {Journal of Physics A: Mathematical and Theoretical},
author = {Rosicka, Monika and Mazurek, Paweł and Grudka, Andrzej and Horodecki, Michał},
month = jul,
year = {2020},
note = {arXiv: 1902.11053},
keywords = {Quantum Physics, Mathematics - Combinatorics},
pages = {265302},
}
• M. Eckstein, P. Horodecki, R. Horodecki, and T. Miller, “Operational causality in spacetime,” Physical Review A, vol. 101, iss. 4, p. 042128, 2020. doi:10.1103/PhysRevA.101.042128
The no-signalling principle preventing superluminal communication is a limiting paradigm for physical theories. Within the information-theoretic framework it is commonly understood in terms of admissible correlations in composite systems. Here we unveil its complementary incarnation, the ‘dynamical no-signalling principle’, which forbids superluminal signalling via measurements on simple physical objects (e.g. particles) evolving in time. We show that it imposes strong constraints on admissible models of dynamics. The posited principle is universal: it can be applied to any theory (classical, quantum or post-quantum) with well-defined rules of calculating detection statistics in spacetime. As an immediate application we show how one could exploit the Schrödinger equation to establish a fully operational superluminal protocol in the Minkowski spacetime. This example illustrates how the principle can be used to identify the limits of applicability of a given model of quantum or post-quantum dynamics.
@article{eckstein_operational_2020-1,
title = {Operational causality in spacetime},
volume = {101},
issn = {2469-9926, 2469-9934},
url = {http://arxiv.org/abs/1902.05002},
doi = {10.1103/PhysRevA.101.042128},
abstract = {The no-signalling principle preventing superluminal communication is a limiting paradigm for physical theories. Within the information-theoretic framework it is commonly understood in terms of admissible correlations in composite systems. Here we unveil its complementary incarnation --- the 'dynamical no-signalling principle' ---, which forbids superluminal signalling via measurements on simple physical objects (e.g. particles) evolving in time. We show that it imposes strong constraints on admissible models of dynamics. The posited principle is universal --- it can be applied to any theory (classical, quantum or post-quantum) with well-defined rules of calculating detection statistics in spacetime. As an immediate application we show how one could exploit the Schr\"odinger equation to establish a fully operational superluminal protocol in the Minkowski spacetime. This example illustrates how the principle can be used to identify the limits of applicability of a given model of quantum or post-quantum dynamics.},
number = {4},
urldate = {2021-07-28},
journal = {Physical Review A},
author = {Eckstein, Michał and Horodecki, Paweł and Horodecki, Ryszard and Miller, Tomasz},
month = apr,
year = {2020},
note = {arXiv: 1902.05002},
keywords = {Quantum Physics, General Relativity and Quantum Cosmology, Mathematical Physics, 81P16 (Primary), 81P15, 28E99, 60B05 (Secondary)},
pages = {042128},
}
• J. Czartowski, D. Goyeneche, M. Grassl, and K. Życzkowski, “Isoentangled Mutually Unbiased Bases, Symmetric Quantum Measurements, and Mixed-State Designs,” Physical Review Letters, vol. 124, iss. 9, p. 090503, 2020. doi:10.1103/PhysRevLett.124.090503
@article{czartowski_isoentangled_2020,
title = {Isoentangled {Mutually} {Unbiased} {Bases}, {Symmetric} {Quantum} {Measurements}, and {Mixed}-{State} {Designs}},
volume = {124},
issn = {0031-9007, 1079-7114},
doi = {10.1103/PhysRevLett.124.090503},
language = {en},
number = {9},
urldate = {2020-04-22},
journal = {Physical Review Letters},
author = {Czartowski, Jakub and Goyeneche, Dardo and Grassl, Markus and Życzkowski, Karol},
month = mar,
year = {2020},
pages = {090503},
}
• R. Alicki and A. Jenkins, “Quantum theory of triboelectricity,” Physical Review Letters, vol. 125, p. 186101, 2020. doi:10.1103/physrevlett.125.186101
@Article{Alicki,
author = {Robert Alicki and Alejandro Jenkins},
title = {Quantum Theory of Triboelectricity},
year = {2020},
issn = {0031-9007},
volume = {125},
journal = {Physical Review Letters},
pages = {186101},
doi = {10.1103/physrevlett.125.186101},
url = {https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.125.186101},
}
2019
• A. Pozas-Kerstjens, R. Rabelo, Ł. Rudnicki, R. Chaves, D. Cavalcanti, M. Navascués, and A. Acín, “Bounding the sets of classical and quantum correlations in networks,” Physical Review Letters, vol. 123, iss. 14, p. 140503, 2019. doi:10.1103/PhysRevLett.123.140503
@article{pozas-kerstjens_bounding_2019,
title = {Bounding the sets of classical and quantum correlations in networks},
volume = {123},
issn = {0031-9007, 1079-7114},
doi = {10.1103/PhysRevLett.123.140503},
language = {en},
number = {14},
urldate = {2020-04-22},
journal = {Physical Review Letters},
author = {Pozas-Kerstjens, Alejandro and Rabelo, Rafael and Rudnicki, Łukasz and Chaves, Rafael and Cavalcanti, Daniel and Navascués, Miguel and Acín, Antonio},
month = oct,
year = {2019},
pages = {140503},
}
• D. Yang, K. Horodecki, and A. Winter, “Distributed private randomness distillation,” Physical Review Letters, vol. 123, iss. 17, p. 170501, 2019. doi:10.1103/PhysRevLett.123.170501
@article{yang_distributed_2019,
title = {Distributed private randomness distillation},
volume = {123},
issn = {0031-9007, 1079-7114},
doi = {10.1103/PhysRevLett.123.170501},
language = {en},
number = {17},
urldate = {2020-04-22},
journal = {Physical Review Letters},
author = {Yang, Dong and Horodecki, Karol and Winter, Andreas},
month = oct,
year = {2019},
pages = {170501},
}
• T. Van Himbeeck, J. Bohr Brask, S. Pironio, R. Ramanathan, A. B. Sainz, and E. Wolfe, “Quantum violations in the Instrumental scenario and their relations to the Bell scenario,” Quantum, vol. 3, p. 186, 2019. doi:10.22331/q-2019-09-16-186
The causal structure of any experiment implies restrictions on the observable correlations between measurement outcomes, which are different for experiments exploiting classical, quantum, or post-quantum resources. In the study of Bell nonlocality, these differences have been explored in great detail for more and more involved causal structures. Here, we go in the opposite direction and identify the simplest causal structure which exhibits a separation between classical, quantum, and post-quantum correlations. It arises in the so-called Instrumental scenario, known from classical causal models. We derive inequalities for this scenario and show that they are closely related to well-known Bell inequalities, such as the Clauser-Horne-Shimony-Holt inequality, which enables us to easily identify their classical, quantum, and post-quantum bounds as well as strategies violating the first two. The relations that we uncover imply that the quantum or post-quantum advantages witnessed by the violation of our Instrumental inequalities are not fundamentally different from those witnessed by the violations of standard inequalities in the usual Bell scenario. However, non-classical tests in the Instrumental scenario require fewer input choices than their Bell scenario counterpart, which may have potential implications for device-independent protocols.
@article{van_himbeeck_quantum_2019,
title = {Quantum violations in the {Instrumental} scenario and their relations to the {Bell} scenario},
volume = {3},
issn = {2521-327X},
url = {https://quantum-journal.org/papers/q-2019-09-16-186/},
doi = {10.22331/q-2019-09-16-186},
abstract = {The causal structure of any experiment implies restrictions on the observable correlations between measurement outcomes, which are different for experiments exploiting classical, quantum, or post-quantum resources. In the study of Bell nonlocality, these differences have been explored in great detail for more and more involved causal structures. Here, we go in the opposite direction and identify the simplest causal structure which exhibits a separation between classical, quantum, and post-quantum correlations. It arises in the so-called Instrumental scenario, known from classical causal models. We derive inequalities for this scenario and show that they are closely related to well-known Bell inequalities, such as the Clauser-Horne-Shimony-Holt inequality, which enables us to easily identify their classical, quantum, and post-quantum bounds as well as strategies violating the first two. The relations that we uncover imply that the quantum or post-quantum advantages witnessed by the violation of our Instrumental inequalities are not fundamentally different from those witnessed by the violations of standard inequalities in the usual Bell scenario. However, non-classical tests in the Instrumental scenario require fewer input choices than their Bell scenario counterpart, which may have potential implications for device-independent protocols.},
language = {en},
urldate = {2020-04-22},
journal = {Quantum},
author = {Van Himbeeck, Thomas and Bohr Brask, Jonatan and Pironio, Stefano and Ramanathan, Ravishankar and Sainz, Ana Belén and Wolfe, Elie},
month = sep,
year = {2019},
pages = {186},
}
• P. Mironowicz and M. Pawłowski, “Experimentally feasible semi-device-independent certification of four-outcome positive-operator-valued measurements,” Physical Review A, vol. 100, iss. 3, p. 030301, 2019. doi:10.1103/PhysRevA.100.030301
@article{mironowicz_experimentally_2019,
title = {Experimentally feasible semi-device-independent certification of four-outcome positive-operator-valued measurements},
volume = {100},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.100.030301},
language = {en},
number = {3},
urldate = {2020-04-22},
journal = {Physical Review A},
author = {Mironowicz, Piotr and Pawłowski, Marcin},
month = sep,
year = {2019},
pages = {030301},
}
• R. Alicki, “A quantum open system model of molecular battery charged by excitons,” The Journal of Chemical Physics, vol. 150, iss. 21, p. 214110, 2019. doi:10.1063/1.5096772
@article{alicki_quantum_2019,
title = {A quantum open system model of molecular battery charged by excitons},
volume = {150},
issn = {0021-9606},
url = {https://aip.scitation.org/doi/10.1063/1.5096772},
doi = {10.1063/1.5096772},
number = {21},
urldate = {2020-04-22},
journal = {The Journal of Chemical Physics},
author = {Alicki, Robert},
month = jun,
year = {2019},
pages = {214110},
}
• G. Baio, D. Chruściński, P. Horodecki, A. Messina, and G. Sarbicki, “Bounds on the entanglement of two-qutrit systems from fixed marginals,” Physical Review A, vol. 99, iss. 6, p. 062312, 2019. doi:10.1103/PhysRevA.99.062312
@article{baio_bounds_2019,
title = {Bounds on the entanglement of two-qutrit systems from fixed marginals},
volume = {99},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.99.062312},
language = {en},
number = {6},
urldate = {2020-04-22},
journal = {Physical Review A},
author = {Baio, Giuseppe and Chruściński, Dariusz and Horodecki, Paweł and Messina, Antonino and Sarbicki, Gniewomir},
month = jun,
year = {2019},
pages = {062312},
}
• R. Alicki, “Quantum features of macroscopic fields: Entropy and dynamics,” Entropy, vol. 21, iss. 7, p. 705, 2019. doi:10.3390/e21070705
Macroscopic fields such as electromagnetic, magnetohydrodynamic, acoustic or gravitational waves are usually described by classical wave equations with possible additional damping terms and coherent sources. The aim of this paper is to develop a complete macroscopic formalism including random/thermal sources, dissipation and random scattering of waves by environment. The proposed reduced state of the field combines averaged field with the two-point correlation function called single-particle density matrix. The evolution equation for the reduced state of the field is obtained by reduction of the generalized quasi-free dynamical semigroups describing irreversible evolution of bosonic quantum field and the definition of entropy for the reduced state of the field follows from the von Neumann entropy of quantum field states. The presented formalism can be applied, for example, to superradiance phenomena and allows unifying the Mueller and Jones calculi in polarization optics.
@article{robert_alicki_quantum_2019,
title = {Quantum {Features} of {Macroscopic} {Fields}: {Entropy} and {Dynamics}},
volume = {21},
issn = {1099-4300},
shorttitle = {Quantum {Features} of {Macroscopic} {Fields}},
url = {https://www.mdpi.com/1099-4300/21/7/705},
doi = {10.3390/e21070705},
abstract = {Macroscopic fields such as electromagnetic, magnetohydrodynamic, acoustic or gravitational waves are usually described by classical wave equations with possible additional damping terms and coherent sources. The aim of this paper is to develop a complete macroscopic formalism including random/thermal sources, dissipation and random scattering of waves by environment. The proposed reduced state of the field combines averaged field with the two-point correlation function called single-particle density matrix. The evolution equation for the reduced state of the field is obtained by reduction of the generalized quasi-free dynamical semigroups describing irreversible evolution of bosonic quantum field and the definition of entropy for the reduced state of the field follows from the von Neumann entropy of quantum field states. The presented formalism can be applied, for example, to superradiance phenomena and allows unifying the Mueller and Jones calculi in polarization optics.},
language = {en},
number = {7},
urldate = {2020-04-22},
journal = {Entropy},
author = {Alicki, Robert},
month = jul,
year = {2019},
pages = {705},
}
• P. Horodecki and R. Ramanathan, “The relativistic causality versus no-signaling paradigm for multi-party correlations,” Nature Communications, vol. 10, iss. 1, p. 1701, 2019. doi:10.1038/s41467-019-09505-2
@article{horodecki_relativistic_2019,
title = {The relativistic causality versus no-signaling paradigm for multi-party correlations},
volume = {10},
issn = {2041-1723},
url = {http://www.nature.com/articles/s41467-019-09505-2},
doi = {10.1038/s41467-019-09505-2},
language = {en},
number = {1},
urldate = {2020-04-22},
journal = {Nature Communications},
author = {Horodecki, Paweł and Ramanathan, Ravishankar},
month = dec,
year = {2019},
pages = {1701},
}
• J. Ryu, B. Woloncewicz, M. Marciniak, M. Wieśniak, and M. Żukowski, “General mapping of multiqudit entanglement conditions to nonseparability indicators for quantum-optical fields,” Physical Review Research, vol. 1, iss. 3, p. 032041, 2019. doi:10.1103/PhysRevResearch.1.032041
@article{ryu_general_2019,
title = {General mapping of multiqudit entanglement conditions to nonseparability indicators for quantum-optical fields},
volume = {1},
issn = {2643-1564},
doi = {10.1103/PhysRevResearch.1.032041},
language = {en},
number = {3},
urldate = {2020-05-13},
journal = {Physical Review Research},
author = {Ryu, Junghee and Woloncewicz, Bianka and Marciniak, Marcin and Wieśniak, Marcin and Żukowski, Marek},
month = dec,
year = {2019},
pages = {032041},
}
• W. Kłobus, A. Burchardt, A. Kołodziejski, M. Pandit, T. Vértesi, K. Życzkowski, and W. Laskowski, “k-uniform mixed states,” Physical Review A, vol. 100, iss. 3, p. 032112, 2019. doi:10.1103/PhysRevA.100.032112
@article{klobus_k_2019,
title = {{$k$}-uniform mixed states},
volume = {100},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.100.032112},
language = {en},
number = {3},
urldate = {2021-05-10},
journal = {Physical Review A},
author = {Kłobus, Waldemar and Burchardt, Adam and Kołodziejski, Adrian and Pandit, Mahasweta and Vértesi, Tamás and Życzkowski, Karol and Laskowski, Wiesław},
month = sep,
year = {2019},
pages = {032112},
}
• M. Farkas and J. Kaniewski, “Self-testing mutually unbiased bases in the prepare-and-measure scenario,” Physical Review A, vol. 99, iss. 3, p. 032316, 2019. doi:10.1103/PhysRevA.99.032316
@article{farkas_self-testing_2019,
title = {Self-testing mutually unbiased bases in the prepare-and-measure scenario},
volume = {99},
issn = {2469-9926, 2469-9934},
doi = {10.1103/PhysRevA.99.032316},
language = {en},
number = {3},
urldate = {2021-05-10},
journal = {Physical Review A},
author = {Farkas, Máté and Kaniewski, Jędrzej},
month = mar,
year = {2019},
pages = {032316},
}
• W. Klobus, A. Burchardt, A. Kolodziejski, M. Pandit, T. Vertesi, K. Zyczkowski, and W. Laskowski, “k-uniform mixed states,” Physical Review A, vol. 100, iss. 3, p. 032112, 2019. doi:10.1103/PhysRevA.100.032112
We investigate the maximum purity that can be achieved by k-uniform mixed states of N parties. Such N-party states have the property that all their k-party reduced states are maximally mixed. A scheme to construct explicitly k-uniform states using a set of specific N-qubit Pauli matrices is proposed. We provide several different examples of such states and demonstrate that in some cases the state corresponds to a particular orthogonal array. The obtained states, despite being mixed, reveal strong non-classical properties such as genuine multipartite entanglement or violation of Bell inequalities.
@article{klobus_$k$-uniform_2019,
title = {{$k$}-uniform mixed states},
volume = {100},
issn = {2469-9926, 2469-9934},
url = {http://arxiv.org/abs/1906.01311},
doi = {10.1103/PhysRevA.100.032112},
abstract = {We investigate the maximum purity that can be achieved by k-uniform mixed states of N parties. Such N-party states have the property that all their k-party reduced states are maximally mixed. A scheme to construct explicitly k-uniform states using a set of specific N-qubit Pauli matrices is proposed. We provide several different examples of such states and demonstrate that in some cases the state corresponds to a particular orthogonal array. The obtained states, despite being mixed, reveal strong non-classical properties such as genuine multipartite entanglement or violation of Bell inequalities.},
number = {3},
urldate = {2021-07-28},
journal = {Physical Review A},
author = {Klobus, Waldemar and Burchardt, Adam and Kolodziejski, Adrian and Pandit, Mahasweta and Vertesi, Tamas and Zyczkowski, Karol and Laskowski, Wieslaw},
month = sep,
year = {2019},
note = {arXiv: 1906.01311},
keywords = {Quantum Physics},
pages = {032112},
}
• J. Ryu, B. Woloncewicz, M. Marciniak, M. Wieśniak, and M. Żukowski, “General mapping of multi-qudit entanglement conditions to non-separability indicators for quantum optical fields,” Physical Review Research, vol. 1, iss. 3, p. 032041, 2019. doi:10.1103/PhysRevResearch.1.032041
We show that any multi-qudit entanglement witness leads to a non-separability indicator for quantum optical fields, which involves intensity correlations. We get, e.g., necessary and sufficient conditions for intensity or intensity-rate correlations to reveal polarization entanglement. We also derive separability conditions for experiments involving multiport interferometers, now feasible with integrated optics. We show advantages of using intensity rates rather than intensities, e.g., a mapping of Bell inequalities to ones for optical fields. The results have implication for studies of non-classicality of “macroscopic” systems of undefined or uncontrollable number of “particles”.
@article{ryu_general_2019-1,
title = {General mapping of multi-qu{$d$}it entanglement conditions to non-separability indicators for quantum optical fields},
volume = {1},
issn = {2643-1564},
url = {http://arxiv.org/abs/1903.03526},
doi = {10.1103/PhysRevResearch.1.032041},
abstract = {We show that any multi-qudit entanglement witness leads to a non-separability indicator for quantum optical fields, which involves intensity correlations. We get, e.g., necessary and sufficient conditions for intensity or intensity-rate correlations to reveal polarization entanglement. We also derive separability conditions for experiments involving multiport interferometers, now feasible with integrated optics. We show advantages of using intensity rates rather than intensities, e.g., a mapping of Bell inequalities to ones for optical fields. The results have implication for studies of non-classicality of "macroscopic" systems of undefined or uncontrollable number of "particles".},
number = {3},
urldate = {2021-07-28},
journal = {Physical Review Research},
author = {Ryu, Junghee and Woloncewicz, Bianka and Marciniak, Marcin and Wieśniak, Marcin and Żukowski, Marek},
month = dec,
year = {2019},
note = {arXiv: 1903.03526},
keywords = {Quantum Physics},
pages = {032041},
}
2018
• A. Dutta, T. Nahm, J. Lee, and M. Żukowski, “Geometric extension of Clauser–Horne inequality to more qubits,” New Journal of Physics, vol. 20, iss. 9, p. 093006, 2018. doi:10.1088/1367-2630/aadc78
@article{dutta_geometric_2018,
title = {Geometric extension of {Clauser}–{Horne} inequality to more qubits},
volume = {20},
issn = {1367-2630},
number = {9},
urldate = {2020-04-22},
journal = {New Journal of Physics},
author = {Dutta, Arijit and Nahm, Tschang-Uh and Lee, Jinhyoung and Żukowski, Marek},
month = sep,
year = {2018},
pages = {093006},
}
arXiv preprints
2021
• I. Reena, H. S. Karthik, J. P. Tej, A. R. U. Devi, Sudha, and A. K. Rajagopal, “Entanglement detection in permutation symmetric states based on violation of local sum uncertainty relation,” arXiv:2103.15731 [quant-ph], 2021.
We show that violation of variance based local sum uncertainty relation (LSUR) for angular momentum operators of a bipartite system, proposed by Hofmann and Takeuchi [Phys. Rev. A 68, 032103 (2003)], is necessary and sufficient for entanglement in two-qubit permutation symmetric state. Moreover, we also establish its one-to-one connection with negativity of covariance matrix [Phys. Lett. A 364, 203 (2007)] of the two-qubit reduced system of a permutation symmetric N-qubit state. Consequently, it is seen that the violation of the angular momentum LSUR serves as a necessary condition for pairwise entanglement in N-qubit system, obeying exchange symmetry. We illustrate physical examples of entangled permutation symmetric N-qubit systems, where violation of the local sum uncertainty relation manifests itself as a signature of pairwise entanglement.
@Article{reena_entanglement_2021,
author = {Reena, I. and Karthik, H. S. and Tej, J. Prabhu and Devi, A. R. Usha and {Sudha} and Rajagopal, A. K.},
journal = {arXiv:2103.15731 [quant-ph]},
title = {Entanglement detection in permutation symmetric states based on violation of local sum uncertainty relation},
year = {2021},
month = mar,
note = {arXiv: 2103.15731},
abstract = {We show that violation of variance based local sum uncertainty relation (LSUR) for angular momentum operators of a bipartite system, proposed by Hofmann and Takeuchi [Phys. Rev. A 68, 032103 (2003)], is necessary and sufficient for entanglement in two-qubit permutation symmetric state. Moreover, we also establish its one-to-one connection with negativity of covariance matrix [Phys. Lett. A 364, 203 (2007)] of the two-qubit reduced system of a permutation symmetric N-qubit state. Consequently, it is seen that the violation of the angular momentum LSUR serves as a necessary condition for pairwise entanglement in N-qubit system, obeying exchange symmetry. We illustrate physical examples of entangled permutation symmetric N-qubit systems, where violation of the local sum uncertainty relation manifests itself as a signature of pairwise entanglement.},
eprint = {arXiv:2103.15731},
keywords = {Quantum Physics},
url = {http://www.arxiv.org/abs/2103.15731},
urldate = {2021-07-28},
}
• R. Alicki and A. Jenkins, “Quantum thermodynamics of coronal heating,” arXiv:2103.08746 [astro-ph, physics:physics, physics:quant-ph], 2021.
Using the Markovian master equation for quantum quasiparticles, we show that convection in the stellar photosphere generates plasma waves by an irreversible process akin to Zeldovich superradiance and sonic booms. In the Sun, this mechanism is most efficient in quiet regions with magnetic fields of order one gauss. Most energy is carried by Alfven waves with megahertz frequencies, which travel upwards until they reach a height at which they dissipate via mode conversion. This gives the right power flux for the observed energy transport from the colder photosphere to the hotter corona.
@article{alicki_quantum_2021,
title = {Quantum thermodynamics of coronal heating},
url = {http://arxiv.org/abs/2103.08746},
abstract = {Using the Markovian master equation for quantum quasiparticles, we show that convection in the stellar photosphere generates plasma waves by an irreversible process akin to Zeldovich superradiance and sonic booms. In the Sun, this mechanism is most efficient in quiet regions with magnetic fields of order one gauss. Most energy is carried by Alfven waves with megahertz frequencies, which travel upwards until they reach a height at which they dissipate via mode conversion. This gives the right power flux for the observed energy transport from the colder photosphere to the hotter corona.},
urldate = {2021-07-28},
journal = {arXiv:2103.08746 [astro-ph, physics:physics, physics:quant-ph]},
author = {Alicki, Robert and Jenkins, Alejandro},
month = may,
year = {2021},
note = {arXiv: 2103.08746},
keywords = {Astrophysics - Solar and Stellar Astrophysics, Astrophysics - High Energy Astrophysical Phenomena, Physics - Plasma Physics, Quantum Physics},
}
• M. Markiewicz, M. Karczewski, and P. Kurzynski, “Borromean states in discrete-time quantum walks,” arXiv:2005.13588 [quant-ph], 2021.
In the right conditions, removing one particle from a multipartite bound state can make it fall apart. This feature, known as the “Borromean property”, has been recently demonstrated experimentally in Efimov states. One could expect that such peculiar behavior should be linked with the presence of strong inter-particle correlations. However, any exploration of this connection is hindered by the complexity of the physical systems exhibiting the Borromean property. To overcome this problem, we introduce a simple dynamical toy model based on a discrete-time quantum walk of many interacting particles. We show that the particles described by it need to exhibit the Greenberger-Horne-Zeillinger (GHZ) entanglement to form Borromean bound states. As this type of entanglement is very prone to particle losses, our work demonstrates an intuitive link between correlations and Borromean properties of the system. Moreover, we discuss our findings in the context of the formation of composite particles.
@article{markiewicz_borromean_2021,
title = {Borromean states in discrete-time quantum walks},
url = {http://arxiv.org/abs/2005.13588},
abstract = {In the right conditions, removing one particle from a multipartite bound state can make it fall apart. This feature, known as the "Borromean property", has been recently demonstrated experimentally in Efimov states. One could expect that such peculiar behavior should be linked with the presence of strong inter-particle correlations. However, any exploration of this connection is hindered by the complexity of the physical systems exhibiting the Borromean property. To overcome this problem, we introduce a simple dynamical toy model based on a discrete-time quantum walk of many interacting particles. We show that the particles described by it need to exhibit the Greenberger-Horne-Zeillinger (GHZ) entanglement to form Borromean bound states. As this type of entanglement is very prone to particle losses, our work demonstrates an intuitive link between correlations and Borromean properties of the system. Moreover, we discuss our findings in the context of the formation of composite particles.},
urldate = {2021-07-28},
journal = {arXiv:2005.13588 [quant-ph]},
author = {Markiewicz, Marcin and Karczewski, Marcin and Kurzynski, Pawel},
month = mar,
year = {2021},
note = {arXiv: 2005.13588},
keywords = {Quantum Physics},
}
• P. Blasiak, E. Borsuk, and M. Markiewicz, “On safe post-selection for Bell nonlocality: Causal diagram approach,” arXiv:2012.07285 [quant-ph], 2021.
Reasoning about Bell nonlocality from the correlations observed in post-selected data is always a matter of concern. This is because conditioning on the outcomes is a source of non-causal correlations, known as a selection bias, rising doubts whether the conclusion concerns the actual causal process or maybe it is just an effect of processing the data. Yet, even in the idealised case without detection inefficiencies, post-selection is an integral part of every experimental design, not least because it is a part of the entanglement generation process itself. In this paper we discuss a broad class of scenarios with post-selection on multiple spatially distributed outcomes. A simple criterion is worked out, called the all-but-one principle, showing when the conclusions about nonlocality from breaking Bell inequalities with post-selected data remain in force. Generality of this result, attained by adopting the high-level diagrammatic tools of causal inference, provides safe grounds for systematic reasoning based on the standard form of multipartite Bell inequalities in a wide array of entanglement generation schemes without worrying about the dangers of selection bias.
@article{blasiak_safe_2021,
title = {On safe post-selection for {Bell} nonlocality: {Causal} diagram approach},
shorttitle = {On safe post-selection for {Bell} nonlocality},
url = {http://arxiv.org/abs/2012.07285},
abstract = {Reasoning about Bell nonlocality from the correlations observed in post-selected data is always a matter of concern. This is because conditioning on the outcomes is a source of non-causal correlations, known as a selection bias, rising doubts whether the conclusion concerns the actual causal process or maybe it is just an effect of processing the data. Yet, even in the idealised case without detection inefficiencies, post-selection is an integral part of every experimental design, not least because it is a part of the entanglement generation process itself. In this paper we discuss a broad class of scenarios with post-selection on multiple spatially distributed outcomes. A simple criterion is worked out, called the all-but-one principle, showing when the conclusions about nonlocality from breaking Bell inequalities with post-selected data remain in force. Generality of this result, attained by adopting the high-level diagrammatic tools of causal inference, provides safe grounds for systematic reasoning based on the standard form of multipartite Bell inequalities in a wide array of entanglement generation schemes without worrying about the dangers of selection bias.},
urldate = {2021-07-28},
journal = {arXiv:2012.07285 [quant-ph]},
author = {Blasiak, Pawel and Borsuk, Ewa and Markiewicz, Marcin},
month = apr,
year = {2021},
note = {arXiv: 2012.07285},
keywords = {Quantum Physics},
}
• T. Das, M. Karczewski, A. Mandarino, M. Markiewicz, B. Woloncewicz, and M. Żukowski, “No-go for device independent protocols with Tan-Walls-Collett nonlocality of a single photon,” arXiv:2102.03254 [quant-ph], 2021.
We investigate the interferometric scheme put forward by Tan, Walls and Collett [Phys. Rev. Lett. 66, 256 (1991)] that aims to reveal Bell non-classicality of a single photon. By providing a local hidden variable model that reproduces their results, we decisively refute this claim. In particular, this means that the scheme cannot be used in device-independent protocols.
@article{das_no-go_2021,
title = {No-go for device independent protocols with {Tan}-{Walls}-{Collett} nonlocality of a single photon},
url = {http://arxiv.org/abs/2102.03254},
abstract = {We investigate the interferometric scheme put forward by Tan, Walls and Collett [Phys. Rev. Lett. 66, 256 (1991)] that aims to reveal Bell non-classicality of a single photon. By providing a local hidden variable model that reproduces their results, we decisively refute this claim. In particular, this means that the scheme cannot be used in device-independent protocols.},
urldate = {2021-07-28},
journal = {arXiv:2102.03254 [quant-ph]},
author = {Das, Tamoghna and Karczewski, Marcin and Mandarino, Antonio and Markiewicz, Marcin and Woloncewicz, Bianka and Żukowski, Marek},
month = feb,
year = {2021},
note = {arXiv: 2102.03254},
keywords = {Quantum Physics},
}
• T. Das, M. Karczewski, A. Mandarino, M. Markiewicz, B. Woloncewicz, and M. Żukowski, “Can single photon excitation of two spatially separated modes lead to a violation of Bell inequality via homodyne measurements?,” arXiv:2102.06689 [quant-ph], 2021.
We reconsider the all-optical homodyne-measurement based experimental schemes that aim to reveal Bell nonclassicality of a single photon, often termed ‘nonlocality’. We focus on the schemes put forward by Tan, Walls and Collett (TWC, 1991) and Hardy (1994). In the light of our previous work the Tan, Walls and Collett setup can be described by a precise local hidden variable model, hence the claimed nonclassicality of this proposal is apparent, whereas the nonclassicality proof proposed by Hardy is impeccable. In this work we resolve the following problem: which feature of Hardy’s approach is crucial for its successful confirmation of nonclassicality. The scheme of Hardy differs from the Tan, Walls and Collett setup in two aspects. (i) It introduces a superposition of a single photon excitation with vacuum as the initial state of one of the input modes of a 50-50 beamsplitter, which creates the superposition state of two separable (exit) modes under investigation. (ii) In the final measurements Hardy’s proposal utilises varying strengths of the local oscillator fields, whereas in the TWC case they are constant. In fact the local oscillators in Hardy’s scheme are either on or off (the local setting is specified by the presence or absence of the local auxiliary field). We show that it is the varying strength of the local oscillators, from setting to setting, which is the crucial feature enabling violation of local realism in the Hardy setup, whereas it is not necessary to use initial superposition of a single photon excitation with vacuum as the initial state of the input mode. Neither one needs to operate in the fully on/off detection scheme. Despite the failure of the Tan, Walls and Collett scheme in proving Bell nonclassicality, we show that their scheme can serve as an entanglement indicator.
@article{das_can_2021,
title = {Can single photon excitation of two spatially separated modes lead to a violation of {Bell} inequality via homodyne measurements?},
url = {http://arxiv.org/abs/2102.06689},
abstract = {We reconsider the all-optical homodyne-measurement based experimental schemes that aim to reveal Bell nonclassicality of a single photon, often termed 'nonlocality'. We focus on the schemes put forward by Tan, Walls and Collett (TWC, 1991) and Hardy (1994). In the light of our previous work the Tan, Walls and Collett setup can be described by a precise local hidden variable model, hence the claimed nonclassicality of this proposal is apparent, whereas the nonclassicality proof proposed by Hardy is impeccable. In this work we resolve the following problem: which feature of Hardy's approach is crucial for its successful confirmation of nonclassicality. The scheme of Hardy differs from the Tan, Walls and Collett setup in two aspects. (i) It introduces a superposition of a single photon excitation with vacuum as the initial state of one of the input modes of a 50-50 beamsplitter, which creates the superposition state of two separable (exit) modes under investigation. (ii) In the final measurements Hardy's proposal utilises varying strengths of the local oscillator fields, whereas in the TWC case they are constant. In fact the local oscillators in Hardy's scheme are either on or off (the local setting is specified by the presence or absence of the local auxiliary field). We show that it is the varying strength of the local oscillators, from setting to setting, which is the crucial feature enabling violation of local realism in the Hardy setup, whereas it is not necessary to use initial superposition of a single photon excitation with vacuum as the initial state of the input mode. Neither one needs to operate in the fully on/off detection scheme. Despite the failure of the Tan, Walls and Collett scheme in proving Bell nonclassicality, we show that their scheme can serve as an entanglement indicator.},
urldate = {2021-07-28},
journal = {arXiv:2102.06689 [quant-ph]},
author = {Das, Tamoghna and Karczewski, Marcin and Mandarino, Antonio and Markiewicz, Marcin and Woloncewicz, Bianka and Żukowski, Marek},
month = feb,
year = {2021},
note = {arXiv: 2102.06689},
keywords = {Quantum Physics},
}
• P. Blasiak, E. Borsuk, M. Markiewicz, and Y. Kim, “Efficient linear optical generation of a multipartite W state,” arXiv:2103.02206 [quant-ph], 2021.
A novel scheme is presented for generation of a multipartite W state for arbitrary number of qubits. Based on a recent proposal of entanglement without touching, it serves to demonstrate the potential of particle indistinguishability as a useful resource of entanglement for practical applications. The devised scheme is efficient in design, meaning that it is built with linear optics without the need for auxiliary particles nor measurements. Yet, the success probability is shown to be highly competitive compared with the existing proposals (i.e. decreases polynomially with the number of qubits) and remains insensitive to particle statistics (i.e. has the same efficiency for bosons and fermions).
@article{blasiak_efficient_2021,
title = {Efficient linear optical generation of a multipartite {W} state},
url = {http://arxiv.org/abs/2103.02206},
abstract = {A novel scheme is presented for generation of a multipartite W state for arbitrary number of qubits. Based on a recent proposal of entanglement without touching, it serves to demonstrate the potential of particle indistinguishability as a useful resource of entanglement for practical applications. The devised scheme is efficient in design, meaning that it is built with linear optics without the need for auxiliary particles nor measurements. Yet, the success probability is shown to be highly competitive compared with the existing proposals (i.e. decreases polynomially with the number of qubits) and remains insensitive to particle statistics (i.e. has the same efficiency for bosons and fermions).},
urldate = {2021-07-28},
journal = {arXiv:2103.02206 [quant-ph]},
author = {Blasiak, Pawel and Borsuk, Ewa and Markiewicz, Marcin and Kim, Yong-Su},
month = mar,
year = {2021},
note = {arXiv: 2103.02206},
keywords = {Quantum Physics},
}
• T. Das, M. Karczewski, A. Mandarino, M. Markiewicz, B. Woloncewicz, and M. Żukowski, “On detecting violation of local realism with photon-number resolving weak-field homodyne measurements,” arXiv:2104.10703 [quant-ph], 2021.
Non-existence of a local hidden variables (LHV) model for a phenomenon benchmarks its use in device-independent quantum protocols. Nowadays photon-number resolving weak-field homodyne measurements allow realization of emblematic gedanken experiments. Alas, claims that we can have no LHV models for such experiments on (a) excitation of a pair of spatial modes by a single photon, and (b) two spatial modes in a weakly squeezed vacuum state, involving constant local oscillator strengths, are unfounded. For (a) an exact LHV model resolves the dispute on the “non-locality of a single photon” in its original formulation. It is measurements with local oscillators on or off that do not have LHV models.
@article{das_detecting_2021,
title = {On detecting violation of local realism with photon-number resolving weak-field homodyne measurements},
url = {http://arxiv.org/abs/2104.10703},
abstract = {Non-existence of a local hidden variables (LHV) model for a phenomenon benchmarks its use in device-independent quantum protocols. Nowadays photon-number resolving weak-field homodyne measurements allow realization of emblematic gedanken experiments. Alas, claims that we can have no LHV models for such experiments on (a) excitation of a pair of spatial modes by a single photon, and (b) two spatial modes in a weakly squeezed vacuum state, involving constant local oscillator strengths, are unfounded. For (a) an exact LHV model resolves the dispute on the "non-locality of a single photon" in its original formulation. It is measurements with local oscillators on or off that do not have LHV models.},
urldate = {2021-07-28},
journal = {arXiv:2104.10703 [quant-ph]},
author = {Das, Tamoghna and Karczewski, Marcin and Mandarino, Antonio and Markiewicz, Marcin and Woloncewicz, Bianka and Żukowski, Marek},
month = apr,
year = {2021},
note = {arXiv: 2104.10703},
keywords = {Quantum Physics},
}
• G. Scala, K. Słowik, P. Facchi, S. Pascazio, and F. Pepe, “Beyond the Rabi model: light interactions with polar atomic systems in a cavity,” arXiv:2103.11232 [quant-ph], 2021.
The Rabi Hamiltonian, describing the interaction between a two-level atomic system and a single cavity mode of the electromagnetic field, is one of the fundamental models in quantum optics. The model becomes exactly solvable by considering an atom without permanent dipole moments, whose excitation energy is quasi-resonant with the cavity photon energy, and by neglecting the non resonant (counter-rotating) terms. In this case, after including the decay of either the atom or the cavity mode to a continuum, one is able to derive the well-known phenomenology of quasi-resonant transitions, including the fluorescence triplets. In this work we consider the most general Rabi model, incorporating the effects of permanent atomic electric dipole moments, and, based on a perturbative analysis, we compare the intensities of emission lines induced by rotating terms, counter-rotating terms and parity-symmetry-breaking terms. The analysis reveals that the emission strength related to the existence of permanent dipoles may surpass the one due to the counter-rotating interaction terms, but is usually much weaker than the emission due to the main, resonant coupling. This ratio can be modified in systems with a reduced dimensionality or by engineering the energy spectral density of the continuum.
@article{scala_beyond_2021,
title = {Beyond the {Rabi} model: light interactions with polar atomic systems in a cavity},
shorttitle = {Beyond the {Rabi} model},
url = {http://arxiv.org/abs/2103.11232},
abstract = {The Rabi Hamiltonian, describing the interaction between a two-level atomic system and a single cavity mode of the electromagnetic field, is one of the fundamental models in quantum optics. The model becomes exactly solvable by considering an atom without permanent dipole moments, whose excitation energy is quasi-resonant with the cavity photon energy, and by neglecting the non resonant (counter-rotating) terms. In this case, after including the decay of either the atom or the cavity mode to a continuum, one is able to derive the well-known phenomenology of quasi-resonant transitions, including the fluorescence triplets. In this work we consider the most general Rabi model, incorporating the effects of permanent atomic electric dipole moments, and, based on a perturbative analysis, we compare the intensities of emission lines induced by rotating terms, counter-rotating terms and parity-symmetry-breaking terms. The analysis reveals that the emission strength related to the existence of permanent dipoles may surpass the one due to the counter-rotating interaction terms, but is usually much weaker than the emission due to the main, resonant coupling. This ratio can be modified in systems with a reduced dimensionality or by engineering the energy spectral density of the continuum.},
urldate = {2021-07-28},
journal = {arXiv:2103.11232 [quant-ph]},
author = {Scala, Giovanni and Słowik, Karolina and Facchi, Paolo and Pascazio, Saverio and Pepe, Francesco},
month = mar,
year = {2021},
note = {arXiv: 2103.11232},
keywords = {Quantum Physics},
}
• M. Gachechiladze, B. Bąk, M. Pawłowski, and N. Miklin, “Quantum Bell inequalities from Information Causality – tight for Macroscopic Locality,” arXiv:2103.05029 [quant-ph], 2021.
Quantum generalizations of Bell inequalities are analytical expressions of correlations observed in the Bell experiment that are used to explain or estimate the set of correlations that quantum theory allows. Unlike standard Bell inequalities, their quantum analogs are rare in the literature, as no known algorithm can be used to find them systematically. In this work, we present a family of quantum Bell inequalities in scenarios where the number of settings or outcomes can be arbitrarily high. We derive these inequalities from the principle of Information Causality, and thus, we do not assume the formalism of quantum mechanics. Considering the symmetries of the derived inequalities, we show that the latter give the necessary and sufficient condition for the correlations to comply with Macroscopic Locality. As a result, we conclude that the principle of Information Causality is strictly stronger than the principle of Macroscopic Locality in the subspace defined by these symmetries.
@article{gachechiladze_quantum_2021,
title = {Quantum {Bell} inequalities from {Information} {Causality} -- tight for {Macroscopic} {Locality}},
url = {http://arxiv.org/abs/2103.05029},
abstract = {Quantum generalizations of Bell inequalities are analytical expressions of correlations observed in the Bell experiment that are used to explain or estimate the set of correlations that quantum theory allows. Unlike standard Bell inequalities, their quantum analogs are rare in the literature, as no known algorithm can be used to find them systematically. In this work, we present a family of quantum Bell inequalities in scenarios where the number of settings or outcomes can be arbitrarily high. We derive these inequalities from the principle of Information Causality, and thus, we do not assume the formalism of quantum mechanics. Considering the symmetries of the derived inequalities, we show that the latter give the necessary and sufficient condition for the correlations to comply with Macroscopic Locality. As a result, we conclude that the principle of Information Causality is strictly stronger than the principle of Macroscopic Locality in the subspace defined by these symmetries.},
urldate = {2021-07-28},
journal = {arXiv:2103.05029 [quant-ph]},
author = {Gachechiladze, Mariami and Bąk, Bartłomiej and Pawłowski, Marcin and Miklin, Nikolai},
month = mar,
year = {2021},
note = {arXiv: 2103.05029},
keywords = {Quantum Physics},
}
• D. Schmid, H. Du, J. H. Selby, and M. F. Pusey, “The only noncontextual model of the stabilizer subtheory is Gross’s,” arXiv:2101.06263 [quant-ph], 2021.
We prove that there is a unique nonnegative and diagram-preserving quasiprobability representation of the stabilizer subtheory in all odd dimensions, namely Gross’s discrete Wigner function. This representation is equivalent to Spekkens’ epistemically restricted toy theory, which is consequently singled out as the unique noncontextual ontological model for the stabilizer subtheory. Strikingly, the principle of noncontextuality is powerful enough (at least in this setting) to single out one particular classical realist interpretation. Our result explains the practical utility of Gross’s representation, e.g. why (in the setting of the stabilizer subtheory) negativity in this particular representation implies generalized contextuality, and hence sheds light on why negativity of this particular representation is a resource for quantum computational speedup. It also allows us to prove that generalized contextuality is a necessary resource for universal quantum computation in the state injection model. In all even dimensions, we prove that there does not exist any nonnegative and diagram-preserving quasiprobability representation of the stabilizer subtheory, and, hence, that the stabilizer subtheory is contextual in all even dimensions. Together, these results constitute a complete characterization of the (non)classicality of all stabilizer subtheories.
@article{schmid_only_2021,
title = {The only noncontextual model of the stabilizer subtheory is {Gross}'s},
url = {http://arxiv.org/abs/2101.06263},
abstract = {We prove that there is a unique nonnegative and diagram-preserving quasiprobability representation of the stabilizer subtheory in all odd dimensions, namely Gross's discrete Wigner function. This representation is equivalent to Spekkens' epistemically restricted toy theory, which is consequently singled out as the unique noncontextual ontological model for the stabilizer subtheory. Strikingly, the principle of noncontextuality is powerful enough (at least in this setting) to single out one particular classical realist interpretation. Our result explains the practical utility of Gross's representation, e.g. why (in the setting of the stabilizer subtheory) negativity in this particular representation implies generalized contextuality, and hence sheds light on why negativity of this particular representation is a resource for quantum computational speedup. It also allows us to prove that generalized contextuality is a necessary resource for universal quantum computation in the state injection model. In all even dimensions, we prove that there does not exist any nonnegative and diagram-preserving quasiprobability representation of the stabilizer subtheory, and, hence, that the stabilizer subtheory is contextual in all even dimensions. Together, these results constitute a complete characterization of the (non)classicality of all stabilizer subtheories.},
urldate = {2021-07-28},
journal = {arXiv:2101.06263 [quant-ph]},
author = {Schmid, David and Du, Haoxing and Selby, John H. and Pusey, Matthew F.},
month = feb,
year = {2021},
note = {arXiv: 2101.06263},
keywords = {Quantum Physics},
}
• T. D. Galley, F. Giacomini, and J. H. Selby, “A no-go theorem on the nature of the gravitational field beyond quantum theory,” arXiv:2012.01441 [gr-qc, physics:quant-ph], 2021.
Recently, table-top experiments involving massive quantum systems have been proposed to test the interface of quantum theory and gravity. In particular, the crucial point of the debate is whether it is possible to conclude anything on the quantum nature of the gravitational field, provided that two quantum systems become entangled due to solely the gravitational interaction. Typically, this question has been addressed by assuming an underlying physical theory to describe the gravitational interaction, but no systematic approach to characterise the set of possible gravitational theories which are compatible with the observation of entanglement has been proposed. Here, we introduce the framework of Generalised Probabilistic Theories (GPTs) to the study of the nature of the gravitational field. This framework has the advantage that it only relies on the set of operationally accessible states, transformations, and measurements, without presupposing an underlying theory. Hence, it provides a framework to systematically study all theories compatible with the detection of entanglement generated via the gravitational interaction between two non-classical systems. Assuming that such entanglement is observed we prove a no-go theorem stating that the following statements are incompatible: i) the two non-classical systems are independent subsystems, ii) the gravitational field is a physical degree of freedom which mediates the interaction and iii) the gravitational field is classical. Moreover we argue that conditions i) and ii) should be met, and hence that the gravitational field is non-classical. Non-classicality does not imply that the gravitational field is quantum, and to illustrate this we provide examples of non-classical and non-quantum theories which are logically consistent with the other conditions.
@article{galley_no-go_2021,
title = {A no-go theorem on the nature of the gravitational field beyond quantum theory},
url = {http://arxiv.org/abs/2012.01441},
abstract = {Recently, table-top experiments involving massive quantum systems have been proposed to test the interface of quantum theory and gravity. In particular, the crucial point of the debate is whether it is possible to conclude anything on the quantum nature of the gravitational field, provided that two quantum systems become entangled due to solely the gravitational interaction. Typically, this question has been addressed by assuming an underlying physical theory to describe the gravitational interaction, but no systematic approach to characterise the set of possible gravitational theories which are compatible with the observation of entanglement has been proposed. Here, we introduce the framework of Generalised Probabilistic Theories (GPTs) to the study of the nature of the gravitational field. This framework has the advantage that it only relies on the set of operationally accessible states, transformations, and measurements, without presupposing an underlying theory. Hence, it provides a framework to systematically study all theories compatible with the detection of entanglement generated via the gravitational interaction between two non-classical systems. Assuming that such entanglement is observed we prove a no-go theorem stating that the following statements are incompatible: i) the two non-classical systems are independent subsystems, ii) the gravitational field is a physical degree of freedom which mediates the interaction and iii) the gravitational field is classical. Moreover we argue that conditions i) and ii) should be met, and hence that the gravitational field is non-classical. Non-classicality does not imply that the gravitational field is quantum, and to illustrate this we provide examples of non-classical and non-quantum theories which are logically consistent with the other conditions.},
urldate = {2021-07-28},
journal = {arXiv:2012.01441 [gr-qc, physics:quant-ph]},
author = {Galley, Thomas D. and Giacomini, Flaminia and Selby, John H.},
month = jun,
year = {2021},
note = {arXiv: 2012.01441},
keywords = {Quantum Physics, General Relativity and Quantum Cosmology},
}
• D. Schmid, J. H. Selby, and R. W. Spekkens, “Unscrambling the omelette of causation and inference: The framework of causal-inferential theories,” arXiv:2009.03297 [quant-ph], 2021.
Using a process-theoretic formalism, we introduce the notion of a causal-inferential theory: a triple consisting of a theory of causal influences, a theory of inferences (of both the Boolean and Bayesian varieties), and a specification of how these interact. Recasting the notions of operational and realist theories in this mold clarifies what a realist account of an experiment offers beyond an operational account. It also yields a novel characterization of the assumptions and implications of standard no-go theorems for realist representations of operational quantum theory, namely, those based on Bell’s notion of locality and those based on generalized noncontextuality. Moreover, our process-theoretic characterization of generalised noncontextuality is shown to be implied by an even more natural principle which we term Leibnizianity. Most strikingly, our framework offers a way forward in a research program that seeks to circumvent these no-go results. Specifically, we argue that if one can identify axioms for a realist causal-inferential theory such that the notions of causation and inference can differ from their conventional (classical) interpretations, then one has the means of defining an intrinsically quantum notion of realism, and thereby a realist representation of operational quantum theory that salvages the spirit of locality and of noncontextuality.
@article{schmid_unscrambling_2021,
title = {Unscrambling the omelette of causation and inference: {The} framework of causal-inferential theories},
shorttitle = {Unscrambling the omelette of causation and inference},
url = {http://arxiv.org/abs/2009.03297},
abstract = {Using a process-theoretic formalism, we introduce the notion of a causal-inferential theory: a triple consisting of a theory of causal influences, a theory of inferences (of both the Boolean and Bayesian varieties), and a specification of how these interact. Recasting the notions of operational and realist theories in this mold clarifies what a realist account of an experiment offers beyond an operational account. It also yields a novel characterization of the assumptions and implications of standard no-go theorems for realist representations of operational quantum theory, namely, those based on Bell's notion of locality and those based on generalized noncontextuality. Moreover, our process-theoretic characterization of generalised noncontextuality is shown to be implied by an even more natural principle which we term Leibnizianity. Most strikingly, our framework offers a way forward in a research program that seeks to circumvent these no-go results. Specifically, we argue that if one can identify axioms for a realist causal-inferential theory such that the notions of causation and inference can differ from their conventional (classical) interpretations, then one has the means of defining an intrinsically quantum notion of realism, and thereby a realist representation of operational quantum theory that salvages the spirit of locality and of noncontextuality.},
urldate = {2021-07-28},
journal = {arXiv:2009.03297 [quant-ph]},
author = {Schmid, David and Selby, John H. and Spekkens, Robert W.},
month = may,
year = {2021},
note = {arXiv: 2009.03297},
keywords = {Quantum Physics},
}
• J. H. Selby, A. B. Sainz, and P. Horodecki, “Revisiting dynamics of quantum causal structures – when can causal order evolve?,” arXiv:2008.12757 [quant-ph], 2021.
Recently, there has been substantial interest in studying the dynamics of quantum theory beyond that of states, in particular, the dynamics of channels, measurements, and higher-order transformations. Ref. [Phys. Rev. X 8(1), 011047 (2018)] pursues this using the process matrix formalism, together with a definition of the possible dynamics of such process matrices, and focusing especially on the question of evolution of causal structures. One of its major conclusions is a strong theorem saying that, within the formalism, under continuous and reversible transformations, the causal order between operations must be preserved. Here we find a surprising result: if one is to take into account a full picture of the physical evolution of operations within the standard quantum-mechanical formalism, then one can actually draw the opposite conclusion. That is, we show that under certain continuous and reversible dynamics the causal order between operations is not necessarily preserved. We moreover identify and analyse the root of this apparent contradiction, specifically, that the commonly accepted and widely applied framework of higher-order processes, whilst mathematically sound, is not always appropriate for drawing conclusions on the fundamentals of physical dynamics. Finally we show how to reconcile the elements of the whole picture following the intuition based on entanglement processing by local operations and classical communication.
@article{selby_revisiting_2021,
title = {Revisiting dynamics of quantum causal structures -- when can causal order evolve?},
url = {http://arxiv.org/abs/2008.12757},
abstract = {Recently, there has been substantial interest in studying the dynamics of quantum theory beyond that of states, in particular, the dynamics of channels, measurements, and higher-order transformations. Ref. [Phys. Rev. X 8(1), 011047 (2018)] pursues this using the process matrix formalism, together with a definition of the possible dynamics of such process matrices, and focusing especially on the question of evolution of causal structures. One of its major conclusions is a strong theorem saying that, within the formalism, under continuous and reversible transformations, the causal order between operations must be preserved. Here we find a surprising result: if one is to take into account a full picture of the physical evolution of operations within the standard quantum-mechanical formalism, then one can actually draw the opposite conclusion. That is, we show that under certain continuous and reversible dynamics the causal order between operations is not necessarily preserved. We moreover identify and analyse the root of this apparent contradiction, specifically, that the commonly accepted and widely applied framework of higher-order processes, whilst mathematically sound, is not always appropriate for drawing conclusions on the fundamentals of physical dynamics. Finally we show how to reconcile the elements of the whole picture following the intuition based on entanglement processing by local operations and classical communication.},
urldate = {2021-07-28},
journal = {arXiv:2008.12757 [quant-ph]},
author = {Selby, John H. and Sainz, Ana Belén and Horodecki, Paweł},
month = mar,
year = {2021},
note = {arXiv: 2008.12757},
keywords = {Quantum Physics},
}
• D. Schmid, T. C. Fraser, R. Kunjwal, A. B. Sainz, E. Wolfe, and R. W. Spekkens, “Understanding the interplay of entanglement and nonlocality: motivating and developing a new branch of entanglement theory,” arXiv:2004.09194 [quant-ph], 2021.
A standard approach to quantifying resources is to determine which operations on the resources are freely available, and to deduce the partial order over resources that is induced by the relation of convertibility under the free operations. If the resource of interest is the nonclassicality of the correlations embodied in a quantum state, i.e., entanglement, then the common assumption is that the appropriate choice of free operations is Local Operations and Classical Communication (LOCC). We here advocate for the study of a different choice of free operations, namely, Local Operations and Shared Randomness (LOSR), and demonstrate its utility in understanding the interplay between the entanglement of states and the nonlocality of the correlations in Bell experiments. Specifically, we show that the LOSR paradigm (i) provides a resolution of the anomalies of nonlocality, wherein partially entangled states exhibit more nonlocality than maximally entangled states, (ii) entails new notions of genuine multipartite entanglement and nonlocality that are free of the pathological features of the conventional notions, and (iii) makes possible a resource-theoretic account of the self-testing of entangled states which generalizes and simplifies prior results. Along the way, we derive some fundamental results concerning the necessary and sufficient conditions for convertibility between pure entangled states under LOSR and highlight some of their consequences, such as the impossibility of catalysis for bipartite pure states. The resource-theoretic perspective also clarifies why it is neither surprising nor problematic that there are mixed entangled states which do not violate any Bell inequality. Our results motivate the study of LOSR-entanglement as a new branch of entanglement theory.
@article{schmid_understanding_2021,
title = {Understanding the interplay of entanglement and nonlocality: motivating and developing a new branch of entanglement theory},
shorttitle = {Understanding the interplay of entanglement and nonlocality},
url = {http://arxiv.org/abs/2004.09194},
abstract = {A standard approach to quantifying resources is to determine which operations on the resources are freely available, and to deduce the partial order over resources that is induced by the relation of convertibility under the free operations. If the resource of interest is the nonclassicality of the correlations embodied in a quantum state, i.e., entanglement, then the common assumption is that the appropriate choice of free operations is Local Operations and Classical Communication (LOCC). We here advocate for the study of a different choice of free operations, namely, Local Operations and Shared Randomness (LOSR), and demonstrate its utility in understanding the interplay between the entanglement of states and the nonlocality of the correlations in Bell experiments. Specifically, we show that the LOSR paradigm (i) provides a resolution of the anomalies of nonlocality, wherein partially entangled states exhibit more nonlocality than maximally entangled states, (ii) entails new notions of genuine multipartite entanglement and nonlocality that are free of the pathological features of the conventional notions, and (iii) makes possible a resource-theoretic account of the self-testing of entangled states which generalizes and simplifies prior results. Along the way, we derive some fundamental results concerning the necessary and sufficient conditions for convertibility between pure entangled states under LOSR and highlight some of their consequences, such as the impossibility of catalysis for bipartite pure states. The resource-theoretic perspective also clarifies why it is neither surprising nor problematic that there are mixed entangled states which do not violate any Bell inequality. Our results motivate the study of LOSR-entanglement as a new branch of entanglement theory.},
urldate = {2021-07-28},
journal = {arXiv:2004.09194 [quant-ph]},
author = {Schmid, David and Fraser, Thomas C. and Kunjwal, Ravi and Sainz, Ana Belen and Wolfe, Elie and Spekkens, Robert W.},
month = may,
year = {2021},
note = {arXiv: 2004.09194},
keywords = {Quantum Physics},
}
• P. J. Cavalcanti, J. H. Selby, J. Sikora, T. D. Galley, and A. B. Sainz, “Witworld: A generalised probabilistic theory featuring post-quantum steering,” arXiv:2102.06581 [quant-ph], 2021.
We introduce Witworld: a generalised probabilistic theory with strong post-quantum features, which subsumes Boxworld. Witworld is the first theory that features post-quantum steering, and also the first that outperforms quantum theory at the task of remote state preparation. We further show post-quantum steering to be the source of this advantage, and hence present the first instance where post-quantum steering is a stronger-than-quantum resource for information processing.
@article{cavalcanti_witworld:_2021,
title = {Witworld: {A} generalised probabilistic theory featuring post-quantum steering},
shorttitle = {Witworld},
url = {http://arxiv.org/abs/2102.06581},
abstract = {We introduce Witworld: a generalised probabilistic theory with strong post-quantum features, which subsumes Boxworld. Witworld is the first theory that features post-quantum steering, and also the first that outperforms quantum theory at the task of remote state preparation. We further show post-quantum steering to be the source of this advantage, and hence present the first instance where post-quantum steering is a stronger-than-quantum resource for information processing.},
urldate = {2021-07-28},
journal = {arXiv:2102.06581 [quant-ph]},
author = {Cavalcanti, Paulo J. and Selby, John H. and Sikora, Jamie and Galley, Thomas D. and Sainz, Ana Belén},
month = feb,
year = {2021},
note = {arXiv: 2102.06581},
keywords = {Quantum Physics},
}
• M. Grassl, F. Huber, and A. Winter, “Entropic proofs of Singleton bounds for quantum error-correcting codes,” arXiv:2010.07902 [quant-ph], 2021.
We show that a relatively simple reasoning using von Neumann entropy inequalities yields a robust proof of the quantum Singleton bound for quantum error-correcting codes (QECC). For entanglement-assisted quantum error-correcting codes (EAQECC) and catalytic codes (CQECC), the generalised quantum Singleton bound was believed to hold for many years until recently one of us found a counterexample [MG, arXiv:2007.01249]. Here, we rectify this state of affairs by proving the correct generalised quantum Singleton bound for CQECC, extending the above-mentioned proof method for QECC; we also prove information-theoretically tight bounds on the entanglement-communication tradeoff for EAQECC. All of the bounds relate block length \$n\$ and code length \$k\$ for given minimum distance \$d\$ and we show that they are robust, in the sense that they hold with small perturbations for codes which only correct most of the erasure errors of less than \$d\$ letters. In contrast to the classical case, the bounds take on qualitatively different forms depending on whether the minimum distance is smaller or larger than half the block length. We also provide a propagation rule, where any pure QECC yields an EAQECC with the same distance and dimension but of shorter block length.
@article{grassl_entropic_2021,
title = {Entropic proofs of {Singleton} bounds for quantum error-correcting codes},
url = {http://arxiv.org/abs/2010.07902},
abstract = {We show that a relatively simple reasoning using von Neumann entropy inequalities yields a robust proof of the quantum Singleton bound for quantum error-correcting codes (QECC). For entanglement-assisted quantum error-correcting codes (EAQECC) and catalytic codes (CQECC), the generalised quantum Singleton bound was believed to hold for many years until recently one of us found a counterexample [MG, arXiv:2007.01249]. Here, we rectify this state of affairs by proving the correct generalised quantum Singleton bound for CQECC, extending the above-mentioned proof method for QECC; we also prove information-theoretically tight bounds on the entanglement-communication tradeoff for EAQECC. All of the bounds relate block length \$n\$ and code length \$k\$ for given minimum distance \$d\$ and we show that they are robust, in the sense that they hold with small perturbations for codes which only correct most of the erasure errors of less than \$d\$ letters. In contrast to the classical case, the bounds take on qualitatively different forms depending on whether the minimum distance is smaller or larger than half the block length. We also provide a propagation rule, where any pure QECC yields an EAQECC with the same distance and dimension but of shorter block length.},
urldate = {2021-07-28},
journal = {arXiv:2010.07902 [quant-ph]},
author = {Grassl, Markus and Huber, Felix and Winter, Andreas},
month = feb,
year = {2021},
note = {arXiv: 2010.07902},
keywords = {Quantum Physics, Computer Science - Information Theory},
}
• B. Ahmadi, S. Salimi, and A. S. Khorashad, “Refined Definitions of Heat and Work in Quantum Thermodynamics,” arXiv:1912.01983 [quant-ph], 2021.
In this paper, unambiguous redefinitions of heat and work are presented for quantum thermodynamic systems. We will use genuine reasoning based on which Clausius originally defined work and heat in establishing thermodynamics. The change in the energy which is accompanied by a change in the entropy is identified as heat, while any change in the energy which does not lead to a change in the entropy is known as work. It will be seen that quantum coherence does not allow all the energy exchanged between two quantum systems to be only of the heat form. Several examples will also be discussed. Finally, it will be shown that these refined definitions will strongly affect the entropy production of quantum thermodynamic processes giving new insight into the irreversibility of quantum processes.
@article{ahmadi_refined_2021,
title = {Refined {Definitions} of {Heat} and {Work} in {Quantum} {Thermodynamics}},
url = {http://arxiv.org/abs/1912.01983},
abstract = {In this paper, unambiguous redefinitions of heat and work are presented for quantum thermodynamic systems. We will use genuine reasoning based on which Clausius originally defined work and heat in establishing thermodynamics. The change in the energy which is accompanied by a change in the entropy is identified as heat, while any change in the energy which does not lead to a change in the entropy is known as work. It will be seen that quantum coherence does not allow all the energy exchanged between two quantum systems to be only of the heat form. Several examples will also be discussed. Finally, it will be shown that these refined definitions will strongly affect the entropy production of quantum thermodynamic processes giving new insight into the irreversibility of quantum processes.},
urldate = {2021-07-28},
journal = {arXiv:1912.01983 [quant-ph]},
author = {Ahmadi, B. and Salimi, S. and Khorashad, A. S.},
month = jul,
year = {2021},
note = {arXiv: 1912.01983},
keywords = {Quantum Physics},
}
2020
• D. Schmid, J. H. Selby, M. F. Pusey, and R. W. Spekkens, “A structure theorem for generalized-noncontextual ontological models,” arXiv:2005.07161 [quant-ph], 2020.
It is useful to have a criterion for when the predictions of an operational theory should be considered classically explainable. Here we take the criterion to be that the theory admits of a generalized-noncontextual ontological model. Existing works on generalized noncontextuality have focused on experimental scenarios having a simple structure, typically, prepare-measure scenarios. Here, we formally extend the framework of ontological models as well as the principle of generalized noncontextuality to arbitrary compositional scenarios. We leverage this process-theoretic framework to prove that, under some reasonable assumptions, every generalized-noncontextual ontological model of a tomographically local operational theory has a surprisingly rigid and simple mathematical structure; in short, it corresponds to a frame representation which is not overcomplete. One consequence of this theorem is that the largest number of ontic states possible in any such model is given by the dimension of the associated generalized probabilistic theory. This constraint is useful for generating noncontextuality no-go theorems as well as techniques for experimentally certifying contextuality. Along the way, we extend known results concerning the equivalence of different notions of classicality from prepare-measure scenarios to arbitrary compositional scenarios. Specifically, we prove a correspondence between the following three notions of classical explainability of an operational theory: (i) admitting a noncontextual ontological model, (ii) admitting of a positive quasiprobability representation, and (iii) being simplex-embeddable.
@article{schmid_structure_2020,
title = {A structure theorem for generalized-noncontextual ontological models},
url = {http://arxiv.org/abs/2005.07161},
abstract = {It is useful to have a criterion for when the predictions of an operational theory should be considered classically explainable. Here we take the criterion to be that the theory admits of a generalized-noncontextual ontological model. Existing works on generalized noncontextuality have focused on experimental scenarios having a simple structure, typically, prepare-measure scenarios. Here, we formally extend the framework of ontological models as well as the principle of generalized noncontextuality to arbitrary compositional scenarios. We leverage this process-theoretic framework to prove that, under some reasonable assumptions, every generalized-noncontextual ontological model of a tomographically local operational theory has a surprisingly rigid and simple mathematical structure; in short, it corresponds to a frame representation which is not overcomplete. One consequence of this theorem is that the largest number of ontic states possible in any such model is given by the dimension of the associated generalized probabilistic theory. This constraint is useful for generating noncontextuality no-go theorems as well as techniques for experimentally certifying contextuality. Along the way, we extend known results concerning the equivalence of different notions of classicality from prepare-measure scenarios to arbitrary compositional scenarios. Specifically, we prove a correspondence between the following three notions of classical explainability of an operational theory: (i) admitting a noncontextual ontological model, (ii) admitting of a positive quasiprobability representation, and (iii) being simplex-embeddable.},
urldate = {2020-09-04},
journal = {arXiv:2005.07161 [quant-ph]},
author = {Schmid, David and Selby, John H. and Pusey, Matthew F. and Spekkens, Robert W.},
month = may,
year = {2020},
note = {arXiv: 2005.07161},
keywords = {Quantum Physics},
}
• M. Gachechiladze, N. Miklin, and R. Chaves, “Quantifying causal influences in the presence of a quantum common cause,” arXiv:2007.01221 [quant-ph, stat], 2020.
Quantum mechanics challenges our intuition on the cause-effect relations in nature. Some fundamental concepts, including Reichenbach’s common cause principle or the notion of local realism, have to be reconsidered. Traditionally, this is witnessed by the violation of a Bell inequality. But are Bell inequalities the only signature of the incompatibility between quantum correlations and causality theory? Motivated by this question we introduce a general framework able to estimate causal influences between two variables, without the need of interventions and irrespectively of the classical, quantum, or even post-quantum nature of a common cause. In particular, by considering the simplest instrumental scenario – for which violation of Bell inequalities is not possible – we show that every pure bipartite entangled state violates the classical bounds on causal influence, thus answering in negative to the posed question and opening a new venue to explore the role of causality within quantum theory.
@article{gachechiladze_quantifying_2020,
title = {Quantifying causal influences in the presence of a quantum common cause},
url = {http://arxiv.org/abs/2007.01221},
abstract = {Quantum mechanics challenges our intuition on the cause-effect relations in nature. Some fundamental concepts, including Reichenbach's common cause principle or the notion of local realism, have to be reconsidered. Traditionally, this is witnessed by the violation of a Bell inequality. But are Bell inequalities the only signature of the incompatibility between quantum correlations and causality theory? Motivated by this question we introduce a general framework able to estimate causal influences between two variables, without the need of interventions and irrespectively of the classical, quantum, or even post-quantum nature of a common cause. In particular, by considering the simplest instrumental scenario -- for which violation of Bell inequalities is not possible -- we show that every pure bipartite entangled state violates the classical bounds on causal influence, thus answering in negative to the posed question and opening a new venue to explore the role of causality within quantum theory.},
urldate = {2020-09-04},
journal = {arXiv:2007.01221 [quant-ph, stat]},
author = {Gachechiladze, Mariami and Miklin, Nikolai and Chaves, Rafael},
month = jul,
year = {2020},
note = {arXiv: 2007.01221},
keywords = {Quantum Physics, Statistics - Machine Learning},
}
• L. Knips, J. Dziewior, W. Kłobus, W. Laskowski, T. Paterek, P. J. Shadbolt, H. Weinfurter, and J. D. A. Meinecke, “Multipartite entanglement analysis from random correlations,” npj Quantum Information, vol. 6, iss. 1, p. 51, 2020. doi:10.1038/s41534-020-0281-5
Quantum entanglement is usually revealed via a well aligned, carefully chosen set of measurements. Yet, under a number of experimental conditions, for example in communication within multiparty quantum networks, noise along the channels or fluctuating orientations of reference frames may ruin the quality of the distributed states. Here, we show that even for strong fluctuations one can still gain detailed information about the state and its entanglement using random measurements. Correlations between all or subsets of the measurement outcomes and especially their distributions provide information about the entanglement structure of a state. We analytically derive an entanglement criterion for two-qubit states and provide strong numerical evidence for witnessing genuine multipartite entanglement of three and four qubits. Our methods take the purity of the states into account and are based on only the second moments of measured correlations. Extended features of this theory are demonstrated experimentally with four photonic qubits. As long as the rate of entanglement generation is sufficiently high compared to the speed of the fluctuations, this method overcomes any type and strength of localized unitary noise.
@article{knips_multipartite_2020,
title = {Multipartite entanglement analysis from random correlations},
volume = {6},
issn = {2056-6387},
url = {http://www.nature.com/articles/s41534-020-0281-5},
doi = {10.1038/s41534-020-0281-5},
abstract = {Quantum entanglement is usually revealed via a well aligned, carefully chosen set of measurements. Yet, under a number of experimental conditions, for example in communication within multiparty quantum networks, noise along the channels or fluctuating orientations of reference frames may ruin the quality of the distributed states. Here, we show that even for strong fluctuations one can still gain detailed information about the state and its entanglement using random measurements. Correlations between all or subsets of the measurement outcomes and especially their distributions provide information about the entanglement structure of a state. We analytically derive an entanglement criterion for two-qubit states and provide strong numerical evidence for witnessing genuine multipartite entanglement of three and four qubits. Our methods take the purity of the states into account and are based on only the second moments of measured correlations. Extended features of this theory are demonstrated experimentally with four photonic qubits. As long as the rate of entanglement generation is sufficiently high compared to the speed of the fluctuations, this method overcomes any type and strength of localized unitary noise.},
language = {en},
number = {1},
urldate = {2021-05-10},
journal = {npj Quantum Information},
author = {Knips, Lukas and Dziewior, Jan and Kłobus, Waldemar and Laskowski, Wiesław and Paterek, Tomasz and Shadbolt, Peter J. and Weinfurter, Harald and Meinecke, Jasmin D. A.},
month = dec,
year = {2020},
pages = {51},
}
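The BibTeX records on this page can be used as-is from a bibliography database. A minimal sketch of citing the entry above (the filename refs.bib is an assumption for illustration, not part of the page):

```latex
% Hypothetical minimal document; assumes the @article record above
% has been saved verbatim into refs.bib.
\documentclass{article}
\begin{document}
Multipartite entanglement can be analysed from random
correlations~\cite{knips_multipartite_2020}.
\bibliographystyle{plain}
\bibliography{refs}  % refs.bib holds the record above
\end{document}
```

Running pdflatex, then bibtex, then pdflatex twice resolves the citation.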
• R. Ramanathan, M. Horodecki, H. Anwer, S. Pironio, K. Horodecki, M. Grünfeld, S. Muhammad, M. Bourennane, and P. Horodecki, “Practical No-Signalling proof Randomness Amplification using Hardy paradoxes and its experimental implementation,” arXiv:1810.11648 [quant-ph], 2020.
Device-Independent (DI) security is the best form of quantum cryptography, providing information-theoretic security based on the very laws of nature. In its highest form, security is guaranteed against adversaries limited only by the no-superluminal signalling rule of relativity. The task of randomness amplification, to generate secure fully uniform bits starting from weakly random seeds, is of both cryptographic and foundational interest, being important for the generation of cryptographically secure random numbers as well as bringing deep connections to the existence of free-will. DI no-signalling proof protocols for this fundamental task have thus far relied on esoteric proofs of non-locality termed pseudo-telepathy games, complicated multi-party setups or high-dimensional quantum systems, and have remained out of reach of experimental implementation. In this paper, we construct the first practically relevant no-signalling proof DI protocols for randomness amplification based on the simplest proofs of Bell non-locality and illustrate them with an experimental implementation in a quantum optical setup using polarised photons. Technically, we relate the problem to the vast field of Hardy paradoxes, without which it would be impossible to achieve amplification of arbitrarily weak sources in the simplest Bell non-locality scenario consisting of two parties choosing between two binary inputs. Furthermore, we identify a deep connection between proofs of the celebrated Kochen-Specker theorem and Hardy paradoxes that enables us to construct Hardy paradoxes with the non-zero probability taking any value in (0,1]. Our methods enable us, under the fair-sampling assumption of the experiment, to realize up to 25 bits of randomness in 20 hours of experimental data collection from an initial private source of randomness 0.1 away from uniform.
@article{ramanathan_practical_2020,
title = {Practical {No}-{Signalling} proof {Randomness} {Amplification} using {Hardy} paradoxes and its experimental implementation},
url = {http://arxiv.org/abs/1810.11648},
abstract = {Device-Independent (DI) security is the best form of quantum cryptography, providing information-theoretic security based on the very laws of nature. In its highest form, security is guaranteed against adversaries limited only by the no-superluminal signalling rule of relativity. The task of randomness amplification, to generate secure fully uniform bits starting from weakly random seeds, is of both cryptographic and foundational interest, being important for the generation of cryptographically secure random numbers as well as bringing deep connections to the existence of free-will. DI no-signalling proof protocols for this fundamental task have thus far relied on esoteric proofs of non-locality termed pseudo-telepathy games, complicated multi-party setups or high-dimensional quantum systems, and have remained out of reach of experimental implementation. In this paper, we construct the first practically relevant no-signalling proof DI protocols for randomness amplification based on the simplest proofs of Bell non-locality and illustrate them with an experimental implementation in a quantum optical setup using polarised photons. Technically, we relate the problem to the vast field of Hardy paradoxes, without which it would be impossible to achieve amplification of arbitrarily weak sources in the simplest Bell non-locality scenario consisting of two parties choosing between two binary inputs. Furthermore, we identify a deep connection between proofs of the celebrated Kochen-Specker theorem and Hardy paradoxes that enables us to construct Hardy paradoxes with the non-zero probability taking any value in \$(0,1]\$. Our methods enable us, under the fair-sampling assumption of the experiment, to realize up to \$25\$ bits of randomness in \$20\$ hours of experimental data collection from an initial private source of randomness \$0.1\$ away from uniform.},
urldate = {2021-05-11},
journal = {arXiv:1810.11648 [quant-ph]},
author = {Ramanathan, Ravishankar and Horodecki, Michał and Anwer, Hammad and Pironio, Stefano and Horodecki, Karol and Grünfeld, Marcus and Muhammad, Sadiq and Bourennane, Mohamed and Horodecki, Paweł},
month = sep,
year = {2020},
note = {arXiv: 1810.11648},
keywords = {Quantum Physics},
}
• M. Banacki, M. Marciniak, K. Horodecki, and P. Horodecki, “Information backflow may not indicate quantum memory,” arXiv:2008.12638 [quant-ph], 2020.
We analyze recent approaches to quantum Markovianity and how they relate to the proper definition of quantum memory. We point out that the well-known criterion of information backflow may not correctly report character of the memory falsely signaling its quantumness. Therefore, as a complement to the well-known criteria, we propose several concepts of elementary dynamical maps. Maps of this type do not increase distinguishability of states which are indistinguishable by von Neumann measurements in a given basis. Those notions and convexity allows us to define general classes of processes without quantum memory in a weak and strong sense. Finally, we provide a practical characterization of the most intuitive class in terms of the new concept of witness of quantum information backflow.
@article{banacki_information_2020,
title = {Information backflow may not indicate quantum memory},
url = {http://arxiv.org/abs/2008.12638},
abstract = {We analyze recent approaches to quantum Markovianity and how they relate to the proper definition of quantum memory. We point out that the well-known criterion of information backflow may not correctly report character of the memory falsely signaling its quantumness. Therefore, as a complement to the well-known criteria, we propose several concepts of elementary dynamical maps. Maps of this type do not increase distinguishability of states which are indistinguishable by von Neumann measurements in a given basis. Those notions and convexity allows us to define general classes of processes without quantum memory in a weak and strong sense. Finally, we provide a practical characterization of the most intuitive class in terms of the new concept of witness of quantum information backflow.},
urldate = {2021-07-28},
journal = {arXiv:2008.12638 [quant-ph]},
author = {Banacki, Michal and Marciniak, Marcin and Horodecki, Karol and Horodecki, Pawel},
month = aug,
year = {2020},
note = {arXiv: 2008.12638},
keywords = {Quantum Physics},
}
• R. Ramanathan, M. Banacki, R. R. Rodríguez, and P. Horodecki, “Single trusted qubit is necessary and sufficient for quantum realisation of extremal no-signaling correlations,” arXiv:2004.14782 [quant-ph], 2020.
Quantum statistics can be considered from the perspective of postquantum no-signaling theories in which either none or only a certain number of quantum systems are trusted. In these scenarios, the role of states is played by the so-called no-signaling boxes or no-signaling assemblages respectively. It has been shown so far that in the usual Bell non-locality scenario with a single measurement run, quantum statistics can never reproduce an extremal non-local point within the set of no-signaling boxes. We provide here a general no-go rule showing that the latter stays true even if arbitrary sequential measurements are allowed. On the other hand, we prove a positive result showing that already a single trusted qubit is enough for quantum theory to produce a self-testable extremal point within the corresponding set of no-signaling assemblages. This result opens up the possibility for security proofs of cryptographic protocols against general no-signaling adversaries.
@article{ramanathan_single_2020,
title = {Single trusted qubit is necessary and sufficient for quantum realisation of extremal no-signaling correlations},
url = {http://arxiv.org/abs/2004.14782},
abstract = {Quantum statistics can be considered from the perspective of postquantum no-signaling theories in which either none or only a certain number of quantum systems are trusted. In these scenarios, the role of states is played by the so-called no-signaling boxes or no-signaling assemblages respectively. It has been shown so far that in the usual Bell non-locality scenario with a single measurement run, quantum statistics can never reproduce an extremal non-local point within the set of no-signaling boxes. We provide here a general no-go rule showing that the latter stays true even if arbitrary sequential measurements are allowed. On the other hand, we prove a positive result showing that already a single trusted qubit is enough for quantum theory to produce a self-testable extremal point within the corresponding set of no-signaling assemblages. This result opens up the possibility for security proofs of cryptographic protocols against general no-signaling adversaries.},
urldate = {2021-07-28},
journal = {arXiv:2004.14782 [quant-ph]},
author = {Ramanathan, Ravishankar and Banacki, Michał and Rodríguez, Ricard Ravell and Horodecki, Paweł},
month = apr,
year = {2020},
note = {arXiv: 2004.14782},
keywords = {Quantum Physics},
}
• M. Markiewicz, M. Pandit, and W. Laskowski, “Multiparameter estimation in generalized Mach-Zehnder interferometer,” arXiv:2012.07645 [quant-ph], 2020.
In this work, we investigate the problem of multiphase estimation using generalized 3- and 4-mode Mach-Zehnder interferometer. In our setup, we assume that the number of unknown phases is the same as the number of modes in the interferometer, which introduces strong correlations between estimators of the phases. We show that despite these correlations and despite the lack of optimisation of a measurement strategy (a fixed interferometer is used) we can still obtain the Heisenberg-like scaling of precision of estimation of all the parameters. Our estimation scheme can be applied to the task of quantum-enhanced sensing in 3-dimensional interferometric configurations.
@article{markiewicz_multiparameter_2020,
title = {Multiparameter estimation in generalized {Mach}-{Zehnder} interferometer},
url = {http://arxiv.org/abs/2012.07645},
abstract = {In this work, we investigate the problem of multiphase estimation using generalized \$3\$- and \$4\$-mode Mach-Zehnder interferometer. In our setup, we assume that the number of unknown phases is the same as the number of modes in the interferometer, which introduces strong correlations between estimators of the phases. We show that despite these correlations and despite the lack of optimisation of a measurement strategy (a fixed interferometer is used) we can still obtain the Heisenberg-like scaling of precision of estimation of all the parameters. Our estimation scheme can be applied to the task of quantum-enhanced sensing in 3-dimensional interferometric configurations.},
urldate = {2021-07-28},
journal = {arXiv:2012.07645 [quant-ph]},
author = {Markiewicz, Marcin and Pandit, Mahasweta and Laskowski, Wieslaw},
month = dec,
year = {2020},
note = {arXiv: 2012.07645},
keywords = {Quantum Physics},
}
• Ł. Czekaj, A. B. Sainz, J. Selby, and M. Horodecki, “Correlations constrained by composite measurements,” arXiv:2009.04994 [quant-ph], 2020.
How to understand the set of correlations admissible in nature is one outstanding open problem in the core of the foundations of quantum theory. Here we take a complementary viewpoint to the device-independent approach, and explore the correlations that physical theories may feature when restricted by some particular constraints on their measurements. We show that demanding that a theory exhibits a composite measurement imposes a hierarchy of constraints on the structure of its sets of states and effects, which translate to a hierarchy of constraints on the allowed correlations themselves. We moreover focus on the particular case where one demands the existence of an entangled measurement that reads out the parity of local fiducial measurements. By formulating a non-linear Optimisation Problem, and semidefinite relaxations of it, we explore the consequences of the existence of such a parity reading measurement for violations of Bell inequalities. In particular, we show that in certain situations this assumption has surprisingly strong consequences, namely, that Tsirelson’s bound can be recovered.
@article{czekaj_correlations_2020,
title = {Correlations constrained by composite measurements},
url = {http://arxiv.org/abs/2009.04994},
abstract = {How to understand the set of correlations admissible in nature is one outstanding open problem in the core of the foundations of quantum theory. Here we take a complementary viewpoint to the device-independent approach, and explore the correlations that physical theories may feature when restricted by some particular constraints on their measurements. We show that demanding that a theory exhibits a composite measurement imposes a hierarchy of constraints on the structure of its sets of states and effects, which translate to a hierarchy of constraints on the allowed correlations themselves. We moreover focus on the particular case where one demands the existence of an entangled measurement that reads out the parity of local fiducial measurements. By formulating a non-linear Optimisation Problem, and semidefinite relaxations of it, we explore the consequences of the existence of such a parity reading measurement for violations of Bell inequalities. In particular, we show that in certain situations this assumption has surprisingly strong consequences, namely, that Tsirelson's bound can be recovered.},
urldate = {2021-07-28},
journal = {arXiv:2009.04994 [quant-ph]},
author = {Czekaj, Łukasz and Sainz, Ana Belén and Selby, John and Horodecki, Michał},
month = sep,
year = {2020},
note = {arXiv: 2009.04994},
keywords = {Quantum Physics},
}
• A. Z. Goldberg, P. de la Hoz, G. Bjork, A. B. Klimov, M. Grassl, G. Leuchs, and L. L. Sanchez-Soto, “Quantum concepts in optical polarization,” arXiv:2011.03979 [quant-ph], 2020.
We comprehensively review the quantum theory of the polarization properties of light. In classical optics, these traits are characterized by the Stokes parameters, which can be geometrically interpreted using the Poincaré sphere. Remarkably, these Stokes parameters can also be applied to the quantum world, but then important differences emerge: now, because fluctuations in the number of photons are unavoidable, one is forced to work in the three-dimensional Poincaré space that can be regarded as a set of nested spheres. Additionally, higher-order moments of the Stokes variables might play a substantial role for quantum states, which is not the case for most classical Gaussian states. This brings about important differences between these two worlds that we review in detail. In particular, the classical degree of polarization produces unsatisfactory results in the quantum domain. We compare alternative quantum degrees and put forth that they order various states differently. Finally, intrinsically nonclassical states are explored and their potential applications in quantum technologies are discussed.
@article{goldberg_quantum_2020,
title = {Quantum concepts in optical polarization},
url = {http://arxiv.org/abs/2011.03979},
abstract = {We comprehensively review the quantum theory of the polarization properties of light. In classical optics, these traits are characterized by the Stokes parameters, which can be geometrically interpreted using the Poincar\'e sphere. Remarkably, these Stokes parameters can also be applied to the quantum world, but then important differences emerge: now, because fluctuations in the number of photons are unavoidable, one is forced to work in the three-dimensional Poincar\'e space that can be regarded as a set of nested spheres. Additionally, higher-order moments of the Stokes variables might play a substantial role for quantum states, which is not the case for most classical Gaussian states. This brings about important differences between these two worlds that we review in detail. In particular, the classical degree of polarization produces unsatisfactory results in the quantum domain. We compare alternative quantum degrees and put forth that they order various states differently. Finally, intrinsically nonclassical states are explored and their potential applications in quantum technologies are discussed.},
urldate = {2021-07-28},
journal = {arXiv:2011.03979 [quant-ph]},
author = {Goldberg, Aaron Z. and de la Hoz, Pablo and Bjork, Gunnar and Klimov, Andrei B. and Grassl, Markus and Leuchs, Gerd and Sanchez-Soto, Luis L.},
month = nov,
year = {2020},
note = {arXiv: 2011.03979},
keywords = {Quantum Physics},
}
• B. Ahmadi, S. Salimi, and A. S. Khorashad, “No Entropy Production in Quantum Thermodynamics,” arXiv:2002.10747 [quant-ph], 2020.
In this work we will show that there exists a fundamental difference between microscopic quantum thermodynamics and macroscopic classical thermodynamics. It will be proved that the entropy production in quantum thermodynamics always vanishes for both closed and open quantum thermodynamic systems. This novel and very surprising result is derived based on the genuine reasoning Clausius used to establish the science of thermodynamics in the first place. This result will interestingly lead to define the generalized temperature for any non-equilibrium quantum system.
@article{ahmadi_no_2020,
title = {No {Entropy} {Production} in {Quantum} {Thermodynamics}},
url = {http://arxiv.org/abs/2002.10747},
abstract = {In this work we will show that there exists a fundamental difference between microscopic quantum thermodynamics and macroscopic classical thermodynamics. It will be proved that the entropy production in quantum thermodynamics always vanishes for both closed and open quantum thermodynamic systems. This novel and very surprising result is derived based on the genuine reasoning Clausius used to establish the science of thermodynamics in the first place. This result will interestingly lead to define the generalized temperature for any non-equilibrium quantum system.},
urldate = {2021-07-28},
journal = {arXiv:2002.10747 [quant-ph]},
author = {Ahmadi, B. and Salimi, S. and Khorashad, A. S.},
month = feb,
year = {2020},
note = {arXiv: 2002.10747},
keywords = {Quantum Physics},
}
• S. Das, S. Bäuml, M. Winczewski, and K. Horodecki, “Universal limitations on quantum key distribution over a network,” arXiv:1912.03646 [quant-ph], 2020.
The possibility to achieve secure communication among trusted parties by means of the quantum entanglement is intriguing both from a fundamental and an application purpose. In this work, we show that any state (after distillation) from which a quantum secret key can be obtained by local measurements has to be genuinely multipartite entangled. We introduce the most general form of memoryless network quantum channel: quantum multiplex channels. We define and determine asymptotic and non-asymptotic LOCC assisted conference key agreement capacities for quantum multiplex channels and provide various strong and weak converse bounds in terms of the divergence based entanglement measures of the quantum multiplex channels. The structure of our protocol manifested by an adaptive strategy of secret key and entanglement (GHZ state) distillation over an arbitrary multiplex quantum channel is generic. In particular, it provides a universal framework to study the performance of quantum key repeaters and – for the first time – of the MDI-QKD setups of channels. For teleportation-covariant multiplex quantum channels, which are channels with certain symmetries, we get upper bounds on the secret key agreement capacities in terms of the entanglement measures of their Choi states. For some network prototypes of practical relevance, we evaluate upper bounds on the conference key agreement capacities and MDI-QKD capacities. Upper bounds on the LOCC-assisted conference key agreement rates are also upper bounds on the distillation rates of GHZ states, a class of genuinely entangled pure states. We also obtain bounds on the rates at which conference key and GHZ states can be distilled from a finite number of copies of an arbitrary multipartite quantum state. Using our bounds, in particular cases, we are able to determine the capacities for quantum key distribution channels and rates of GHZ-state distillation.
@article{das_universal_2020,
title = {Universal limitations on quantum key distribution over a network},
url = {http://arxiv.org/abs/1912.03646},
abstract = {The possibility to achieve secure communication among trusted parties by means of the quantum entanglement is intriguing both from a fundamental and an application purpose. In this work, we show that any state (after distillation) from which a quantum secret key can be obtained by local measurements has to be genuinely multipartite entangled. We introduce the most general form of memoryless network quantum channel: quantum multiplex channels. We define and determine asymptotic and non-asymptotic LOCC assisted conference key agreement capacities for quantum multiplex channels and provide various strong and weak converse bounds in terms of the divergence based entanglement measures of the quantum multiplex channels. The structure of our protocol manifested by an adaptive strategy of secret key and entanglement (GHZ state) distillation over an arbitrary multiplex quantum channel is generic. In particular, it provides a universal framework to study the performance of quantum key repeaters and - for the first time - of the MDI-QKD setups of channels. For teleportation-covariant multiplex quantum channels, which are channels with certain symmetries, we get upper bounds on the secret key agreement capacities in terms of the entanglement measures of their Choi states. For some network prototypes of practical relevance, we evaluate upper bounds on the conference key agreement capacities and MDI-QKD capacities. Upper bounds on the LOCC-assisted conference key agreement rates are also upper bounds on the distillation rates of GHZ states, a class of genuinely entangled pure states. We also obtain bounds on the rates at which conference key and GHZ states can be distilled from a finite number of copies of an arbitrary multipartite quantum state. Using our bounds, in particular cases, we are able to determine the capacities for quantum key distribution channels and rates of GHZ-state distillation.},
urldate = {2021-07-28},
journal = {arXiv:1912.03646 [quant-ph]},
author = {Das, Siddhartha and Bäuml, Stefan and Winczewski, Marek and Horodecki, Karol},
month = sep,
year = {2020},
note = {arXiv: 1912.03646},
keywords = {Quantum Physics, Computer Science - Information Theory},
}
• W. Song, M. Wieśniak, N. Liu, M. Pawłowski, J. Lee, J. Kim, and J. Bang, “Tangible Reduction of Sample Complexity with Large Classical Samples and Small Quantum System,” arXiv:1905.05751 [quant-ph], 2020.
Quantum computation requires large classical datasets to be embedded into quantum states in order to exploit quantum parallelism. However, this embedding requires considerable resources. It would therefore be desirable to avoid it, if possible, for noisy intermediate-scale quantum (NISQ) implementation. Accordingly, we consider a classical-quantum hybrid architecture, which allows large classical input data, with a relatively small-scale quantum system. This hybrid architecture is used to implement an oracle. It is shown that in the presence of noise in the hybrid oracle, the effects of internal noise can cancel each other out and thereby improve the query success rate. It is also shown that such an immunity of the hybrid oracle to noise directly and tangibly reduces the sample complexity in the probably-approximately-correct learning framework. This NISQ-compatible learning advantage is attributed to the oracle’s ability to handle large input features.
@article{song_tangible_2020,
title = {Tangible {Reduction} of {Sample} {Complexity} with {Large} {Classical} {Samples} and {Small} {Quantum} {System}},
url = {http://arxiv.org/abs/1905.05751},
abstract = {Quantum computation requires large classical datasets to be embedded into quantum states in order to exploit quantum parallelism. However, this embedding requires considerable resources. It would therefore be desirable to avoid it, if possible, for noisy intermediate-scale quantum (NISQ) implementation. Accordingly, we consider a classical-quantum hybrid architecture, which allows large classical input data, with a relatively small-scale quantum system. This hybrid architecture is used to implement an oracle. It is shown that in the presence of noise in the hybrid oracle, the effects of internal noise can cancel each other out and thereby improve the query success rate. It is also shown that such an immunity of the hybrid oracle to noise directly and tangibly reduces the sample complexity in the probably-approximately-correct learning framework. This NISQ-compatible learning advantage is attributed to the oracle's ability to handle large input features.},
urldate = {2021-07-28},
journal = {arXiv:1905.05751 [quant-ph]},
author = {Song, Wooyeong and Wieśniak, Marcin and Liu, Nana and Pawłowski, Marcin and Lee, Jinhyoung and Kim, Jaewan and Bang, Jeongho},
month = jun,
year = {2020},
note = {arXiv: 1905.05751},
keywords = {Quantum Physics},
}
2019
• M. Eckstein and P. Horodecki, “The experiment paradox in physics,” arXiv:1904.04117 [gr-qc, physics:hep-th, physics:physics, physics:quant-ph], 2019.
Modern physics is founded on two mainstays: mathematical modelling and empirical verification. These two assumptions are prerequisite for the objectivity of scientific discourse. Here we show, however, that they are contradictory, leading to the ‘experiment paradox’. We reveal that any experiment performed on a physical system is – by necessity – invasive and thus establishes inevitable limits to the accuracy of any mathematical model. We track its manifestations in both classical and quantum physics and show how it is overcome in ‘practice’ via the concept of environment. We argue that the scientific pragmatism ordains two methodological principles of compressibility and stability.
@article{eckstein_experiment_2019,
title = {The experiment paradox in physics},
url = {http://arxiv.org/abs/1904.04117},
abstract = {Modern physics is founded on two mainstays: mathematical modelling and empirical verification. These two assumptions are prerequisite for the objectivity of scientific discourse. Here we show, however, that they are contradictory, leading to the `experiment paradox'. We reveal that any experiment performed on a physical system is - by necessity - invasive and thus establishes inevitable limits to the accuracy of any mathematical model. We track its manifestations in both classical and quantum physics and show how it is overcome in `practice' via the concept of environment. We argue that the scientific pragmatism ordains two methodological principles of compressibility and stability.},
urldate = {2021-07-28},
journal = {arXiv:1904.04117 [gr-qc, physics:hep-th, physics:physics, physics:quant-ph]},
author = {Eckstein, Michał and Horodecki, Paweł},
month = apr,
year = {2019},
note = {arXiv: 1904.04117},
keywords = {Physics - History and Philosophy of Physics, General Relativity and Quantum Cosmology, High Energy Physics - Theory, Physics - Classical Physics, Quantum Physics},
}
https://mathalino.com/reviewer/mechanics-and-strength-of-materials/solution-to-problem-312-torsion | # 312 Deformation of Flexible Shaft Made From Steel Wire Encased in Stationary Tube
### Why is the upper limit 20pi?
### That is the total length of the shaft
That is the total length of the shaft L; the solution for $L = 20\pi ~\text{inches}$ is also shown above.
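For context, the limit in question is the shaft length over which the friction torque is integrated. A worked sketch follows, assuming the standard data for this problem (0.20-in wire diameter, frictional torque $t = 0.50$ lb·in per inch of length, allowable shear stress 20 ksi, $G = 12 \times 10^{6}$ psi); these values are assumptions recalled from the usual statement of the problem, not taken from the comments above:

```latex
% Shear stress limits the length: the torque at the fixed end is T = tL.
\[
  \tau_{\max}=\frac{16\,tL}{\pi d^{3}}
  \quad\Rightarrow\quad
  20{,}000=\frac{16(0.50)L}{\pi(0.20)^{3}}
  \quad\Rightarrow\quad
  L = 20\pi~\text{in}
\]
% The torque varies along the shaft, T(x) = t x, so the twist is an
% integral whose upper limit is the full length L = 20*pi inches:
\[
  \theta=\int_{0}^{20\pi}\frac{t\,x}{JG}\,dx=\frac{tL^{2}}{2JG}
\]
```

With the assumed data ($J = \pi d^{4}/32$), this evaluates to $\theta = \pi/6$ rad, about $30^{\circ}$.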
https://www.transtutors.com/questions/greenwood-healthcare-pty-ltd-greenwood-owns-a-medical-facility-in-brisbane-it-also-o-3951597.htm | # Greenwood Healthcare Pty Ltd (“Greenwood”) owns a medical facility in Brisbane. It also owns a...
Greenwood Healthcare Pty Ltd (“Greenwood”) owns a medical facility in Brisbane. It also owns a warehouse at Wacol where it stores all the materials necessary to run its facility. Greenwood wants to enter into a contract for the supply of products from HCP Pty Ltd (HCP), a corporation that specialises in the sale of chemicals suitable for use in hospitals and aged care facilities. In particular, Greenwood wants to purchase bulk supplies of hospital-grade disinfectant and glass cleaner. The agreement is for HCP to supply the following products on the first business day of each month:

1. D2 Disinfectant – 1,000 litres at $2,000
2. G5 Glass cleaner – 500 litres at $1,000

While the agreement is for a period of 1 year commencing on 1 January 2020, Greenwood does require the right to end the contract early if it sells its business. Greenwood requires delivery of the products to its Wacol warehouse at 7 Mica Street, which is staffed from 7am to 3pm, Monday to Friday. HCP staff must comply with the directions of warehouse staff. HCP requires payment within 7 days from the date Greenwood receives an invoice. If there is any error in an invoice, Greenwood is to notify HCP. Greenwood may withhold payment of the disputed portion of the invoice until the dispute is resolved. Greenwood requires an acknowledgement from the supplier that the products are safe and suitable for use in hospitals. Greenwood also requires the supplier to hold insurance to cover any defects in the goods and also public liability insurance. Insurance should be for not less than $20 million for each event. HCP has confirmed it has this insurance. Greenwood wants a written contract that incorporates these matters. Your boss will prepare a contract but you are asked to use the above information to draft some clauses to be included.

You are required to:

1. Draft a clause that would be considered an essential term (or condition) in the supply contract between Greenwood and HCP; (2 marks)
2. Draft a clause that would be considered an intermediate term in the supply contract between Greenwood and HCP; (2 marks)
3. Draft a clause that would be considered a warranty in the supply contract between Greenwood and HCP; (2 marks)
4. Draft a clause that would be considered a condition subsequent in the supply contract between Greenwood and HCP; (2 marks)
5. Provide Greenwood with a summary of the theory that underpins these types of clauses, including the differences between them and the reason why you drafted the clauses as you did. In your answer, you should provide a definition of an essential term, an intermediate term, a warranty and a condition subsequent. You should include reference to the important Australian case authorities in this area. Please do not refer to legislation in your response. (12 marks)
Attachments: | 2020-01-19 02:21:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20319025218486786, "perplexity": 3132.4686922774695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594101.10/warc/CC-MAIN-20200119010920-20200119034920-00498.warc.gz"} |
https://www.baeldung.com/cs/floating-point-numbers-inaccuracy | ## 1. Introduction
Computer memory has a finite capacity. Real numbers, however, do not in general have a finite uniform representation. For example, a rational number such as 1/3 has the infinite decimal expansion 0.333…, so it is impossible to write it down exactly with finitely many decimal digits. Since only a finite number of digits can be stored in computer memory, the reals have to be approximated in some fashion (rounded or truncated) when represented on a computer.
In this tutorial, we’ll go over the basic ideas of floating-point representation and learn the limits of floating-point accuracy, when doing practical numerical computing.
## 2. Rounding and Chopping
There are two distinguishable ways of rounding off a real number to a given number of decimals.
In chopping, we simply leave off all decimals to the right of the $t$-th digit. For example, 0.2471 chopped to 2 decimal places yields 0.24.

In rounding to nearest, we choose the number with $t$ decimals which is closest to $x$. For example, consider rounding 0.2471 to 2 decimal places. There are two possible candidates: 0.24 and 0.25. On the real number line, 0.2471 is at a distance of 0.0071 from 0.24, whilst it is at a distance of 0.0029 from 0.25. So, 0.25 is the nearest and we round to 0.25.

Intuitively, we are comparing the part of the number to the right of the $t$-th digit, which is 0.0071, with half a unit in the $t$-th place, 0.005. As 0.0071 > 0.005, we incremented the $t$-th digit by 1.

What if we wished to round 0.2471 off to 3 decimal places? Observe that the part beyond the third digit is 0.0001. As 0.0001 < 0.0005, we leave the $t$-th digit unchanged. So, the result is 0.247, which is indeed nearest to 0.2471.

In general, let $p$ be the part of the number to the right of the $t$-th digit (after the decimal point). If $p$ is greater than half a unit in the $t$-th place, we increment the $t$-th digit. If $p$ is smaller, we leave the $t$-th digit unchanged.

In the case of a tie, when $x$ is equidistant to two $t$-digit decimal numbers, we raise the $t$-th decimal if it is odd or leave it unchanged if it is even. In this way, the error in rounding off a decimal number is positive or negative equally often.

Let's see a couple more examples, rounding off to 2 decimal places, to make sure this sinks in: 1.2367 rounds to 1.24 but chops to 1.23, and the tie 0.555 rounds to 0.56 because the retained digit 5 is odd.
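The two rules are easy to experiment with in code. Below is a sketch using Python's standard decimal module; the helper names chop and round_nearest are ours, and ROUND_HALF_EVEN is precisely the tie-to-even rule described above:

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_EVEN

def chop(x, t):
    """Drop all digits beyond the t-th decimal place (truncation)."""
    q = Decimal(10) ** -t
    return Decimal(x).quantize(q, rounding=ROUND_DOWN)

def round_nearest(x, t):
    """Round to t decimal places; ties go to the even last digit."""
    q = Decimal(10) ** -t
    return Decimal(x).quantize(q, rounding=ROUND_HALF_EVEN)

print(chop("0.2471", 2))           # 0.24
print(round_nearest("0.2471", 2))  # 0.25
print(round_nearest("0.245", 2))   # 0.24 (tie, 4 is even)
print(round_nearest("0.235", 2))   # 0.24 (tie, 3 is odd, so raise)
```

Passing the inputs as strings keeps the examples in exact decimal, so no binary round-off sneaks in before the rule is applied.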
The difference between chopping and rounding has real-world implications! The Vancouver Stock Exchange index began trading in 1982 with a base value of 1000.000. Although the underlying stocks were performing decently, the index began hitting the low 500s at the end of 1983. A computer program re-calculated the value of the index thousands of times each day, and the program used chopping instead of rounding to the nearest. A correctly rounded calculation gave a value of roughly 1098.892.
## 3. Computer Number Systems
In our daily life, we represent numbers using the decimal number system with base 10. In the decimal system, we're aware that if a digit $d$ stands $k$ digits to the right of the decimal point, the value it contributes is $d \cdot 10^{-k}$. For example, the sequence of digits 0.243 means:

$0.243 = 2 \cdot 10^{-1} + 4 \cdot 10^{-2} + 3 \cdot 10^{-3}$

In fact, any integer $\beta \ge 2$ can be used as a base. Analogously, every real number $a$ has a unique representation of the form:

$a = \pm \left( d_n \beta^n + d_{n-1} \beta^{n-1} + \cdots + d_0 + d_{-1} \beta^{-1} + d_{-2} \beta^{-2} + \cdots \right)$

or compactly $\pm (d_n d_{n-1} \cdots d_0 . d_{-1} d_{-2} \cdots)_\beta$, where the coefficients $d_i$, the digits in the system with base $\beta$, are integers such that $0 \le d_i \le \beta - 1$.
### 3.1. Conversion Algorithm Between Two Number Systems
Consider the problem of conversion between two number systems with different bases. For the sake of concreteness, let's try to convert the decimal number 26 to the binary format. We may write:

$26 = d_4 \cdot 2^4 + d_3 \cdot 2^3 + d_2 \cdot 2^2 + d_1 \cdot 2 + d_0$

We can pull out a factor of 2 from each term, except the last, and equivalently write:

$26 = 2 \cdot (d_4 \cdot 2^3 + d_3 \cdot 2^2 + d_2 \cdot 2 + d_1) + d_0$

Intuitively, therefore, if we were to divide 26 by 2, the expression in brackets, let's call it $q_1$, is the quotient and $d_0$ is the remainder of the division. Similarly, division of $q_1$ by 2 would return the expression in the brackets as the quotient; call it $q_2$, and $d_1$ as the remainder.

In general, if $a$ is an integer given in base 10 and we want to determine its representation in a number system with base $\beta$, we perform successive divisions of $a$ with $\beta$: set $q_0 = a$ and

$q_k = \beta \cdot q_{k+1} + d_k, \quad k = 0, 1, 2, \ldots$

where $q_{k+1}$ is the quotient and $d_k$ (with $0 \le d_k \le \beta - 1$) is the remainder in the division.

Let's look at the result of applying the algorithm to 26: successive divisions by 2 give quotients 13, 6, 3, 1, 0 with remainders 0, 1, 0, 1, 1.

Therefore, $26 = (11010)_2$.
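The successive-division algorithm can be sketched in a few lines of Python (the function name to_base is ours):

```python
def to_base(a, beta):
    """Convert a nonnegative base-10 integer a to base beta by successive
    division; the remainders are the digits, least significant first."""
    if a == 0:
        return [0]
    digits = []
    while a > 0:
        a, d = divmod(a, beta)  # quotient feeds the next step, remainder is a digit
        digits.append(d)
    return digits[::-1]         # most significant digit first
```

For example, to_base(26, 2) returns [1, 1, 0, 1, 0], i.e. (11010) in base 2, and to_base(255, 16) returns [15, 15], i.e. FF in hexadecimal.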
### 3.2. Converting Fractions to Another Number System
If the real number $a$ is not an integer, we write it as $a = b + c$, where $b$ is the integer part and

$c = b_1 \beta^{-1} + b_2 \beta^{-2} + b_3 \beta^{-3} + \cdots$

is the fractional part, where the digits $b_1, b_2, \ldots$ are to be determined.

Observe that multiplying both sides by the base $\beta$ yields

$\beta c = b_1 + (b_2 \beta^{-1} + b_3 \beta^{-2} + \cdots),$

an integer and a fractional portion. The integer portion is precisely $b_1$ – the first digit of $c$ represented in base $\beta$. Consecutive digits are obtained as the integer parts when successively multiplying the remaining fraction by $\beta$.

In general, if a fraction $c$ must be converted to another number system with base $\beta$, we perform successive multiplications of $c$ with $\beta$: set $c_0 = c$ and

$\beta c_k = b_{k+1} + c_{k+1}, \quad k = 0, 1, 2, \ldots$

Let's look at an example of converting 0.625 to the binary number system: $0.625 \times 2 = 1.25$ gives digit 1, $0.25 \times 2 = 0.5$ gives digit 0, and $0.5 \times 2 = 1.0$ gives digit 1 with remainder 0.

Therefore, $0.625 = (0.101)_2$ in the binary system.
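The successive-multiplication algorithm can be sketched similarly (frac_to_base is our name; since a fraction that terminates in base 10 need not terminate in base 2, we cap the number of digits produced):

```python
def frac_to_base(c, beta, max_digits=20):
    """Convert a fraction 0 <= c < 1 to base beta by successive
    multiplication; the integer parts are the digits b1, b2, ..."""
    digits = []
    for _ in range(max_digits):
        c *= beta
        d = int(c)       # integer portion is the next digit
        digits.append(d)
        c -= d           # keep only the fractional portion
        if c == 0:
            break
    return digits
```

frac_to_base(0.625, 2) returns [1, 0, 1], matching the worked example, while frac_to_base(0.1, 2, 8) returns [0, 0, 0, 1, 1, 0, 0, 1] – the start of a non-terminating binary expansion, which foreshadows the rounding issues discussed below.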
### 3.3. Fixed and Floating-Point Representation
Computers are equipped to handle pieces of information of a fixed size called a word. Typical word lengths are 32-bits or 64-bits.
Early computers made numerical calculations in a fixed-point number system. That is, real numbers were represented using a fixed number of binary digits in the fractional part. If the word length of the computer is $n$ bits (including the sign bit), then only numbers in a bounded interval are permitted. This limitation is problematic since, for example, even when $a$ and $b$ both lie inside the interval, it is possible that a product such as $a \cdot b$ falls outside its bounds.
In a floating-point number system, a real number $x$ is represented as:

$x = \pm m \cdot \beta^e, \quad m = (0.d_1 d_2 d_3 \cdots)_\beta$

The fractional part $m$ of the number is called the mantissa or significand, $e$ is called the exponent, and $\beta$ is the base. It's clear that $d_1 \ne 0$, because if $d_1 = 0$, then we can always decrease the exponent by 1 and shift the decimal point one place to the right.

The mantissa and the exponent are limited by the fixed word length in computers. $m$ is rounded off to a number with $t$ digits, and the exponent lies in a certain range $e_{min} \le e \le e_{max}$.

Thus, we can only represent floating-point numbers of the form:

$fl(x) = \pm (0.d_1 d_2 \cdots d_t)_\beta \cdot \beta^e, \quad d_1 \ne 0, \quad e_{min} \le e \le e_{max}$

A floating-point number system is completely characterized by the base $\beta$, the precision $t$, and the numbers $e_{min}$ and $e_{max}$. Since $d_1 \ne 0$, the set $F$ of such numbers contains, including the number 0,

$2(\beta - 1)\beta^{t-1}(e_{max} - e_{min} + 1) + 1$

numbers. Intuitively, there are 2 choices for the sign. The digit $d_1$ can be chosen from the set $\{1, \ldots, \beta - 1\}$. Each of the $t - 1$ successive digits can be chosen from $\{0, 1, \ldots, \beta - 1\}$. The exponent can be chosen from $e_{max} - e_{min} + 1$ numbers. By the multiplication rule, there are $2(\beta - 1)\beta^{t-1}(e_{max} - e_{min} + 1)$ distinguishable numbers. Including the number 0 gives us the expression above.
### 3.4. Why Are Floating-Point Numbers Inaccurate?
Consider a toy floating-point number system with a small base, a short precision, and a narrow exponent range. Such a set $F$ contains only a handful of numbers, and plotting the positive ones on the real line makes the pattern visible.

It is apparent that not all real numbers are present in $F$. Moreover, not all floating-point numbers are equally spaced; the spacing jumps by a factor of $\beta$ at each power of $\beta$.

The spacing of the floating-point numbers is characterized by the machine epsilon, which is the distance from 1 to the next largest floating-point number.
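For IEEE double precision, both the machine epsilon and the spacing jump at a power of 2 can be observed directly by nudging bit patterns (a sketch; next_up is our helper and assumes a positive, finite input):

```python
import struct

def next_up(x):
    """Smallest double strictly greater than x (positive finite x assumed):
    reinterpret the 64-bit pattern as an integer and add 1."""
    bits = struct.unpack('<q', struct.pack('<d', x))[0]
    return struct.unpack('<d', struct.pack('<q', bits + 1))[0]

eps = next_up(1.0) - 1.0        # machine epsilon for binary64: 2**-52
gap_at_2 = next_up(2.0) - 2.0   # spacing doubles once we pass the power of 2
```

Here eps comes out to 2**-52 ≈ 2.22e-16, and gap_at_2 is exactly twice that, illustrating the factor-of-β jump.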
If a real number $x$ is in the range of the floating-point system, the obvious thing to do is to round $x$ off to $fl(x)$, the floating-point number in $F$ that is closest to $x$. It should already be clear that representing $x$ by $fl(x)$ introduces an error.

One interesting question is, how large is this error? It would be great if we could guarantee that the round-off error at no point exceeds a certain amount. Everybody loves a guarantee! In other words, we seek an upper bound for the relative error:

$\dfrac{|fl(x) - x|}{|x|}$

Recall, from the earlier discussion, that when rounding to $t$ decimals, we leave the $t$-th decimal unchanged if the part of the number to the right of the $t$-th decimal is smaller than half a unit in the last place, and otherwise raise the $t$-th decimal by 1. Here, we are working with the generic base $\beta$ instead of the decimal base 10. Consequently, the round-off error in the mantissa is bounded by:

$|m - \bar{m}| \le \tfrac{1}{2}\beta^{-t}$

The relative round-off error in $x$ is thus bounded by:

$\dfrac{|fl(x) - x|}{|x|} \le \dfrac{\tfrac{1}{2}\beta^{-t} \cdot \beta^e}{m \cdot \beta^e} \le \tfrac{1}{2}\beta^{1-t}$

since $m \ge \beta^{-1}$. Modern floating-point standards such as IEEE 754 guarantee this upper bound. It is called the rounding unit, denoted by $u = \tfrac{1}{2}\beta^{1-t}$.
## 4. IEEE Floating-point Standard in a Nutshell
Actual computer hardware implementations of floating-point systems differ from the toy system we just designed. Most current computers conform to the IEEE 754 standard for binary floating-point arithmetic. There are two main basic formats – single and double precision, requiring 32-bit and 64-bit storage.

In single precision, a floating-point number is stored as the sign (1 bit), the exponent (8 bits), and the mantissa (23 bits).

In double precision, 11 bits are allocated to the exponent, whereas 52 bits are allocated to the mantissa. The exponent is stored in biased form, as $e + 1023$.

The value of the floating-point number in the normal case is:

$x = (-1)^s \cdot (1.f)_2 \cdot 2^{e}$

Note that the digit before the binary point is always 1, similar to the scientific notation we studied in high school. Since that leading 1 need not be stored, one bit is gained for the mantissa.
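The three fields of a double can be pulled apart with a few bit operations; a sketch (the helper name fields is ours):

```python
import struct

def fields(x):
    """Split a double into (sign, biased exponent, 52 mantissa bits)."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF      # 11 bits, stored with a bias of 1023
    mantissa = bits & ((1 << 52) - 1)    # fraction bits; the leading 1 is implicit
    return sign, exponent, mantissa
```

For example, fields(1.0) gives (0, 1023, 0): a positive sign, an unbiased exponent of 0, and an all-zero fraction, since the implicit leading 1 already represents the value.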
### 4.1. Quirks in Floating-Point Arithmetic
Consider the following comparison:
(0.10 + 0.20) == 0.30
The result of this logical comparison is false. This is not because floating-point arithmetic is broken; the behavior follows directly from rounding. Let's take a deeper look at what's going on.
Let's put 0.1 in the double-precision format. Because 0.1 is positive, the sign bit is $s = 0$.

In base-2 scientific notation, we must factor 0.1 into a number in the range $[1, 2)$ and a power of 2. If we divide 0.1 by successive negative powers of 2, we get 0.2, 0.4, 0.8, 1.6.

Therefore, $0.1 = 1.6 \times 2^{-4}$. The exponent is stored as $-4 + 1023 = 1019$, so the bit pattern in the exponent part is 01111111011.

The mantissa is the fractional part 0.6 in binary form. Successive multiplication by 2 quickly yields the repeating pattern $0.1001\,1001\,1001\ldots$. However, the double-precision format allocates 52 bits to the mantissa, so we must round off to 52 digits. The part after the 52nd digit exceeds half a unit in the last place, so we raise the 52nd digit by 1, and the rounded mantissa ends in $\ldots 1010$.

Finally, we put the binary strings in the correct order. So, 0.1 in the IEEE double-precision format is:

$0 \;|\; 01111111011 \;|\; 1001100110011001100110011001100110011001100110011010$

This machine number is approximately $0.100000000000000006$ in base 10. In a similar fashion, the closest machine number to 0.2 in the floating-point system is approximately $0.200000000000000011$. Therefore, the rounded sum $fl(0.1) + fl(0.2)$ comes out to $0.30000000000000004\ldots$. On the right-hand side, the closest machine number to 0.3 is approximately $0.299999999999999989$. So, $fl(0.1) + fl(0.2) \ne fl(0.3)$.
To summarize, algebraically equivalent statements are not necessarily numerically equivalent.
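The whole subsection condenses into a few lines of Python; math.isclose is the standard remedy when a tolerance-based comparison is what you actually want:

```python
import math

s = 0.10 + 0.20
assert s != 0.30                         # the exact comparison fails
assert repr(s) == '0.30000000000000004'  # the rounded sum overshoots 0.3
assert math.isclose(s, 0.30)             # a tolerance-based comparison succeeds
```

The lesson for programmers: compare floats with a tolerance (relative, absolute, or both), not with ==.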
## 5. Conclusion
In this article, we've learned how the rules of chopping and rounding to the nearest work. We developed an algorithm for conversion between number systems. We also learnt that any floating-point system is characterized by a quadruple $(\beta, t, e_{min}, e_{max})$. For example, IEEE double precision uses base $\beta = 2$ with an effective 53-bit significand (52 stored bits plus the implicit leading 1). Because computers have finite memory capacity, the set of machine numbers is only a subset of the real numbers. The spacing between machine numbers is, in technical speak, characterized by the machine epsilon.
Further, any real number is always rounded off to the nearest machine number on a computer. The IEEE standards guarantee that the relative roundoff error is no more than a certain amount. Moreover, as programmers, we need to take proper care when doing floating-point operations. | 2023-03-21 15:22:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9008458852767944, "perplexity": 547.2738868310408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943698.79/warc/CC-MAIN-20230321131205-20230321161205-00363.warc.gz"} |
https://mathhelpboards.com/threads/rayanjafars-parametric-integral-question-for-y-answers.2787/ | # Rayanjafar's parametric integral question for Y!Answers
#### CaptainBlack
##### Well-known member
the curve C has parametric equations x = sint , y = sin2t, 0<t<pi/2
a) find the area of the region bounded by C and the x-axis
and, if this region is revolved through 2pi radians about the x-axis,
b) find the volume of the solid formed
How do you do this question. Can anyone please show me step by step???"
C4 here denotes a question appropriate to the UK Core 4 A-Level Maths Exam
Last edited:
#### CaptainBlack
##### Well-known member
(a) First sketch the curve. It obviously starts with slope $$2$$ at $$(0,0)$$, rises to a maximum of $$y=1$$ at $$x=1/\sqrt{2}$$, and then falls to $$y=0$$ at $$x=1$$.
The area we want is the integral:
$I = \int_{x=0}^1 y(x) dx$
Use the substitution $$t=\arcsin(x), \; x=\sin(t)$$. Then $$dx = \cos(t)\, dt$$, and the integral becomes:

$I = \int_{t=0}^{\pi/2} \sin(2t) \cos(t)\, dt$

Now we replace the $$\sin(2t)$$ using the double angle formula by $$2 \sin(t) \cos(t)$$ to get:

$I = \int_{t=0}^{\pi/2} 2\sin(t) (\cos(t))^2\, dt$

As the integrand is the derivative of $$-(2/3) (\cos(t))^3$$ we get:
$I = -(2/3) [0-1] = 2/3$.
The second part proceeds in much the same way once we write down the volume of revolution:
$V= \int_{x=0}^1 \pi (y(x))^2 dx$
and proceed in much the same way as before
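As a sanity check, both integrals can be approximated numerically; a midpoint-rule sketch (the helper name midpoint is ours, and the value for part (b) is computed numerically only, without claiming a closed form):

```python
import math

def midpoint(f, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Part (a): area = integral of sin(2t) cos(t) dt for t in [0, pi/2]
area = midpoint(lambda t: math.sin(2 * t) * math.cos(t), 0.0, math.pi / 2)

# Part (b): V = pi * integral of sin(2t)^2 cos(t) dt over the same range
vol = math.pi * midpoint(lambda t: math.sin(2 * t) ** 2 * math.cos(t),
                         0.0, math.pi / 2)
```

The numeric area agrees with the symbolic answer 2/3 to many decimal places, which is reassuring before attempting the volume integral by hand.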
CB
Last edited: | 2021-06-17 18:10:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9810611009597778, "perplexity": 613.402414703169}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630518.38/warc/CC-MAIN-20210617162149-20210617192149-00342.warc.gz"} |
https://labs.tib.eu/arxiv/?author=P.%20McBride | • A joint measurement is presented of the branching fractions $B^0_s\to\mu^+\mu^-$ and $B^0\to\mu^+\mu^-$ in proton-proton collisions at the LHC by the CMS and LHCb experiments. The data samples were collected in 2011 at a centre-of-mass energy of 7 TeV, and in 2012 at 8 TeV. The combined analysis produces the first observation of the $B^0_s\to\mu^+\mu^-$ decay, with a statistical significance exceeding six standard deviations, and the best measurement of its branching fraction so far. Furthermore, evidence for the $B^0\to\mu^+\mu^-$ decay is obtained with a statistical significance of three standard deviations. The branching fraction measurements are statistically compatible with SM predictions and impose stringent constraints on several theories beyond the SM.
• ### Planning the Future of U.S. Particle Physics (Snowmass 2013): Chapter 1: Summary(1401.6075)
Jan. 23, 2014 hep-th, hep-ph, hep-ex, hep-lat, astro-ph.CO
These reports present the results of the 2013 Community Summer Study of the APS Division of Particles and Fields ("Snowmass 2013") on the future program of particle physics in the U.S. Chapter 1 contains the Executive Summary and the summaries of the reports of the nine working groups. | 2020-08-10 11:02:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.707524299621582, "perplexity": 1111.1347035150698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738674.42/warc/CC-MAIN-20200810102345-20200810132345-00467.warc.gz"} |
https://homework.cpm.org/category/CCI_CT/textbook/calc/chapter/2/lesson/2.2.2/problem/2-70 | ### Home > CALC > Chapter 2 > Lesson 2.2.2 > Problem2-70
2-70.
Alter your sigma notation from problem 2-69 to estimate the area with $16$ rectangles and use it to approximate the area. Were the results the same?
$n$ represents the number of rectangles.
$\Delta x$ represents the width of each rectangle.
$a$ represents the starting value on the $x$-axis.
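A sketch of how such a sum is typically evaluated (a left-endpoint rule is assumed here; the sigma notation from problem 2-69 may use a different sample point):

```python
def riemann_sum(f, a, b, n):
    """Approximate the area under f on [a, b] with n rectangles of
    width dx = (b - a) / n, sampling f at left endpoints."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))
```

Calling this with n = 8 and then n = 16 for the same f, a, b shows how refining the partition changes the estimate, which is the comparison the problem asks about.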
https://brilliant.org/problems/liquid-friction/ | # Liquid Friction!
A thin circular disk of radius $R$ rotates in a cylindrical chamber, which is filled with water (coefficient of viscosity $\eta$). It is rotated with constant angular velocity $\omega$.
Find the power developed (in watts) by the viscous forces, if the distance between the top and bottom walls and the disk is $h$.
Details and Assumptions:
$\bullet$ Neglect end effects.
$\bullet$ $\eta = 8.9\times 10^{-4}$
$\bullet$ $R = 2m$
$\bullet$ $h = 1mm$
$\bullet$ $\omega = 2~\text{rad/s}$
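One standard way to attack this (a sketch, not necessarily the intended solution): assume a linear velocity profile across each gap of width $h$, integrate the viscous torque over both faces of the disk, and multiply by $\omega$, which gives $P = \pi \eta \omega^2 R^4 / h$:

```python
import math

eta, R, h, omega = 8.9e-4, 2.0, 1e-3, 2.0   # SI values from the problem

# Shear stress on each face at radius r: eta * omega * r / h (linear profile).
# Torque from both faces: 2 * integral_0^R (eta*omega*r/h) * r * (2*pi*r) dr
torque = 2 * (2 * math.pi * eta * omega / h) * R ** 4 / 4

power = torque * omega   # equals pi * eta * omega**2 * R**4 / h, roughly 1.8e2 W
```

With the given values this comes out near 179 W; end effects at the rim and on the edge of the disk are neglected, as the problem instructs.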
× | 2021-03-08 00:56:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 13, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9831477403640747, "perplexity": 1490.8490925215349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381230.99/warc/CC-MAIN-20210307231028-20210308021028-00109.warc.gz"} |
https://nilsfran.co/blog/ | # Blog
## Launching CouchDB
Created a new EC2 instance on Linux 2018.
Install dependencies
yum install gcc gcc-c++ libtool curl-devel ruby-rdoc zlib-devel openssl-devel make automake rubygems perl git-core
Enable EPEL repository
sudo yum-config-manager --enable epel
Build SpiderMonkey JS Engine
wget http://ftp.mozilla.org/pub/mozilla.org/js/js185-1.0.0.tar.gz
tar xvfz js185-1.0.0.tar.gz
cd js-1.8.5/js/src
./configure
make
sudo make install
The challenge is
Once you have installed all of the dependencies, you should download a copy of the CouchDB source. This should give you an archive that you’ll need to unpack. Open up a terminal and change directory to your newly unpacked archive.
Configure the source by running:
./configure
But I don’t have a good way yet to download the CouchDB source. I believe I will need to use curl url-to-couchdb-source.bin --output usr/local/couchdb-bins.bin
RedHat 8: Place the following text into /etc/yum.repos.d/bintray-apache-couchdb-rpm.repo:
[bintray--apache-couchdb-rpm]
name=bintray--apache-couchdb-rpm
baseurl=http://apache.bintray.com/couchdb-rpm/el8/$basearch/
gpgcheck=0
repo_gpgcheck=0
enabled=1

^ I used vi ....filename above and found that it wouldn't let me write - try again as root? Update: sort of fixed by using nano and the CentOS option.

Now the next step sudo yum -y install couchdb gives:

Error: Package: couchdb-3.1.1-1.el8.x86_64 (bintray--apache-couchdb-rpm) Requires: libmozjs-60.so.0(js)(64bit)
Error: Package: couchdb-3.1.1-1.el8.x86_64 (bintray--apache-couchdb-rpm) Requires: libmozjs-60.so.0()(64bit)
Error: Package: couchdb-3.1.1-1.el8.x86_64 (bintray--apache-couchdb-rpm) Requires: systemd
Error: Package: couchdb-3.1.1-1.el8.x86_64 (bintray--apache-couchdb-rpm) Requires: libcrypto.so.1.1()(64bit)
Error: Package: couchdb-3.1.1-1.el8.x86_64 (bintray--apache-couchdb-rpm) Requires: libtinfo.so.6()(64bit)
Error: Package: couchdb-3.1.1-1.el8.x86_64 (bintray--apache-couchdb-rpm) Requires: libicudata.so.60()(64bit)
Error: Package: couchdb-3.1.1-1.el8.x86_64 (bintray--apache-couchdb-rpm) Requires: libicuuc.so.60()(64bit)
Error: Package: couchdb-3.1.1-1.el8.x86_64 (bintray--apache-couchdb-rpm) Requires: libicui18n.so.60()(64bit)
Error: Package: couchdb-3.1.1-1.el8.x86_64 (bintray--apache-couchdb-rpm) Requires: libcrypto.so.1.1(OPENSSL_1_1_0)(64bit)
Error: Package: couchdb-3.1.1-1.el8.x86_64 (bintray--apache-couchdb-rpm) Requires: libcrypto.so.1.1(OPENSSL_1_1_1)(64bit)
Error: Package: couchdb-3.1.1-1.el8.x86_64 (bintray--apache-couchdb-rpm) Requires: mozjs60

Now, did:

sudo yum --enablerepo=epel update
sudo yum groupinstall "Development Tools"
sudo curl -O https://ftp.mozilla.org/pub/js/js185-1.0.0.tar.gz
tar -xvf js185-1.0.0.tar.gz
cd js-1.8.5/js/src/
./configure
make
sudo make install
sudo yum install libicu-devel ncurses-devel openssl-devel
mkdir couch-compile && cd couch-compile

This is from , but sudo curl -O https://ftp.mozilla.org/pub/js/js185-1.0.0.tar.gz had to be prepended by sudo.

## Going through the motions

This is a blog.
And I committed to publishing one of these things a day.

## Personal Statement Prep

Academic Statement: How will a Masters degree help you meet your career and educational objectives? (max ~250 words)

I anticipate using a Masters primarily to prepare to lead organizations that use novel data and methods about complex systems to improve the effectiveness of public- and private-sector interventions aimed at U.S. social and health problems. This may include future work in academia. To me, creative financial instruments, long-standard technologies, and existing streams of data are underused in planning such interventions. The organizations that lead them often are incapable of leveraging such resources — and often view them as distractions or nice-to-haves. Finally, such resources are underused in financial decisionmaking, systematically excluding financially viable interventions from investment by private investors.

Personal Statement: How have your personal background and life experiences, including social, cultural, familial, educational, or other opportunities or challenges, motivated your decision to pursue a graduate degree? (max ~250 words)

I have a complex background and identity, but one that consistently reminds me of the urgency of resourceful action by public institutions and the civil sector. I was raised mostly by my brother, 3 years older than me, while my mother worked tirelessly to establish a new California State University and my father conducted environmental research in Mexico. With my brother, I grew up in a military-base-turned-university-campus where education loomed large; relatedly, in both our boredom and our interactions with our parents we fostered bold curiosity and a methodic and cheery approach to conducting meaningful projects. Time spent in Mexico showed me how institutional failures could harm everyone, by way of drawing stark contrasts to California.
Simultaneously, I returned to America to reckon with the inequities in my community: how my school had no art program while the school three towns over funded new stone arches for its large theatre.

## Planning a Hectic Quarter

Priority Activities

* Project Report
* Seeking funding
* Creating pool of prospective TA sites
* Developing automated TA products
* Creating reproducible research
* Publication

Approach

I will feel great momentum if I get a draft of the project report together, using a clear structure and leaving placeholders where I am not best suited for the writing task. I should keep the data dictionary open while integrating the code segments from across the Shiny app, Rmd, and R scripts. I should continue with my approach, creating an R script that outputs pngs and csvs (add), and then pulling these and outputting them in the Rmd. The Rmd could present a lot more info by using tables in lieu of some vizzes.

Week of 10/5 Activities

* Project Report Draft
* Data dictionary
* Combining the maps, histograms, and other graphs into a single document
* List of leads for (1) government interest (2) academics who could use this (3) prospective funders (4) prospective TA sites
* Identify waitlist data options – note that waitlists may be capped or diverted
* % Elders on SNAP; % Elders receiving meals on site (day centers, MOW)

Projects Beyond Eldercare Data

* Water – ECHO and SDWIS; Census ACS to most granular level – all in R.
* SDOH – Write up common forms of bias seen in social-intervention literature that monetizes and evaluates health effects of social interventions. – before Thursday and after RWJ?

## Mindful and Reflexive Compassion

Today, I'm thinking about how many people need/expect their friends to reflexively express emotional support when they experience hardships. How often and cheaply our friends will lob insults at the perceived wrongdoers (never present) in a story we recount.
How ready and fake the ego-reinforcing mantras are, when reflexively served up after we concede our shortcomings or failures to win external validation. I could never do that: I was the silent one off to the side, wishing we could talk about action steps, about structuring a response for our despondent friend to effect, or about talking about what our emotions really are telling us – surely we feel something more nuanced when we break it down to its nuts and bolts.

Kassie and I talked about the lost opportunity in my approach, which is also so common among my friends. I viewed it as cheap, as fake, and as degrading of the person we are trying to cheer up. We are treating them like a one-dimensional object, a leaf being blown around in a wind of emotion. We offer them nothing substantive to reflect on or learn from. We neuter their drive to improve themselves and exert agency over the situation.

But that is not the only story, and many people do require that reflexive compassion to reestablish their emotional footing. It's a matter of communication style, and I would frankly be doing my friends wrong to impose my style – seeking to dig in during the rawest feeling of emotion – onto other people. Understanding their style and meeting them with what they need to heal – that is compassion, and it is attainable. Creating a habit of supporting certain friends in this way – with my own flavor of reflexive compassion that is meant to be genuine – can coexist with a mindset that looks for improvement opportunities, for agency, and for exploring the emotions granularly and honestly.

Two solutions:

* Ask my friend, "hey, I see things are not going to plan, and you're having a hard time but also preparing yourself to deal with it." This leads us to meet the style that's right in the moment, and to have an open conversation about emotions (centering on them).
• Consider “wise compassion,” the sort of helpful scrutiny most people expect from a therapist, in appropriate dose and at appropriate times. Offer “reflexive compassion” as a rule, and “wise compassion” as an act of lovingkindness for my friends.

## The Business Candidates

Notoriously, the race for at-large city council member in DC is 24 candidates deep. The seat is open because of the retirement of David Grosso, an incumbent I’ve respected since 2015 for his work on juvenile justice reform. Two business-friendly candidates running for the seat are worth analyzing.

An established former at-large member, Vincent Orange, is in the mix after losing his seat in 2016 and then stepping away from the Council before his term ended, to lead the DC Chamber of Commerce. That resignation process was telling: he resisted calls to leave the seat, which would have prevented any conflicts of interest. The Chamber is, after all, the primary advocacy and lobbying heavyweight representing the big tent of the business community, and thus a major lobbying force jockeying for influence in the Council. In resisting the calls to resign and avoid overlapping job obligations, he tried to justify his situation by comparing it with (or more generously, “comparing it against”) the tragically low standard set by Jack Evans’ conduct at that time. Evans, then Ward 2’s councilmember, has since been ousted. He is both a snollygoster and a two-time Wikipedia vandal.

What could be more of a conflict-of-interest red flag than Orange’s past intention to simultaneously steer the Council and lobby it on behalf of the broad business community?

What Marcus Goodwin might do as a developer-turned-Councilmember: actively engage in business activity that has a narrower, more concentrated policy interest than the Chamber’s – a special interest premised on excluding large swaths of DC’s residents and homeowners from the Council’s priorities.
Goodwin, who is a real-estate developer and second-time candidate for at-large Councilmember, could be a new guard in the Democratic establishment of DC: his Neighborhood Development Company can immediately point to its Benning Market development, at 3451 Benning Road NE, for progressive credibility. That “food hall” concept would supposedly provide a venue in Ward 7 for a community market and black-owned businesses, at least for now. This dream has yet to come to fruition; its key tenant has an out-of-date website (from December 2019, as of October 2020), suggesting little forward momentum several years from its founding, and a year since NDC’s crowdsourced quarter-million-dollar funding round. Other NDC projects are more run-of-the-mill: a lot of gentrifying projects and a sideshow of affordable housing, which can be targeted toward people with fairly high incomes because eligibility is defined relative to DC’s high median income.

Do we overlook our concerns about Goodwin’s development-industry policy interests because his company sponsored one as-yet-unfulfilled project that it asserts will elevate community interests in Ward 7? Can we even trust that his company, the landlords, will keep black-owned businesses at center stage in the property’s future, if Ward 7 continues to gentrify? Or will its joint food hall–grocery store design be home to the next Whole Foods by decade’s end?

Development ties imply several priorities that are out of step with many DC residents’ own. Developers want the opportunity to raise rents for businesses and homes. This is fastest aided by deprioritizing low-income residents’ needs and pushing low-income residents out of their current neighborhoods. Developers take this approach to create large swaths of the city that are highly appealing to high-income residents.
(High-income residents in turn can sign high-end, high-profit leases and can purchase bottomless volumes of high-profit items from nearby businesses, increasing the businesses’ rent potential.) Many pressing policy areas — like criminal justice, tenants’ rights, education, and behavioral health — are instrumental in pushing out or keeping out lower-income residents (including those trying to make an honest living) just as the city reshapes itself to appeal to prospective tenants in the upper classes. Appealing to higher-income residents can involve policies that improve services and rights for all — but this is slower.

If Goodwin takes the at-large seat, the broad approach he takes to reshaping DC via policy makes all the difference. At best, he would expand the pie and shepherd corporate interests to generate opportunities for lower-income DC residents. Or he could shepherd lower-income DC residents out of town to generate opportunities for corporate interests. DC’s progress in offering economic opportunity and household stability to all residents is threatened if we replace the inclusivity-oriented ethic of David Grosso with someone who will not be a fierce negotiator on behalf of the district when facing business interests who want a place in our city’s economic activity.

Those concerns about how his development background would shape his political priorities should be weighted heavily because he is so young, and his career so promising: voters mistaking his intentions in this election could accidentally invite development-friendly policy angles — and perhaps related graft — for decades to come. He is 30, and last year was elected president of DC’s club for young Democrats.
Goodwin’s statement in the Post focused on his experience: that “the work that I’ve done in commercial development gives me the knowledge and savvy to know how public-private partnerships should be structured and how to preserve housing for residents that is truly affordable.” This is a fair point, for those looking for a community-oriented business voice. Goodwin is a DC high school graduate, with a pedigree from St. Albans, UPenn, and Harvard. He is deep into the financial and government-facing part of NDC’s work. He previously worked at Four Points and JBG Smith, according to his NDC profile.

On one hand, his election would reshape which groups of DC residents he will champion, with concerning likelihood that the poor would lose. On the other hand, his election would welcome new frontiers for graft and cronyism. His Neighborhood Development Company – or whatever firm he joins in later decades – would be intensely embedded in DC, coupling his political power with his development skills. If Goodwin takes political office, development companies offering a better vision for DC land use might get cut out of a fair process – or might avoid competing with his firms altogether in key areas, to avoid his disfavor.

For that reason, keeping Goodwin on the bench seems a good idea, at least until he reveals more about his priorities and shows real commitment to equity, or until he commits to disentangling from his own industry to take on the $130,000/year salaried work of DC Councilmember. If he’s committed to equitable development for the right reasons, he will have much to do in the next decade anyway in our city. If he’s committed to bringing his business experience to negotiate and legislate on behalf of DC’s citizenry, I hope he would return to take a third shot at a Council seat in future elections.
I’m moving with way too little structure around learning about post-acute care. I have data that I want to analyze about patients’ post-acute care spending, and about conditions of the post-acute care users in Medicaid in each year.
Post-acute care has seen some efforts to make the payments setting-neutral. There are also alternative payment models that are bundling pay for post-acute care.
## Understanding the options on DiD
With my Part D work, I am trying to protect our ability to do a strong causal inference study, because I am worried about whether the structure of our data gives us the power to support such a study.
We are trying to estimate the effect that pharmaceutical access expansions have on long-term care use.
We assume that for the treatment population ages 65-69, the population ages 60-64 offers a good counterfactual for change in long-term care use. The population 65-69 has a shock, exogenous to long-term care use trends, that causes a portion of its uninsured population to switch to an insured population, and this uninsured-to-insured population is a good representation of [##ask-control or treated##] population of interest.
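That 2x2 setup can be sketched with simulated data. Everything below is hypothetical: the baseline LTC rate, the common trend, and the Part D effect size are invented for illustration, not estimates from our data.

```python
# Hedged sketch of the 2x2 difference-in-differences setup described above.
# All numbers (baseline LTC use, common trend, Part D effect) are invented.
import random

random.seed(0)

def mean_ltc_use(treated, post, n=10_000):
    """Simulated mean long-term care use for one group-period cell."""
    base = 0.20                       # baseline LTC use rate (invented)
    trend = 0.03 if post else 0.00    # common time trend (parallel trends)
    effect = -0.05 if (treated and post) else 0.00  # hits treated group post only
    p = base + trend + effect
    return sum(random.random() < p for _ in range(n)) / n

y_c0 = mean_ltc_use(treated=False, post=False)  # ages 60-64, pre-Part D
y_c1 = mean_ltc_use(treated=False, post=True)   # ages 60-64, post-Part D
y_t0 = mean_ltc_use(treated=True,  post=False)  # ages 65-69, pre-Part D
y_t1 = mean_ltc_use(treated=True,  post=True)   # ages 65-69, post-Part D

# DiD: the treated group's change minus the control group's change,
# which nets out the common trend under the parallel-trends assumption
did = (y_t1 - y_t0) - (y_c1 - y_c0)
print(round(did, 3))
```

Because the simulated effect only ever enters the treated-post cell, the estimator recovers it up to sampling noise; with real data, the whole exercise rests on the 60-64 group being a valid counterfactual.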
The policy question that Aparna is looking to address is
The empirical intention is to “estimate the impact of prescription drug insurance on elderly individuals’ utilization of formal and informal long-term care” – but impact for whom? The uninsured, it is assumed.
The second empirical intention is to examine how changes in long-term care use affect informal caregivers, but it is written as “how changes in LTC use affected labor market and mental health outcomes of informal caregivers.” I’ll need clarification here.
Furthermore, we will do “heterogeneity tests”
The treated group is all Medicare-eligibles, because they had a shift in drug access caused by Part D. But grouping the three major treated categories (uninsured->uninsured, uninsured->insured, insured->insured) together will dilute our ability to test our hypothesis about the effect of increased access upon LTC.
The whole endeavor seems ripe for SEM.
But I am also coming around to using
Future topics to cover:
– heterogeneity
– power analyses
– ATET or ATC estimand – or can we develop a weighted
When you selected the IV method in your 2018 proposal, did you choose not to do propensity score matching/weighting or synthetic control methods for any particular reason? Do you view the IV method as equivalent to using propensity score weights, or as fundamentally different? (I’m thinking that using uninsured-hat changes the estimand from average treatment effect on the treated to ATE on the treated & uninsured.) Are we generally flexible about the estimand? i.e., do we want to estimate average treatment effect of Part D on the treated (which is an overlap group – it includes pre-treatment uninsured and insured), and/or on the treated & pre-treatment uninsured?
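To keep the estimand question concrete for myself, here is a sketch of the simplest IV logic (a Wald estimator with a binary instrument). The variable names, take-up rates, and effect sizes are all invented; this is not our specification, just the mechanism.

```python
# Hedged sketch: Wald/IV estimator with a binary instrument.
# Z = Part D eligibility (instrument), D = drug insurance (endogenous),
# Y = long-term care use. All effect sizes are invented for illustration.
import random

random.seed(1)
n = 50_000
data = []
for _ in range(n):
    u = random.random()                  # unobserved confounder
    z = random.random() < 0.5            # instrument: eligibility
    # insurance take-up depends on the instrument AND the confounder
    d = random.random() < (0.2 + 0.4 * z + 0.2 * u)
    # LTC use: true causal effect of insurance is -0.10; confounder raises use
    y = random.random() < (0.30 - 0.10 * d + 0.15 * u)
    data.append((z, d, y))

def mean(vals):
    return sum(vals) / len(vals)

ey_z1 = mean([y for z, d, y in data if z])       # reduced form, eligible
ey_z0 = mean([y for z, d, y in data if not z])   # reduced form, ineligible
ed_z1 = mean([d for z, d, y in data if z])       # first stage, eligible
ed_z0 = mean([d for z, d, y in data if not z])   # first stage, ineligible

# Wald estimator: reduced form / first stage. This identifies a LATE for
# the eligibility-induced switchers ("compliers"), not an ATE for everyone --
# which is exactly the estimand concern raised above.
late = (ey_z1 - ey_z0) / (ed_z1 - ed_z0)
print(round(late, 3))
```

The naive treated-vs-untreated comparison here would be confounded by `u`; the instrument recovers the effect only for the uninsured-to-insured switchers, which is why I think the IV changes the estimand rather than merely the estimator.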
If we’re flexible, perhaps we could try to use our study to extend the estimated effect over to today’s elderly population, weighting our study’s estimated ATE based on the demographics & insurance characteristics in today’s post-Part D Medicare-eligibles.
I continue to study ways we can strengthen our causal inference, which we would need to settle prior to specifying a power analysis.

I am concerned about the applicability of an IV method using demographics because I think, in theory, that demographics were relevant to changes in LTC use during the studied period. I am worried about the pooling of effects of the Part D treatment across multiple groups. I.e., they would not satisfy exogeneity to the DV except for the ways they relate to Rx insurance. There may be a set of demographic variables that we could carefully select as instruments that in theory relate only to Rx insurance, and we could test their exogeneity in the data. I also am studying if we could generate an estimand of ATE for
## Estimating Power Analyses for Diff-Diff
but first, I want a fresh understanding of the alternatives to Diff-Diff designs.
## Synthetic Control Method
Synthetic control method (SCM) matches according to the Y variable in pre-intervention periods, as a time series. Untreated comparison cases are identified according to similarity to the treated case during the period (can be multiple but typically one or few case[s]).
– Parallel trends assumption is dubious
– Assume unobservable confounders influence the Y variable and desire to get the most accurate (how?) estimate of the treatment effect $\alpha = Y^{treated}_{i=1,t=1} - Y^{untreated}_{i=1,t=1}$
– Economists with stronger design backgrounds tend to pool multiple treated cases – notably, they have also had multiple treatments, multiple cases. The inventors of SCM are usually
Kreif, Noémi, Richard Grieve, Dominik Hangartner, Alex James Turner, Silviya Nikolova, and Matt Sutton. “Examination of the Synthetic Control Method for Evaluating Health Policies with Multiple Treated Units.” Health Economics 25, no. 12 (2016): 1514–28. https://doi.org/10.1002/hec.3258.
“This paper extends the limited extant literature on the synthetic control method for multiple treated units. A working paper by Acemoglu et al. (2013) uses the synthetic control method to construct the treatment‐free potential outcome for each multiple treated unit and is similar to the approach we take in the sensitivity analysis, but weights the estimated unit‐level treatment effects according to the closeness of the synthetic control. Their inferential procedure is similar to the one developed here, in that they re‐sample placebo‐treated units from the control pool. Dube and Zipperer (2013) pool multiple estimates of treatment effects to generalise inference for a setting with multiple treated units and policies. Xu (2015) propose a generalisation for the synthetic control approach, for multiple treated units with a factor model that predicts counterfactual outcomes. Our approach is most closely related to the suggestion initially made by Abadie et al. (2010), to aggregate multiple treated units into a single treated unit. In preliminary simulation studies, we find the method reports relatively low levels of bias in similar settings to the AQ study.”
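The weight-selection idea behind SCM can be sketched numerically. The series below, the two-control setup, and the grid search are all invented toy choices; real applications solve a constrained optimization over many controls and covariates.

```python
# Hedged sketch: synthetic control weights chosen to match the treated unit's
# pre-period outcome series, via a grid search over convex weights on two
# control units. Data and setup are invented for illustration only.

# Outcome series: t = 1..4 are pre-intervention, t = 5 is post-intervention
treated   = [2.0, 2.5, 3.0, 3.5, 5.0]   # jumps up after treatment at t = 5
control_a = [1.0, 1.5, 2.0, 2.5, 3.0]
control_b = [3.0, 3.5, 4.0, 4.5, 5.0]

def pre_period_mse(w):
    """Mean squared pre-period gap for weight w on control_a, 1-w on control_b."""
    synth = [w * a + (1 - w) * b for a, b in zip(control_a, control_b)]
    return sum((y - s) ** 2 for y, s in zip(treated[:4], synth[:4])) / 4

# Grid search over convex weights (real SCM uses constrained optimization)
best_w = min((i / 100 for i in range(101)), key=pre_period_mse)

# Treatment effect alpha: post-period gap between treated and synthetic control
synth_post = best_w * control_a[4] + (1 - best_w) * control_b[4]
effect = treated[4] - synth_post
print(best_w, round(effect, 2))
```

Here the treated unit's pre-period series is an exact convex combination of the two controls (w = 0.5), so the synthetic control matches the pre-trend perfectly and the post-period gap of 1.0 is read off as the effect; with real data the pre-period fit is imperfect and inference comes from placebo runs.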
## Propensity Score Matching
is ideal in cases where
– Assignment to the treatment group correlates with variables relevant to the outcome variable (treatment assignment bias)
– Few cases eligible for comparison group are comparable to treatment group case (on covariates deemed relevant)
– Many relevant dimensions on which to match
Tactic: Generate a “propensity score” via logit regression of participation on confounders, giving the predicted probability of participation in the treatment group.
Then: Each treatment participating case gets one or more matched comparison cases based on their confounding variables, which give their propensity to have been participants. To do this, we need measures and thresholds of nearness.
I’m unclear about one thing: is it nearness on P-hat, or nearness on the confounders? If the latter, does that still work as though there is a variable P for participation, with P~X and P~Y, so we model P~X, pick comparisons that look like the group for whom P=1, and then assume the relationship to Y operates similarly in treatment & comparison groups?
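A minimal sketch of the tactic, assuming the matching is done on P-hat (the scalar score), which as I understand it is the standard Rosenbaum–Rubin approach because matching on the score balances the confounders. The data-generating process, the hand-rolled gradient-ascent logit, and all effect sizes below are invented for illustration.

```python
# Hedged sketch of propensity score matching: fit a logit for participation,
# then nearest-neighbor match each treated case to a control on P-hat.
# Everything here (one confounder, true effect of 2.0) is invented.
import math
import random

random.seed(2)
n = 2000
X = [random.gauss(0, 1) for _ in range(n)]          # one confounder
# participation is more likely for high X; outcome also rises with X,
# so a naive treated-vs-control mean difference would be biased upward
D = [random.random() < 1 / (1 + math.exp(-x)) for x in X]
Y = [2.0 * d + 1.5 * x + random.gauss(0, 1) for d, x in zip(D, X)]

# Fit logit P(D=1|X) by gradient ascent to get P-hat for every case
b0, b1 = 0.0, 0.0
for _ in range(300):
    g0 = g1 = 0.0
    for x, d in zip(X, D):
        p = 1 / (1 + math.exp(-(b0 + b1 * x)))
        g0 += (d - p)
        g1 += (d - p) * x
    b0 += 0.5 * g0 / n
    b1 += 0.5 * g1 / n
phat = [1 / (1 + math.exp(-(b0 + b1 * x))) for x in X]

# Match each treated case to the control with the closest P-hat
controls = [(p, y) for p, d, y in zip(phat, D, Y) if not d]
diffs = []
for p, d, y in zip(phat, D, Y):
    if d:
        _, y_match = min(controls, key=lambda c: abs(c[0] - p))
        diffs.append(y - y_match)
att = sum(diffs) / len(diffs)   # average treatment effect on the treated
print(round(att, 2))
```

With one confounder, P-hat is monotone in X, so matching on the score and matching on the confounder coincide; the payoff of the score is that with many confounders you still match on a single number.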
## Liberalism’s Golden Boy Echoes “Why Liberalism Failed”
Neoclassical economics runs in my ideological veins. In my office, I am often the sounding board for economic policy ideas: couldn’t we impose a $30/hour wage for in-home caregivers? “No – inevitably,” I begin, “… inevitably, the demand for formal in-home care will plummet, and in fact families will turn to a black market for low-wage work.” This inevitability is despite the enormous value caregivers provide to society, and this is also because of the low socioeconomic status of the caregiving workforce — they’re immigrant women, and if they held any higher status, they would not be in the industry: better options would be available at McDonald’s or at Walmart. The issue is too far gone.
This example embodies the failure of liberalism as Patrick J. Deneen sees it in Why Liberalism Failed. We are not free, and the more we learn about how our state and our markets work, the less we consider our world to be one of opportunity. This example is a failure of our state (a failed immigration system) and our markets (a failure to meet dire need, reward valuable work, and establish true freedom for either consumer or worker).
Deneen claims that trends in the markets and the state are pushing liberalism away from its core values for humankind — “to secure liberty and human dignity through the constraint of tyranny, arbitrary rule, and oppression” — and toward a “remaking of the world in the image of a false anthropology.”
This is Deneen’s key thesis: we began with a structure that held honorable goals that protected us from oppression but required us to collectively define the liberty, dignity, and culture we desired; at some point, our definitions rotted; since that point, the entire structure is decaying from its core, spread only more quickly by a polar politics pulling that ideology out to every last piece of the structure.
For the conscious-but-partisan among us, we see the rot only when it is spread by the other team’s favored institution: the market or the state. But in fact, we adopted the rotted definitions ourselves, and we are responsible for their spread. At best, we are left to be angry cynics mad at the state of affairs but acquiescent, unwilling to improve it except marginally. At worst, we are cruel cynics: both unwilling to act and uncaring about the harms arising. In either case, we are shepherded toward cynicism while chanting that the arc of the system bends toward liberty if it will bend our way. As Deneen wrote, “our liberation renders us incapable of resisting these defining forces — the promise of freedom results in thralldom to inevitabilities to which we have no choice but to submit.”
## Take the Ideology of Competition, for Example
Now I want to return to that “false anthropology,” and to use the ideology of competition as an example of what Deneen may have meant by “false anthropology.” I interpret “anthropology” to mean a mental model of what humans value. Leaning on my caregiver example, the most obvious example of a false anthropology that’s common in liberalism (and neoclassical economics) is the equivalence of human incentives with competition: that direct competition is a key component of human life, and therefore that competition can suitably motivate all structures that govern human activity in our liberal society. The market is competitive, therefore its results are good. (Ah, Milton Friedman.) And the effect of this on human life and the state is clear. We made humanity subservient to competition.
Specifically, under this competition ideology — just one ideology of many that has rotted the core of liberalism by improperly defining “liberty and human dignity” — human life has been shifted into a near-constant state of anxiety. We are not secure in our access to necessities. We are never secure. We live in a society where the elderly — even with the Social Security benefits due from a lifetime of hard (but low-paid or “competitively paid”) work — face 8-month waiting lists to access Meals on Wheels in many cities. The cruel cynic supposes they can simply make do in those 8 months; the angry cynic calls for more government action to support the individual’s need for state support under a “glitch” in competition’s benefits. But it’s not a glitch. The system is not a leaky boat (which we tell ourselves it is), but rather a life vest (which we’ve held onto for far too long). Its inadequacies are uncountable, but I’ll continue: your access to health care could disappear if your employer went out of business (especially pre-Obamacare). And, because of the subservience of humanity to the “false anthropology” of competition, we feel no responsibility to help our nearest neighbors in the event that they are without home, food, or health care. The competition example is just one of many to be made. But I will explore it further still.
(As I explore the competition example, I’d like you to consider what “false anthropology” means as you read it, and what exemplary definitions of liberty, dignity, and culture draw from a “false anthropology” as you see it. I’m reachable on Signal and iMessage at 831-402-2736.)
For a jarring break of pace about competition, I will recount a story from my childhood. In a made-for-radio contest called “hold your wee for a Wii,” a local mother competed for a Wii to give her children by drinking gallons and gallons of water. She died. The pressure of competition placed blinders over her vision, disappearing her humanity, long-term thinking, and health.
A true anthropology would exalt humankind’s ability to plan for the long term, to take care of future generations and of health and well-being, and would consider connectivity and collaboration as a wellspring of problem-solving. It need not be utopia, but it would be hard to summarize in a word and diminish to a process that induces anxiety to motivate human action. In conclusion, and applying it to a contemporary crisis: a true anthropology would not struggle to reconcile “motivat[ing] liberty and human dignity through the constraint of tyranny, arbitrary rule, and oppression” with the changes in human activity required to confront climate change.
This concludes my example about competition and draws me into some very-current events. Today, it is 2020; today, the political moment is coronavirus pandemic tumult. Today, debate rages about whether to reopen society from shelter-in-place orders after a month of social distancing. Merkel is on her way out — steady leader of the liberal order that she was — and Macron was embattled but is newly seen as steady-handed. Trump, Bolsonaro, López Obrador, Erdogan, and Duterte are in, but their styles are currently questioned by the fearful masses. For all the fuss over the US’s top-level populism, it is Europe’s nationalists that set the pace for ascendant illiberalism in the West. The last five years, from Brexit to the EU elections, have demonstrated the visceral disgust felt by the individuals liberalism liberated toward the markets and states that perpetuate their individual liberties. But hold on, what liberties? The definitions we take to answer that question answer Why Liberalism Failed, and I have sampled above how those definitions shortchange humanity. Want to move to a new system and stop shortchanging ourselves? Pick a new, truer, richer anthropology to guide the future. Define and defend culture.
## Macron’s Anthropological Shift
This is precisely what French president Emmanuel Macron charted out yesterday, in an interview with FT. (The article, the video.) He may well have read Why Liberalism Failed himself during his sheltering in place. (I imagine he is having less travel time.)
France is notorious among my peers in economics for the “friction” they introduce in their markets: “inevitably,” we jeer, “making it harder to fire people makes a company slower to hire people.” After his bold actions to reduce these frictions to ensure economic competitiveness, the people seemed liable to fire Macron and hire a nationalist for president. Years-long protests undermined his political credibility. His pro-EU position faced the obvious skepticism of a people anxious about their security to access the necessities and anxious about their jobs and companies in the flows of trade, migration, and finance. But it is unlikely that the people would be satisfied with a tyranny of the majority and the results of a nationalist pivot — even if they were to vote one in. Nationalism does not solve the problems of liberalism; it extends them, because it has at its core the same rotten ideologies. Macron doubtless would make that argument on nationalism.
What Macron instead proposes is to replace the rotten core of liberalism and hope that the structure is renewed by a stronger, truer anthropology at the very center of it all. (Even if rotted, perhaps the structure is salvageable with a new set of definitions of liberty, dignity, and culture.) Macron names — multiple times — “anthropology” as the crux of what he wants to restore. This suggests that Macron is now thinking beyond just how policies can bandage the individuals’ wounds from competition or assuage a key constituency’s anxieties. No: Macron is thinking about nothing less than redefining what the state and market are geared toward, e.g. subbing out competition with collaboration.
Rather than hiding how the structure is shortchanging our dignity and culture, he lists off the anxieties liberalism wrought — anxieties he exacerbated — with a serious tone. He soberly states that liberalism is not just a post-Covid failure: it was a failure before Covid, and the writing was on the wall. The writing on the wall was addressed to him, liberalism’s poster boy.
He expresses and exudes optimism, and he promises to replace the core ideology of our state and market. If he — and our generation generally — succeed in replacing the ideology, we could move sustainably past this fail-state tug-of-war between liberalism and nationalism, between right and left. Human dignity and connectivity should come first, and economics should be subservient to those intuitions. We should not feel insecure in a society that’s so technically advanced. Speaking of Covid’s role in this shift, Macron hits the nail on the head: “I think it’s a profound anthropological shock,” he says. “We have stopped half the planet to save lives, there are no precedents for that in our history.” Covid is a temporary shock to the liberal order, not from a nationalist front but from an exogenous mortal reckoning. Used correctly, it can help us pivot away from a painful decline of liberalism and the resultant, unsatisfactory surge in nationalism. Instead, we can pivot toward enhancing our collective ability to plan and act on shared priorities, and to cherish life and a dignified existence.
We will know society’s rot has been excised from its liberal core when we neither ignore the suffering of a neighbor due to the supposed inevitability of that suffering nor feel so routinely insecure that we cannot act collectively or individually to protect our lives and dignity. And, you and I must pray that we excise the rot without excising the rest of the protective structure we cherish about this society. (In which case we will face both arbitrary oppression from above and anxiety from the dreadful sights just below.)
It would be easy to see how Macron would be moved upon reading Deneen: he is a former investment banker, recently confronted with the incredible pains his rural constituents are feeling. He was the type of guy who began counterarguments with “inevitably, under competition,” but suddenly he was faced with the projections of havoc from a climate crisis, and then he was faced with the real pain of the French layperson, and now he is faced with most of his country fearing their lives or those of their parents. After facing the enormity of the challenges, the scope of his concern likely changed. “No,” he realized, “trade adjustment assistance and free community college won’t heal these anxieties.” (My apologies to Bernie, Warren, Biden, Buttigieg, 2019 Macron, Merkel, Obama, and Cameron.) The takeaway from Covid-19 is that our economic and state structures do not value what we value — even if they have bandages.
You can’t repeat “suffering is inevitable” for long before you realize you’re shortchanging anthropology. In fact, now that we look around at our desolate place in the big wide ocean, and our mortality becomes salient, we realize we’ve been holding onto a rotten life jacket all along.
We need something more solid: we need our friends and family to be valued, and ourselves to be secure. That was supposed to be the promise of society. What ever happened?
We need something more solid now. We need Macron, liberalism’s heir apparent at the end of this first quarter-century of a new millennium, to (1) keep thinking about that “truer anthropology,” (2) install credible changes in French society that better define its core values, including needed actions that humbly reverse his past policy positions and that genuinely empower the French people including his current opponents, (3) report honestly how this project succeeds and why it met resistance, and (4) remain committed to a true anthropology above his commitment to saving the vestiges of liberalism.
After all, he will have only several more years to take this Covid awakening forward into practice and into structural changes. There will be an unprecedented demand for those changes, but unprecedented external turbulence as well. The projects of fixing the Covid fallout must be seen as the substrate of piloting a new social order, not seen as a distraction from doing so.
Excerpts from: Patrick J. Deneen. “Why Liberalism Failed.” Apple Books. https://books.apple.com/us/book/why-liberalism-failed/id1327280661
CCAPP Seminars (http://ccapp.osu.edu/abstracts_archive.html)
# Early Stages of Ultra High Energy Cosmic Ray Air Showers as a Diagnostic of Exotic Primaries
10/06/06
Eun-Joo Ahn (Bristol Institute/U.Delaware)
The nature of ultra high energy cosmic rays (UHECRs) remains an enigma. The UHECR detection rate is increasing with new-generation detectors, which will speed up the process of understanding these energetic particles. The field of UHECRs is briefly reviewed with a focus on air shower characterisation of the primaries. I show that the study of the first (few) interaction(s) can substitute for the full-scale Monte Carlo in analysing the air shower characteristics. This method is advantageous for testing new models with many parameters. Exotic primaries can be compared with well-studied primaries such as protons and iron nuclei. One such exotic case is TeV black hole creation, which can happen in models of large extra dimensions. High energy neutrinos interacting with air molecules may form these objects in the Earth's atmosphere, and a good way of discriminating them from other backgrounds is through air shower studies.
# Dark Matter vs. Modified Gravity
10/17/06
Scott Dodelson (Fermilab/Chicago)
There is abundant evidence for dark matter in the universe. Why even consider the alternative of modifying gravity? Despite its role as an underdog, modified gravity has scored a number of successes. Several recent advances, both theoretical and observational, have given new life to this old idea. I will try to convince you that the game is not over, and that the struggle between the two approaches is as exciting as ever.
# Concepts and Challenges of the Accelerating Universe
10/27/06
Eric Linder (UC Berkeley/Berkeley Lab)
Recent developments in understanding the influence of dark energy dynamics on cosmological observables have led to several insights in how to reveal the nature of dark energy. This includes the categorization of many physics models for the dark energy into either freezing or thawing behavior, recognition of differences from the inflation scenario, and methods for robustly distinguishing a physical dark energy from a modification of gravitational physics. These have definite consequences for experiment design, such as prescription of the relative precision needed for dynamics measurements, the need for probes of both cosmological expansion and large scale structure growth, and how dark energy microphysics can contribute a theory-induced systematics limit on many techniques.
# Exploring the Dark Energy Domain
11/03/06
Dragan Huterer (Chicago)
One of the great mysteries of modern cosmology is the origin and nature of dark energy - a smooth component that contributes about 70% of the total energy density in the universe and causes its accelerated expansion. In this talk I describe and critically evaluate a variety of methods, from simple parametrizations to non-parametric methods, to model the background expansion history in the presence of dark energy. Then I present results from a comprehensive study of a class of dark energy models, commenting on current and expected future constraints, insights into the dynamics of dark energy, figures of merit, and a classification of theoretical models.
# Galaxy groups in DEEP2: Implications for cosmic evolution
11/07/06
Brian Gerke (UC-Berkeley)
Groups and clusters of galaxies, as the largest, most recently formed objects in the universe, carry much information about the recent history of the cosmos. By studying these systems at a variety of epochs, it is possible to reconstruct both the evolution of clusters and the history of large-scale structure formation. Such studies provide important constraints on theories of galaxy formation and on cosmological parameters. With the recent completion of the DEEP2 Galaxy Redshift Survey at z~1, it is now possible to perform detailed studies of galaxy groups and clusters over a wider redshift range than ever before. In this talk I will present recent results suggesting that, at the DEEP2 epoch, galaxy groups had *only recently* become suitable environments for shutting off star formation in galaxies. I will also present evidence that DEEP2 groups are underluminous in the X-ray band, when compared with local systems. Finally, I will describe an ongoing project to compare the DEEP2 group population to the local sample detected in the 2dFGRS. This work will allow new tests of galaxy-formation theory by probing evolution in cluster mass-to-light ratios. It will also permit new constraints on cosmological parameters by measuring the evolution of the group abundance between z~1 and the present day; in particular, this study should provide the first-ever constraint from cluster counts on the dark energy equation of state parameter.
# Light and Shadows from Dark Matter
11/17/06
Stefano Profumo (Caltech)
Even though the Dark Matter is dark - and therefore features very suppressed electro-magnetic interactions - photons can be, in principle, very sensitive probes of this as yet undetected and unknown (in its fundamental particle physics nature) component of the Universe. Dark matter can pair annihilate into Standard Model particles that yield photons in their subsequent decays, or it can directly pair annihilate into monochromatic photons, or decay into photons. In certain scenarios, photons can also resonantly scatter off Dark Matter, depleting the photon flux from sources located behind (or at the center) of high density Dark Matter concentrations. In this talk, I will review and present new results on photons as a probe of the fundamental nature of Dark Matter.
# From Outer Space to Inner Space: Particle Physics in the Age of Precision Cosmology
12/1/06
Will Kinney (U. of Buffalo)
I will give an overview of exciting new developments at the interface between astrophysics and particle physics, focusing on the physics of inflation in the very early universe. New cosmological observations such as that from the WMAP satellite and the Sloan Digital Sky Survey have achieved unprecedented precision: Uncertainties in cosmological parameters such as the curvature of space and the density of matter have shifted from order unity to of order a few percent. As a result, it is possible for the first time to place meaningful constraints on the physics of the universe during the epoch of inflation, when the universe is believed to have expanded exponentially and quantum processes created the seeds for structure in the universe. This epoch is of great interest for fundamental physics, and cosmology is giving us the first observational hints of physics at ultra-high energy, where Grand Unification and perhaps even quantum gravity may be relevant.
# The Radial Distribution of Galactic Satellites
12/12/06
Jacqueline Chen (Bonn)
The spatial distribution of satellite galaxies around host galaxies can illuminate the relationship between satellites and dark matter subhalos and aid in developing and testing galaxy formation models. The projected cross-correlation of bright and faint galaxies offers a promising avenue to putting constraints on the radial distribution of satellite galaxies. Previous efforts to constrain the distribution attempted to eliminate interlopers from the measured projected number density of satellites and found that the distribution is generally consistent with the expected dark matter halo profile of the parent hosts. The measured projected cross-correlation can be used to analyze contributions from satellites and interlopers together, using a halo occupation distribution (HOD) based analytic model for galaxy clustering. Tests on mock catalogs constructed from simulations show promise in this approach. Analysis of Sloan Digital Sky Survey (SDSS) data shows results generally consistent with interloper subtraction methods, although the radial distribution is poorly constrained with the current dataset and larger samples are required.
# STARCaL: Precision Astrophysics & Cosmology Enabled by a Tunable Laser in Space
01/12/07
Justin Albert (U. of Victoria)
We propose a tunable laser-based satellite-mounted spectroscopic, spectrophotometric, and absolute flux calibration system, to be utilized by ground- and space-based telescopes. As spectrophotometric calibration plays a significant role in the accuracy of photometric redshift measurement, and photometric redshift accuracy is important for measuring dark energy using SNIa, weak gravitational lensing, and baryon oscillations, a method for reducing such uncertainties is needed. We propose to improve spectrophotometric calibration, currently obtained using standard stars, by placing a tunable laser and a wide-angle light source on a satellite by early next decade (perhaps included in the upgrade to the GPS satellite network) to improve absolute flux calibration and relative spectrophotometric calibration across the visible and near-infrared spectrum. For spectroscopic measurements, the precision calibration of wavelength scale that is enabled can reduce uncertainties on measurements of fundamental constants using, e.g., quasar absorption lines. In addition to fundamental astrophysical applications, the system has broad utility for atmospheric & climate science, defense and national security applications, and space communication.
# Mass Profiles and Mass-to-light Ratios of SDSS Clusters from Lensing in the SDSS
1/19/07
Erin Sheldon (NYU)
The Maxbcg catalog of galaxy clusters, created from 7500 square degrees of Sloan Digital Sky Survey (SDSS) imaging data, is the largest yet assembled. These objects, ranging from small groups to massive clusters, provide an excellent laboratory to study the formation of structures in our universe. I will present measurements of the mean radial mass profile measured from weak gravitational lensing as a function of cluster richness and luminosity. The wide area of the SDSS allows measurements ranging from the inner halo (25 kpc) well into the surrounding large scale structure (30 Mpc). As predicted by the cold dark matter model, these mass profiles have a distinctive non-power law shape. They are well described by a universal NFW profile in the inner halo and linear correlations on large scales. The virial mass scales strongly with cluster richness. We also measure the total light of the galaxies in and around the clusters. The light is distributed in the cluster differently than the mass, with the light being more centrally concentrated due to the presence of the brightest cluster galaxy. We find that the mass to light ratio is scale dependent and asymptotically approaches the same global value on large scales, independent of cluster mass.
# Infalling Satellites and Structure of Galactic Disks in CDM Models
1/23/07
Stelios Kazantzidis (KIPAC/Stanford)
The Cold Dark Matter (CDM) model of hierarchical structure formation has emerged as the dominant paradigm in galaxy formation theory owing to its remarkable ability to explain a plethora of observations on large scales. Yet, on galactic and sub-galactic scales the CDM model has been neither convincingly verified nor disproved, and several outstanding issues remain unresolved. Using a set of high-resolution numerical simulations, I investigate whether the abundance of substructure predicted by CDM models is in conflict with the existence of thin, dynamically fragile galactic stellar disks. I show that encounters of massive subhalos with the center of the host potential where the disk resides at z < 1 are quite common and yield significant damage to the disk. However, these violent interactions are not absolutely ruinous to the survival of disks. I demonstrate that infalling satellites produce several distinct observational signatures, including flaring, long-lived, low-surface-brightness, ring-like and filamentary structures, and a complex vertical morphology that resembles the commonly adopted thin-thick disk profiles used in the analysis of disk galaxies. These results imply that substructure plays a significant role in setting the structure of disks. Upcoming galactic surveys and astrometric satellites offer a unique opportunity to distinguish between competing cosmological models and constrain the nature of dark matter on non-linear scales through detailed observations of galactic structure.
# Ultraviolet Pumping of the 21 cm Line in the High Redshift Universe
2/13/07
Leonid Chuzhoy (U.Texas)
The next generation of radio telescopes (LOFAR, MWA, SKA, 21CMA) promises to open a new observational window into the epoch preceding the end of reionization. By measuring the redshifted 21 cm signal from neutral hydrogen, the new telescopes can provide us with information on the history of reionization, the nature of the first radiation sources, the spectrum of the primordial density perturbation field, the physical properties of dark matter particles and so on. Besides the technical challenge, the correct extraction and interpretation of the measured signal requires accurate modeling of the physical processes that affect it. Unlike the collisionally pumped 21 cm signal from the nearby sources, the signal from high redshift intergalactic medium is pumped primarily by ultraviolet (UV) resonance photons. In this talk I will describe new calculations of UV pumping, which take into account several previously neglected physical processes, including the backreaction of induced hyperfine transitions on the incident UV photons and conversion of X-rays into the UV photons. I will show that neglecting these processes generally results in completely erroneous interpretation of the observed 21 cm signal.
# Quasars Probing Quasars: Shedding (Quasar) Light on High Redshift Galaxies
2/20/07
Joseph Hennawi (Berkeley)
With close pairs of quasars at different redshifts, a background quasar sightline can be used to study a foreground quasar in Ly-alpha absorption. This novel experiment allows us to probe the foreground quasar environment on scales as small as a galactic disk where the ionizing flux from the quasar could be as large as ~ 10,000 times the extragalactic UV background. I will discuss the manifold cosmological applications of these rare projected sightlines: they provide new laboratories for studying the faint fluorescent recombination radiation from the high redshift Universe, they constrain the environments, emission geometry, and radiative histories of quasars, and they shed light on the distribution and kinematics of the gas in high redshift proto-galaxies.
# A New Look at the Galactic Diffuse GeV Excess
03/06/07
Brian Baughman (UCSC/SCIPP)
The EGRET experiment onboard the Compton Gamma-ray Observatory has provided the most precise measurements of the gamma-ray sky to date. EGRET measurements of diffuse emission across the sky show an excess above 1 GeV. This “GeV excess” has been a topic of great debate and interest since its original discovery by Hunter et al. in 1997. While various attempts have been made to explain the measurement as new phenomena, the possibility remains that it may be due to unknown instrumental effects. To examine this, I have modified the GLAST simulation and reconstruction software to model the EGRET instrument. This detailed modeling has allowed me to explore the parameters of the EGRET instrument, in both its beam-test configuration and in-orbit on CGRO, in greater detail than has previously been published. While it was our intention to examine the possibility that the GeV excess was the result of some hitherto unknown instrumental effect, I have instead found that the GeV excess is significantly increased when previously unaccounted-for instrumental effects are considered. I will present a new measurement of diffuse gamma-ray emission in the inner Galaxy, as well as the methodology used in our measurement.
# Cosmology in the Era of Large Surveys
3/13/07
Ryan Scranton (U. of Pittsburgh)
The past decade has seen an unprecedented improvement in our understanding of the basic picture of the universe. We have gone from factors of 2 uncertainties in the age and matter density of the universe to better than 10% precision thanks to a vast increase in the available survey information. Along the way, the depth and breadth of these surveys have made previously impossible measurements a reality. I will discuss two such cases: detection of cosmic magnification and the integrated Sachs-Wolfe effect. Another unexpected benefit of these surveys has been the discovery of dark energy. While the current generation has been sufficient to demonstrate its existence, we will have to wait until the next round of surveys to fully explore the details of dark energy behavior throughout the history of the universe. I will finish with a discussion of some of the tools we will need to develop over the course of the next several years to fully exploit the power of future surveys like the LSST, SPT and JDEM.
# Neutrino Astronomy and Particle Physics with the IceCube Detector
4/5/07
Doug Cowen (Pennsylvania State U.)
IceCube is a neutrino detector under construction at the South Pole. It is designed to search primarily for energetic neutrinos from cosmological sources, but is also sensitive to many other signals from neutrinos and other particles. In this talk we will make the case for neutrinos as astronomical messengers, describe how the IceCube detector will be able to detect them, discuss results from both the first year of full-scale running of the (partially-constructed) detector and from IceCube's progenitor AMANDA, and conjecture about when IceCube will make discoveries.
# The Least Luminous Galaxies in the Universe
4/10/07
Beth Willman (Harvard-Smithsonian Center for Astrophysics)
In the last few years, a combination of observational and computational advances have ignited the field of near-field cosmology - using galaxies in the nearby Universe as tracers of dark matter on small-scales and using their detailed properties as fossil records of the process of galaxy formation from the earliest times until now. For example, since 2005, a dozen dwarf galaxies have been discovered around the Milky Way and M31 that are less luminous than previously thought possible to exist. These discoveries will both revolutionize our understanding of galaxy formation at the lowest luminosities and will shed new light on the properties of dark matter on galaxy scales. I present the results of these searches and discuss them in a cosmological context.
# Closing in on Ultra-High Energy Cosmogenic Neutrinos with the Radio Detection Technique
4/16/07
Amy Connolly
No diffuse cosmic neutrino flux has yet been observed, but the highest energy cosmic rays imply an associated flux of neutrinos. These neutrinos, with energies that exceed 10^18 eV, will point back to their source, are nearly unattenuated over cosmic distances, and in any detection medium, will induce interactions at center-of-mass energies beyond those seen at any accelerator on earth. I will describe current and future experiments that seek ultra-high energy cosmic neutrinos, which are so evasive they require detection volumes beyond 100's of km^3. Volumes of such size are achievable using the radio Cerenkov technique, and I will discuss current and future projects that utilize this detection method, including the ANITA balloon experiment which just completed its first full physics flight in January of this year.
# Probing Hydrogen Reionization
4/17/07
Detailed observations of the Epoch of Reionization (EoR) will characterize the nature of the first luminous sources in the Universe, describe their impact on the surrounding IGM, and fill in a significant gap in our knowledge of the history of the Universe. I will describe recent efforts to theoretically model the EoR. Then I will discuss the theoretical interpretation of quasar absorption spectra at z~6, and comment on future 21 cm probes of reionization.
# A New Twist on Galaxy Scaling Relations
4/24/07
Dennis Zaritsky (Univ. of Arizona)
Galaxy evolution has proven to be a difficult problem, partly because we appear to be unable to separate various parts of the problem. I will discuss new results on galaxy scaling relations that suggest that galaxy structure may be much more scalable than previously appreciated. Our extended Fundamental Plane formalism has implications for the nature of spheroids on all scales, the physical processes that might affect the smallest galaxies, the distribution of baryons within dark matter halos, and the evolutionary state of spiral galaxies.
# Galactic Cosmic Rays and Diffuse Gamma-Ray Emission
5/1/07
Igor Moskalenko (Stanford)
Practically all our knowledge of cosmic ray (CR) propagation comes from studies of the composition and spectra of CR species. Therefore, astrophysics of cosmic rays and gamma rays depends very much on the quality of the data and their proper interpretation. Combining the data of different experiments into a single interpretive model of the Galaxy gives us a better chance to understand the mechanisms of particle acceleration, the role of CR in the dynamics and evolution of the Galaxy, and to provide a common background model upon which further progress in related areas can be made. The new generation gamma-ray observatory GLAST is to be launched in December of 2007; it covers the energy range from MeV to TeV energies and for the first time will close the gap between the spacecraft instruments and ACTs. GLAST will advance our knowledge of the detected sources, discover thousands of new sources, and provide invaluable insight into the propagation of CR in the Galaxy. In my talk, I will summarize the current status of astrophysics of CR and speculate on what we can learn from GLAST and other space missions.
# Voids of dark energy
5/8/07
Irit Maor (Case Western)
I discuss the clustering properties of a dynamical dark energy component. Modelling the dark energy as a light scalar field, I numerically explore the linear evolution of perturbations. The regime where the mass scale of the field is comparable to the Hubble scale gives non-trivial dynamics, and the scalar field tends to form underdensities in response to the gravitationally collapsing matter. I shall discuss in detail the physics behind the formation of such voids, and the generality of these results. Detection of dark energy voids will clearly rule out the cosmological constant as the main source of the present acceleration.
# The Growth of Massive Galaxies
05/15/07
Andrew Benson (Caltech)
I will present new results from ongoing work aimed at understanding how massive (mostly elliptical) galaxies grow. Evidence is accumulating that this process isn't as simple as was previously thought - mergers between galaxies might not be the only (or even the main) driver of this growth. I will demonstrate that the perceived difficulty of forming massive galaxies at high redshifts in cold dark matter models is not a problem at all. The real problem is preventing them from becoming too massive! I'll show the latest results from model calculations along with some current observational measures and prospects for the future.
# Galaxy and halo evolution and environment
5/22/07
Ravi Sheth (U. Penn)
I will present a selection of measurements showing how galaxy properties and star formation histories correlate with their environment. I will then describe halo model interpretations of such measurements. These include a rather simple description of what appear to otherwise be complex correlations between galaxies and their environments; a number of ways in which the halo model makes a connection to observables which were previously the domain of SPH or semi-analytic galaxy formation models (e.g., the different formation histories and mass-to-light ratios of central and satellite galaxies, and the intracluster light component); and a complete description of the no-merger passive evolution model which can provide the basis for understanding the assembly of stellar mass in the most massive galaxies.
# Angular signatures of dark matter in the diffuse gamma ray background
5/29/07
Pasquale Serpico (Fermilab)
Dark matter annihilating in the halo of our Galaxy and elsewhere in the universe is expected to generate a diffuse flux of gamma rays, potentially observable with next-generation satellite-based experiments, such as GLAST. We present the expected signatures of dark matter, in particular the deterministic features in the angular distribution of this radiation at large scales, pertaining to both the galactic and the extragalactic contributions. If at least a few percent of the diffuse gamma-ray background observed by EGRET is the result of dark matter annihilations, then GLAST should be able to detect many of the signatures discussed in this talk.
# The nature of cosmic explosions
6/27/07
Avishay Gal-Yam
I will try to review what we know about the various classes of cosmic explosions, how we came by that knowledge, and what we do not yet know. I will focus on some areas of recent progress, and will present some prospects for the near and mid-term future.
# The Search for Neutrinos from Gamma-Ray Bursts with AMANDA
9/7/07
Kyler Kuehn (UC Irvine)
The Antarctic Muon and Neutrino Detector Array (AMANDA) is a neutrino telescope located beneath the ice at the geographic South Pole. AMANDA searched for high energy neutrinos from both discrete and diffuse astrophysical sources from 1997 to 2004. Neutrino telescopes like AMANDA provide a unique window into the nature of various astrophysical phenomena, complementing what can be learned from other ground- or space-based observatories. We present the results of AMANDA's neutrino observations correlated with more than 400 gamma-ray bursts (GRBs) in the Northern Hemisphere during the first seven years of AMANDA operation. During this time period, AMANDA's effective collection area for muon neutrinos was larger than that of any other existing detector. Based on our observations, we set the most stringent upper limit on muon neutrino emission correlated with gamma-ray bursts to date. The impact of this limit on several theoretical models of GRBs is discussed, as well as the future potential for detection of GRBs by AMANDA's successor, IceCube.
# Reverse Engineering Galaxy Formation
9/11/07
Jeremy Tinker (University of Chicago)
I will show new results from halo occupation analyses of clustering measurements that provide insight into the processes that determine galaxy properties. From simultaneous analysis of galaxy correlation functions and galaxy void statistics, I show that the occupation of galaxies in halos at fixed mass is independent of large-scale environment. This is true for galaxies selected on luminosity, color, and morphology, implying that these galaxy properties are determined by the mass of the halo in which they sit, irrespective of the formation history of that halo.
# The Distribution of Baryons in Galaxy Clusters and Groups
9/20/07
Anthony Gonzalez (University of Florida)
If galaxy clusters are indeed fair samples of the universe, then a basic expectation is that the baryon fraction in clusters and groups should reflect the universal value. Observed shortfalls have therefore led to proposals of missing baryons in a warm gas component, as well as other more controversial interpretations. I will present the results of a program to understand the distribution of baryons in nearby galaxy clusters and groups, as well as the properties of their central galaxies. A main focus of this work is to quantify the total stellar baryon fraction, including stars in both galaxies and the intracluster light, and combine these data with published measurements of the hot baryon fraction in the intracluster medium (ICM). We find that the total baryon fraction is independent of cluster mass, with no compelling evidence for missing baryons. I will also present related results from this program pertaining to cluster galaxy evolution, galaxy structure, and chemical enrichment of the ICM.
# Gamma-Ray Bursts: Recent Progress
9/26/07
Pawan Kumar (University of Texas, Austin)
I will describe recent progress in our understanding of gamma-ray bursts. Using early x-ray data from the Swift/XRT, we are able to determine the distance from the center of explosion where gamma-ray emission is generated. I will also discuss what we have learned about the mechanism by which gamma-ray photons are generated, and the composition of the relativistic outflow in these explosions.
# Two-filter Cluster Finding in the Millennium Simulation (...and beyond)
10/09/07
Joanne Cohn (UC - Berkeley)
The era of large scale galaxy surveys is now upon us. These surveys make possible large samples of galaxy clusters, which can be used for a variety of astrophysical and cosmological purposes. One of the most promising methods for selecting such samples of galaxy clusters is known as the red sequence method. It requires only two filters and has been shown to be successful in low redshift pilot studies. In order to better understand what sorts of objects are found by these methods at higher redshift, we used two filter cluster searches on outputs of the Millennium Simulation. We found a higher fraction of blends as redshift increased. We expect that the properties causing this blending are generic. We also explored ways to reduce or compensate for the blending in the analysis, which highlighted the crucial role of extremely accurate mock catalogues.
# Galaxy Content of Clusters and Groups in the Local Universe
10/23/07
Sarah Hansen (University of Chicago)
I will present recent analysis of SDSS imaging data quantifying the population of galaxies in MaxBCG-identified clusters and groups. I will discuss the distributions of satellite galaxy luminosity and satellite color and the dependence of these on cluster properties. I will also show the relationship of Brightest Cluster Galaxy luminosity to cluster mass and to satellite galaxy luminosity. These measurements of cluster light, in combination with lensing results, also allow measurement of ensemble cluster mass-to-light profiles. This study demonstrates the power of cross-correlation background correction techniques for measuring galaxy populations in purely photometric data, and provides a baseline for the study of galaxy evolution in higher redshift samples.
# The race to detect dark matter with noble liquids, XENON10 results, and the LUX experiment
11/6/07
Tom Shutt (Case Western)
Detectors based on liquid noble gases have the potential to revolutionize the direct search for WIMP dark matter. The XENON10 experiment, of which I am a member, has recently announced the results from its first data run and is now the leading WIMP search experiment. LUX, a large-scale follow-up to XENON10, and other experiments using xenon, argon, and neon have the potential to rapidly move from the current kg-scale target mass to the ton scale and well beyond. This should allow a (nearly) definitive test or discovery of dark matter if it is in the form of weakly interacting massive particles.
# On Scatter in Galaxy Cluster Mass-Observable Relations
11/20/07
Douglas Rudd (Institute for Advanced Study, Princeton)
In this talk I will discuss issues relevant to the use of galaxy cluster abundances to constrain the properties of dark energy. In particular I will focus on the use of self-calibration to jointly constrain cosmological and cluster model parameters using the large cluster samples provided by SZ surveys. These surveys should have sufficient statistics and sensitivity to dark energy to remain competitive with other dark energy probes, provided the connection between cluster observables and mass is describable in a small number of extra nuisance parameters. I will discuss the distribution of cluster SZ observables extracted from a large sample of simulated clusters and the link between halo assembly history and scatter in the mass-observable relations.
# Beyond WMAP: the CMB at large and small scales
12/04/07
Joanna Dunkley (Princeton)
I will discuss some current and future measurements of the Cosmic Microwave Background anisotropy: its small-scale intensity and large-scale polarization. I will describe some of the physics that we will be able to test at small scales with the Atacama Cosmology Telescope, including better determining properties of neutrinos, and testing for non-standard inflation. At large scales, the next goal is to obtain evidence for inflation by observing gravitational waves, but we face challenges due to the significant level of polarized radiation from the Milky Way. I will discuss current efforts to better understand and characterize this emission, paving the way for future experiments.
# Cosmic Strings from Supersymmetric Flat Directions
1/22/08
David Morrissey (U. of Michigan)
Cosmic strings are non-trivial configurations of scalar (and vector) fields that are stable on account of a topological conservation law. They can be formed in the early universe as it cools after the Big Bang. The scalar fields required to form cosmic strings arise naturally if Nature is supersymmetric at high energies. A common feature of supersymmetric theories is the presence of directions in the scalar potential that are extremely flat. Combining these two ingredients, the cosmic strings associated with supersymmetric flat directions are qualitatively different from ordinary cosmic strings. In particular, flat-direction strings have very stable higher-winding modes, and are very wide relative to the scale of their energy density. These novel features have important implications for the formation and evolution of a network of flat-direction cosmic strings in the early universe. They also affect the observational signatures of the strings, which include gravity waves, dark matter, and modifications to the nuclear abundances and the blackbody spectrum of the microwave background radiation.
# Prospects For Detecting Dark Matter In Light Of The WMAP Haze
1/29/08
Gabrijela Zaharijas
Observations by the WMAP experiment have identified an excess of microwave emission from the center of our Galaxy, dubbed the "WMAP Haze". It has previously been shown that the origin of the haze could be linked to synchrotron emission from relativistic electrons and positrons produced in the annihilations of dark matter particles - implying a possible detection of dark matter. If dark matter annihilations are in fact responsible for this phenomenon, then other annihilation products will also be produced, including gamma rays. In this talk, I will present the prospects of detecting gamma rays from dark matter annihilations in the Galactic Center region which could reject or confirm this scenario in the near future.
# Observing the First Galaxies and the Reionization Epoch
2/5/08
Steven Furlanetto (Yale)
Finding and understanding the earliest generations of galaxies is one of the frontiers of modern cosmology. Although enormous strides have been made in the past decade, the current observational evidence is ambiguous at best. I will describe two routes toward improving the situation. First, searches for high-redshift galaxies through their Lyman-alpha emission lines can teach us not only about the galaxies themselves but also about the intergalactic medium (IGM). While current measurements constrain their abundance, the clustering of these objects promises to reveal even more information. Second, three-dimensional "tomography" of 21 cm emission (or absorption) by the neutral IGM has the potential to unlock the detailed distribution of baryons between recombination and reionization. I will describe how this cosmic background can teach us about the eras of the first stars, first black holes, and reionization itself. I will also describe some of the challenges facing these measurements.
# First Results from the SDSS-II Supernova Survey
2/8/08
Ben Dilday (University of Chicago)
The SDSS-II Supernova (SN) Survey was undertaken during the Fall months of 2005-2007, with the primary goal of discovering and observing several hundred type Ia supernovae (SNe) in order to improve constraints on dark energy. Additional goals of the survey include determining SN rates and properties and exploring systematics in the use of type Ia SNe as cosmological distance indicators. I will provide an overview of the survey, which resulted in the discovery and spectroscopic confirmation of ~500 type Ia SNe, and discuss cosmological results based on 95 SNe from the first (2005) season. I will describe in detail our studies of the type-Ia SN rate, including (i) the most precise measurement of the rate at low redshift (z< 0.12) from the first season, (ii) extension of the rate measurement to z~0.25, (iii) study of the SN Ia rate as a function of host galaxy properties, e.g., star formation rate, and (iv) study of the rate in galaxy clusters. These rate measurements can provide improved observational constraints on the progenitor systems of type Ia SNe and can therefore improve the utility of SNe Ia as cosmological distance indicators.
# GRB Science with GLAST: From Onboard Detection to the Search for Quantum Gravity
2/12/08
Frederick Kuehn
The Gamma Ray Large Area Space Telescope (GLAST) is the next generation high energy gamma ray observatory. Set to launch in Mid 2008, it is charged with the broad scientific mission of studying a diverse range of phenomena such as gamma ray bursts (GRBs), active galactic nuclei, dark matter, the Sun, pulsars, micro-quasars, as well as mapping the entire gamma ray sky. I will discuss GLAST's ability to trigger and localize on GRBs with high energy emission. This capability is essential for multi-wavelength followup observations used to determine redshifts as well as constrain and inspire GRB models. It has been suggested that quantum gravity may modify the speed of light at high energies, such that it is no longer constant. GRBs are short, bright pulses of gamma rays at cosmological distances, spanning many orders of magnitude in energy. Due to the astronomical distance scales, slight speed differences between photons of different energies lead to measurable time delays. I will discuss GLAST's ability to constrain, or produce evidence for such a scenario.
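The scale of the effect can be made concrete with a standard order-of-magnitude estimate (illustrative only, not taken from the talk): if quantum gravity modifies the photon dispersion relation at linear order in E/E_QG, two photons whose energies differ by ΔE and which travel a distance D arrive separated by roughly

```latex
\Delta t \;\approx\; \frac{\Delta E}{E_{\rm QG}}\,\frac{D}{c}
```

For ΔE ~ 10 GeV, D ~ 1 Gpc, and E_QG near the Planck scale (~10^19 GeV), this gives Δt ~ 0.1 s, comparable to GRB variability timescales; a careful analysis replaces D/c with an integral over the cosmological expansion history.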
2/13/08
Michael Stamatikos
I will present results based upon a synergistic methodology whose primary objective encompasses probing discrete gamma-ray burst (GRB) high-energy particle astrophysics via a broad-band, multi-messenger paradigm. The interface between leptonic and electromagnetic emission will be explored using the theoretical interpretation and correlative observations of high energy telescopes such as (i) Swift's Burst Alert Telescope (BAT), (ii) the Gamma-Ray Large Area Space Telescope (GLAST) Burst Monitor (GBM) and (iii) the Antarctic Muon and Neutrino Detector Array (AMANDA)/IceCube. Multi-wavelength analysis results include temporal studies of Swift GRBs featuring GRB 060218 in the context of the lag-luminosity relation, and simulations of joint photon energy spectra using Swift-BAT and GLAST-GBM. Probes for multi-messenger leptonic emission signatures via neutrino astronomy include modeling the correlated (TeV-PeV) muon neutrino flux from discrete GRBs featuring GRB 030329 in the context of canonical fireball phenomenology.
# Why Measure the Mean Curvature Even Better and How to Do It
2/19/08
Lloyd Knox (UC Davis)
This will be a two-part talk. First I will discuss precision measurements of the mean curvature of the Universe: their motivation as powerful tests of inflation and the string theory landscape, and how well surveys motivated by dark energy can be used for detecting small amounts of mean curvature. The second part will be about some recent developments in tools for making cosmological parameter inferences from such surveys.
# Observations of the Crab Nebula and Pulsar with VERITAS
2/26/08
Ozlem Celik (UCLA)
VERITAS, an array of four 12m diameter Cherenkov telescopes, is a ground-based observatory designed to explore the very high energy gamma ray sky in the energy band between 100 GeV and 50 TeV. Observations of the Crab Nebula, which is accepted as the standard candle in gamma ray astronomy, have proven to be the best tool to calibrate and to characterize the performance of a Cherenkov telescope. Scientifically, it is interesting to measure its energy spectrum to confirm its power law nature across the VHE region and to search for pulsed emission from the Crab Pulsar at energies beyond the 10 GeV upper limit of the EGRET pulsar detection. With these motivations, we have observed the Crab extensively in the 2006-2007 season during the VERITAS 2- and 3-telescope commissioning phases. Using this data set I have reconstructed the energy spectrum of the steady emission from the Nebula. I have also measured the optical pulsed signal from the pulsar and have obtained an upper limit for the pulsed emission at gamma ray energies. In my talk, I will present the results of these studies.
# Cosmic Rays and MINOS: Physics in the Background
3/3/08
Eric Grashorn
The Main Injector Neutrino Oscillation Search (MINOS) is a long baseline neutrino oscillation experiment designed to make a precision measurement of Δm²₂₃. Cosmic ray muons are a source of background for such an experiment, but they are an isotropic data source with many calibration and scientific uses. MINOS has measured the atmospheric muon charge ratio to very high precision, as well as the seasonal variation in the underground muon rate. New models describing both physical effects have been developed, and both are shedding new light on the K/π ratio in cosmic ray air showers. The shadow of the moon is an important analysis to establish the resolution and absolute pointing capability of a cosmic ray detector, and it can also be used to put limits on the anti-matter content of cosmic ray primaries.
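For orientation (a textbook two-flavor expression, not a description of the MINOS analysis itself), the muon-neutrino survival probability that makes a precision measurement of Δm²₂₃ possible over a fixed baseline L is

```latex
P(\nu_\mu \to \nu_\mu) \;\approx\; 1 - \sin^2(2\theta_{23})\,
\sin^2\!\left( \frac{1.27\,\Delta m^2_{23}\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]} \right)
```

Fitting the energy dependence of the oscillation dip over the 735 km Fermilab-to-Soudan baseline pins down Δm²₂₃.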
# A New Channel for Detecting Dark Matter Substructure in Galaxies
3/18/08
Charles Keeton (Rutgers University)
The Cold Dark Matter paradigm predicts that galaxy dark matter halos contain hundreds of bound subhalos left over from the hierarchical galaxy formation process. Testing this prediction provides unique access to the astrophysics of galaxy formation on small scales, and perhaps even the fundamental nature of dark matter. Gravitational lens flux ratios have been used to place the first constraints on dark matter substructure in galaxies out to redshift z~1. Now I propose to open a new frontier in substructure studies with gravitational lens time delays. Time delays offer several distinct advantages. The theory of "time delay millilensing" is rich and tractable. Time delays provide access not only to the total amount of substructure, but also to the distribution of subhalo masses. Good data are attainable now, and future large samples will allow us to measure substructure as a function of galaxy mass, redshift, and environment.
# Searches for Neutrino Oscillations and Dark Matter with IceCube
3/25/08
Carsten Rott
The IceCube Neutrino Observatory, currently under construction at the South Pole, has just finished a phenomenal season and now has half of the detector completed. It is a multi-purpose ice-Cherenkov detector, which has been taking data since the deployment of its first string in January 2005. After a brief introduction to the IceCube experiment and a summary of the main results, this talk will especially focus on the search for dark matter, neutrino oscillations, and other analyses in IceCube's low-energy regime (~30 GeV-1 TeV range). The talk will conclude with an outlook into possible detector extensions and discovery prospects.
# Standard Candle, Standard Yardstick and Non-standard Gravity
4/1/08
Lam Hui
I will discuss four topics: (1) how correlated peculiar flows constitute a surprisingly important source of error for supernova cosmology, (2) how gravitational lensing introduces an observable anisotropy to the galaxy correlation function, and how it impacts baryon acoustic oscillation measurements, (3) how large scale structure data already put interesting constraints on theories of modified gravity, in particular ruling out the popular DGP model at the 3 sigma level, (4) how viable gravity models can be constructed which exhibit a see-saw behavior: a large cosmological constant yielding a small Hubble constant.
# Exploring the High-z Frontier --- Galaxies at z~6 and beyond
4/8/08
Haojing Yan
The current status of the study of galaxies at z~6 will be reviewed from an observer's point of view, with an emphasis on the implications for reionization. A couple of key open questions at z~6 will also be discussed. Progress in searching for galaxies at z>7 will be reported.
# From quasars to dark energy : Adventures with the clustering of luminous red galaxies
4/15/08
I will discuss some of the cosmological applications of a survey of luminous red galaxies (LRGs), from constraining the clustering and properties of low redshift quasars to a new survey to measure the expansion rate of the Universe with baryon oscillations. Starting on small scales, I will discuss the clustering of LRGs around z < 0.6 quasars in the SDSS, and the constraints this places on the environments of quasars. I will then switch to scales two orders of magnitude larger, and discuss the Baryon Oscillation Spectroscopic Survey -- a next generation survey to measure baryon oscillations, yielding 1% distance measures to z=0.35 and z=0.6.
# Probing Small-Scale Structure in Galaxies with Strong Gravitational Lensing
4/29/08
Arthur Congdon
We use gravitational lensing to study the small-scale distribution of matter in galaxies. Roughly half of all observed four-image quasar lenses have image flux ratios that differ from the values predicted by simple lens potentials. We show that smooth departures from elliptical symmetry fail to explain anomalous radio fluxes, regardless of the assumed multipole truncation order. Our work strengthens the case for dark matter substructure, which is predicted by numerical simulations to constitute a few percent of a galaxy's mass. Our results have important implications for the "missing satellites" problem, i.e., the discrepancy between the predicted and observed numbers of dwarf satellites in galaxy halos. To complement flux-ratio studies, we consider how time delays between lensed images can be used to identify lens galaxies that contain small-scale structure. We derive an analytic relation for the time delay between the close pair of images in a "fold" lens, and perform Monte Carlo simulations to investigate the utility of time delays for studying small-scale structure in realistic lens populations. We compare our numerical predictions with systems that have measured time delays and discover two anomalous lenses. We conclude that both flux ratios and time delays in lens systems provide powerful complementary probes of cosmological theory.
# Cosmology from gravitational-wave standard sirens
5/06/08
Daniel Holz
We discuss the use of gravitational wave sources as probes of cosmology. The inspiral and merger of a binary system, such as a pair of black holes or neutron stars, is extraordinarily bright in gravitational waves. By observing such systems it is possible to directly measure an absolute distance to these sources out to very high redshift. When coupled with independent measures of the redshift, these "standard sirens" enable precision estimates of cosmological parameters. We review proposed GW standard sirens for the LIGO and LISA gravitational-wave observatories. Percent-level measurements of the Hubble constant and the dark energy equation-of-state may be feasible with these instruments.
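Why an inspiral yields an absolute distance can be sketched with the standard quadrupole-order relations (illustrative background, not taken from the talk): the observed frequency evolution fixes the chirp mass, and the strain amplitude then fixes the luminosity distance,

```latex
\dot f = \frac{96}{5}\,\pi^{8/3}\left(\frac{G\mathcal{M}_c}{c^3}\right)^{5/3} f^{11/3},
\qquad
h \sim \frac{4}{d_L}\left(\frac{G\mathcal{M}_c}{c^2}\right)^{5/3}
\left(\frac{\pi f}{c}\right)^{2/3}.
```

Measuring f, ḟ, and h therefore determines d_L with no astronomical distance ladder; an independent redshift for the host galaxy then supplies a point on the distance-redshift relation.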
# How Baryonic Physics Influences Efforts to Exploit Weak Lensing as a Dark Energy Probe
5/17/08
Andrew Zentner
Cosmologists are faced with several profound puzzles. I will discuss two of them, namely the mystery of the dark energy and the process of galaxy formation. The expansion rate of the Universe is accelerating. The causative agent of this expansion is commonly referred to as the Dark Energy. Though it is ten years since accelerated expansion became firmly established as a feature of our Universe, we know little about the dark energy. At present, observations indicate that dark energy is consistent with Einstein's cosmological constant. Any deviations from the phenomenology of a cosmological constant are subtle and difficult to measure. However, large ongoing and future projects such as the Dark Energy Survey, the Large Synoptic Survey Telescope, and a Joint Dark Energy Mission should allow us to constrain the properties of dark energy more than an order of magnitude more stringently than current observations can. These efforts may limit strongly any deviations from a cosmological constant, constrain models of acceleration due to deviations from General Relativity, or indicate the presence of dynamical dark energy. The information we receive about the contemporary Universe comes from galaxies and the stars and stellar explosions that occur within them, yet the process of galaxy formation within the standard cosmology is poorly understood. I will review some of the methods that will be used to constrain dark energy, but I will focus on weak gravitational lensing as a dark energy probe. Although weak lensing measurements are notoriously difficult, this method has the greatest potential statistical leverage on dark energy (though systematics remain a concern). Many recent studies have suggested that the fact that galaxy formation is poorly understood theoretically will thwart forthcoming efforts to constrain dark energy through weak lensing.
I will show how it is possible both to constrain dark energy properties and to learn about the process of galaxy formation simultaneously through weak lensing measurements. This possibility is interesting and may expand the scientific reach of several current and future projects.
# The Eddington Limit in Cosmic Rays: An Explanation for the Observed Faintness of Starbursting Galaxies
5/27/08
Aristotle Socrates
In terms of their energetics, interstellar cosmic rays are an insignificant by-product of star formation. However, due to their small mean free path, their coupling with interstellar gas is absolute in that they are the dominant source of momentum deposition on galactic scales. By defining an Eddington Limit in cosmic rays, we show that the maximum luminosity of bright starbursting galaxies is capped by the production and subsequent expulsion of cosmic rays. This simple argument may explain why galaxies are faint in comparison to quasars.
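For comparison, the classical photon Eddington limit against which the cosmic-ray analogue is defined balances radiation pressure on ionized gas against gravity (a standard result, quoted here for orientation):

```latex
L_{\rm Edd} = \frac{4\pi G M m_p c}{\sigma_T}
\;\approx\; 1.26\times 10^{38}\,\left(\frac{M}{M_\odot}\right)\ \mathrm{erg\,s^{-1}}.
```

The cosmic-ray version described in the abstract replaces the Thomson cross section with the much stronger effective coupling of cosmic rays to magnetized interstellar gas, lowering the limiting luminosity accordingly.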
# An Alternative Origin for Hypervelocity Stars
9/09/08
Hypervelocity stars (HVS) are usually assumed to originate from the gravitational interaction of stellar systems with the supermassive black hole at the center of the Galaxy. We examine the latest HVS compilation and find peculiarities that are unexpected in this black hole-ejection scenario. We use numerical simulations to show that disrupting dwarf galaxies may contribute halo stars with velocities up to and sometimes exceeding the nominal escape speed of the system. These stars are arranged in a thinly-collimated outgoing "tidal tail" stripped from the dwarf during its latest pericentric passage. We speculate that some HVS may therefore be tidal debris from a dwarf recently disrupted near the center of the Galaxy.
# The Star Formation History and the Neutrino Background
9/25/08
Shunsaku Horiuchi
University of Tokyo
The emission of neutrinos from a core-collapse supernova was dramatically confirmed in 1987, when neutrinos were detected from SN1987A. However, in the 20 years since this event, there have been no close enough supernovae to directly detect in neutrinos. The diffuse supernova neutrino background on the other hand provides an immediate opportunity to study supernova neutrinos. A critical input in this pursuit has been astronomical - the cosmological rate of core-collapse supernovae, which is directly related to the star formation rate. Recently, our understanding of the star formation rate and its evolution has greatly improved, and it is timely to address the prospects the neutrino background provides to study stellar physics. In this talk I will discuss the latest astronomical inputs and present results of checks on its reliability and impacts it has on the neutrino background.
# The Evolution and Outflows of Hyper-Accreting Disks: Implications for Compact Object Mergers, Short Gamma-Ray Bursts, and Heavy Element Nucleosynthesis
9/30/08
Brian Metzger
Massive, compact accretion disks are thought to form in a number of astrophysical events, including the merger of two neutron stars (NSs), the merger of a NS with a black hole, and following the accretion-induced collapse (AIC) of a white dwarf to a NS. These disks, termed "hyper-accreting" due to their large accretion rates of up to several solar masses per second, may power the relativistic jets which produce gamma-ray bursts (GRBs). In particular, accretion following the merger of two compact objects (NS-NS or NS-BH) is a popular model for the production of short-duration GRBs, an idea which has received recent support due to the localization of some short GRBs in host galaxies with little ongoing star formation. I will discuss calculations of the evolution of viscously-spreading, hyper-accreting disks, emphasizing important transitions in the disk's thermodynamic properties and their implications for the late-time X-ray activity observed following some short GRBs. I will also focus on the properties of slower outflows from the disk and their ability to synthesize heavy radioactive elements, the decay of which may power an optical or infrared transient ~ 1 day following the merger. I shall further argue that late-time outflows from the disk synthesize neutron-rich isotopes which are rare in our solar system, from which one can place interesting constraints on the short GRB beaming fraction and the rate of compact object mergers in our galaxy. In addition, I will show that late-time outflows from accretion disks produced by the AIC of a white dwarf may have important observational consequences for these thus far unidentified events.
# The Merger Histories of LCDM Galaxies: Disk Survivability and the Deposition of Cold Baryons via Mergers
10/14/08
Kyle Stewart
We employ a high resolution LCDM N-body simulation to study the merger histories of galaxy-halos and the evolution of the merger rate with redshift. We confirm the existence of a 'universal' halo merger rate, and provide a slightly modified fitting formula as a function of halo mass, redshift, and merger mass ratio. We find that the majority of Milky Way-size halos have experienced at least one major merger (defined either as mass ratio > 1:3, or in terms of absolute mass m > 10^11 Msun/h), which raises concerns about the survivability of disk dominated galaxies in a LCDM universe. We go on to explore the baryonic content of these mergers using direct empirical constraints to assign statistically likely stellar and gas masses to the central galaxies within the halos of our simulations. We find that the vast majority of mergers into Milky-Way size halos at z>1 are very gas rich (gas fraction > 50%). If we presume that gas-rich mergers such as these may result in disk dominated galaxies (as has been suggested based on direct numerical simulations), we find that only 20% of Milky Way-size galaxies have experienced a "destructive" gas poor major merger since z=2, suggesting a possible explanation to the problem of disk survivability. We also measure the total deposition of cold baryons into galaxies via mergers and find that Milky Way-size galaxies have accreted approximately 30% of their current cold baryonic mass directly from major mergers since z=2, the majority of which is gaseous. Whether this deposited material is labeled a "cold flow" is subject to definition, but it seems almost empirically inevitable that direct cold gas deposition of this kind must occur.
# Dark Matter and the Highest Redshift Galaxies: Revealing the Invisible with 2 Cosmic Supercolliders the "Bullet Cluster" 1E0657-56 and MACSJ0025-1222
10/28/08
The cluster of galaxies 1E0657-56 has been the subject of intense research in the last few years. This system is remarkably well-suited to addressing outstanding issues in both cosmology and fundamental physics. It is one of the hottest and most luminous X-ray clusters known, and is unique in being a major supersonic cluster merger occurring nearly in the plane of the sky, earning it the nickname "the Bullet Cluster". Recently we have discovered a new Bullet-like cluster, MACSJ0025-1222. Although it does not contain a low-entropy, high-density hydrodynamical "bullet," this cluster exhibits many similar properties to the Bullet Cluster, and so we also use it to study dark matter. In this talk I will present our measurements of the composition of both systems (using gravitational lensing), show the (independent) evidence for the existence of dark matter, and describe limits that can be placed on the intrinsic properties of dark matter particles. In doing so, I will explain how these clusters offer a serious challenge to MOdified Newtonian Dynamics (MOND) theories. Finally I will conclude with some preliminary results we have on using the Bullet Cluster as a "cosmic telescope" to explore the Universe in its infancy.
# Collective Oscillations of Supernova Neutrinos
11/4/08
Basudeb Dasgupta
Tata Institute
Neutrinos oscillate in very unusual and interesting ways when their number densities are large, as in the case of neutrinos emitted from a core-collapse supernova. We present a formalism in a general three-flavor framework that describes the peculiar flavor dynamics due to collective effects. We show how the flavor evolution may be "factorized" into two-flavor oscillations with hierarchical frequencies. We apply these ideas to a typical SN, where we show the interplay between collective and MSW effects, and predict some interesting signatures observable at large neutrino detectors, e.g. hierarchy determination at extremely small θ13.
# Connecting Galaxies, Halos, and Star Formation Rates Across Cosmic Time
11/11/08
Risa Wechsler
Stanford University
Recent observational and theoretical studies have indicated that galaxy luminosities and stellar masses are tightly correlated with the masses of their dark matter halo hosts. I will describe a powerful approach to understanding galaxy clustering which uses this tight correlation to connect galaxies to their host dark matter halos. This model is able to explain a variety of statistics of the galaxy distribution, including the luminosity, scale dependence, and redshift dependence of galaxy clustering. Based on this insight, I will present a new observationally-motivated model for understanding how halo masses, galaxy stellar masses, and star formation rates are related, and how these relations evolve with time. This model indicates that a wide variety of galaxy properties, including trends with environment, are set primarily by the masses of their dark halos.
# The Challenge to Unveil the Microscopic Nature of Dark Matter
11/25/08
Scott Watson
University of Michigan
Despite the success of modern precision cosmology in measuring the macroscopic properties of dark matter, its microscopic nature still remains elusive. The LHC is expected to probe energies relevant for testing theories of electroweak symmetry breaking, and as a result may allow us to produce dark matter for the first time. Other indirect experiments, such as PAMELA, offer additional ways to probe the microscopic nature of dark matter through observations of cosmic rays. Results from a number of indirect detection experiments, along with hints from fundamental particle theories, suggest that our old views of how dark matter was created may need to be revisited. I will discuss both theoretical and experimental motivations for a new theory of microscopic dark matter -- and the implications for future experiments such as the LHC.
# The angular power spectrum of the diffuse gamma-ray background as a probe of galactic dark matter substructure
12/9/08
CCAPP Ohio State University
Recent work has shown that dark matter annihilation in galactic substructure will produce diffuse gamma-ray emission of remarkably constant intensity across the sky, and in general this signal will dominate over the smooth halo signal at angles greater than a few tens of degrees from the galactic center. The large-scale isotropy of the emission from substructure suggests that it may be difficult to extract this galactic dark matter signal from the extragalactic gamma-ray background. I will show that dark matter substructure induces characteristic small-scale anisotropies in the diffuse emission which may provide a robust means of distinguishing this component. I will present the angular power spectrum of the emission from galactic dark matter substructure for several models of the subhalo population, and show that features in the power spectrum can be used to infer the presence of substructure. The anisotropy from substructure is substantially larger than that predicted for the extragalactic gamma-ray background, and consequently the substructure signal can dominate the measured angular power spectrum even if the extragalactic background emission is a factor of 10 or more greater than the emission from dark matter. I will show that for many scenarios a measurement of the angular power spectrum by Fermi will be able to constrain the abundance of substructure in the halo.
# The Milky Way with SDSS: Decoding The Rosetta Stone of Galaxy Formation
1/6/09
Mario Juric
Princeton
The distribution, abundances, and kinematics of stars in the Galaxy carry information about the hierarchical assembly, evolution, and structure of its luminous and dark components. However, small, biased and mostly local samples have traditionally hindered the mining and use of this rich resource. The situation has changed dramatically with large, precise, multiband, photometric and spectroscopic surveys such as the Sloan Digital Sky Survey (SDSS). With SDSS, we can directly and accurately map the stellar number density, metallicity, and kinematics, measure the scales of Galactic components, observe the relationships between their kinematic and physical properties, as well as identify and characterize disrupted remnants and dwarf satellites in the Galactic halo. All of these provide new and valuable insights on the process of assembly as well as the present day state of the Galaxy. This talk will review the results of photometric Milky Way studies with the SDSS, touching on future prospects and inferences about cosmology and galaxy formation in general. Discussion of the latter two is especially timely, given the imminent arrival of 2nd generation surveys such as PanSTARRS, SkyMapper, DES, SDSS-III, GAIA and LSST. These surveys, covering 10-100x larger volumes with significantly improved accuracy, have the potential to revolutionize the theory of galaxy formation and near field cosmology in the next decade.
# The Destruction of Thin Stellar Disks Via Cosmologically Common Mergers
1/8/09
Chris Purcell
U. of California, Irvine
Analytic and numerical investigations strongly indicate that the predominant mode of mass delivery into CDM halos involves subhalos roughly one-tenth as massive as their host. Cosmological simulations suggest that around 70% of Galactic-scale halos (M_host ~ 1e12 M_sun) have undergone a 1:10 merger event in the last 10 Gyr involving a large satellite galaxy (M_sat ~ 1e11 M_sun) several times as massive as the galactic disk itself. The survival and stability of thin stellar disks against these common and potentially destructive mergers has therefore been of prime interest in the field of galaxy formation and evolution. I present results from the highest-resolution suite of collisionless simulations performed to date involving 1:10 mass-ratio mergers motivated by cosmological conditions, quantifying the morphological and dynamical transformation of cold, thin galactic disks into hot and extremely thick stellar systems. I discuss the ramifications of this disk destruction for LCDM models of galaxy formation, and assess the future endeavors for analysis and simulation which may alleviate this concern.
# Cosmology with the South Pole Telescope
1/13/09
Jeff McMahon
Univ. of Chicago
The South Pole Telescope is a 10-meter, millimeter wave telescope optimized for observation of the cosmic microwave background at arcminute resolution. In the 2006/2007 austral summer we assembled this telescope at the geographic South Pole and commissioned a sensitive 1000 element three-frequency camera. This instrument has now collected two years of survey data, representing the most sensitive high resolution maps of the CMB to date. Using these data we published the first detection of new galaxy clusters selected with the Sunyaev-Zel'dovich (SZ) effect. Analysis of the completed survey will yield a large catalogue of SZ selected clusters which will be used to constrain the equation of state of dark energy. The SPT data will further constrain cosmological parameters through a measurement of the high-l CMB power spectrum. In addition, we are currently building a new polarization sensitive camera (SPTpol) to be deployed in 2011. SPTpol will measure the polarization of the CMB to provide constraints on neutrino mass and, potentially, the energy scale of inflation. In this talk I provide an overview of the SPT instrument, and discuss the prospects for cosmology with these data.
# The Early Reionization of Voids
1/15/09
Kristian Finlator
Univ. of Arizona
I introduce a new method for computing cosmological reionization by coupling cosmological hydrodynamic simulations with an accurate solution to the moments of the radiative transfer equation. Applying this method to precomputed density and emissivity fields reveals that reionization proceeds rapidly from the overdense regions that host sources into voids before "mopping up" self-shielded filamentary regions. Our finding that most filamentary regions reionize late owes to a low-mass cutoff in the ratio of halo mass to ionizing luminosity that is expected in situations where low-mass halos form stars inefficiently or have low ionizing escape fractions. Previous works have largely overlooked the possibility of such a reionization topology because they implicitly made assumptions about the bias of the emissivity field. I discuss implications of the topology of reionization for current and upcoming observations.
# Co-evolving star formation and AGN activity within the zCOSMOS density field
1/20/09
John Silverman
ETH Zurich
An understanding of the influence of environment on both AGN activity and star formation is crucial to determine the physical mechanism(s) regulating the coeval growth of supermassive black holes (SMBHs) and the galaxies in which they reside. Deep multi-wavelength surveys (e.g. COSMOS) now offer the tools to explore in detail such studies up to z~1 by providing least biased samples of AGN, a characterization of the underlying parent sample of galaxies, and an assessment of the local environment. I will present new results based on 7543 zCOSMOS galaxies with quality optical spectra and XMM-Newton observations that identify those galaxies hosting X-ray selected AGNs including the obscured population. We specifically measure star formation rates of AGN host galaxies, growth rates of these SMBHs, and the influence of the environment on triggering AGN activity. Our findings shed light on the key ingredients for a galaxy to harbor an actively accreting SMBH and possible migration onto the local SMBH-bulge relations.
# Cluster Detection in Sunyaev-Zel'dovich Surveys
1/21/09
Laurie Shaw
McGill Univ.
Measuring the redshift evolution of the cluster mass function provides us with a sensitive means of constraining cosmological parameters. Sunyaev-Zel'dovich Effect (SZE) surveys are currently searching for clusters via their imprint on the CMB. In the first part of my talk I will discuss work done to determine the effectiveness of SZ cluster-finding algorithms in detecting clusters and measuring their integrated flux -- a quantity predicted to be tightly correlated with cluster mass -- using synthetic sky-maps constructed from high-resolution cosmological 'lightcone' simulations. In the second part of my talk I will present recent results from the South Pole Telescope, including the first blind detections of galaxy clusters via the SZE.
# Falsifying Paradigms for Cosmic Acceleration
1/22/09
Michael Mortonson
Univ. of Chicago
Future measurements of cosmic distances and growth can test and potentially falsify classes of dark energy models including the cosmological constant and quintessence. The distance-redshift relation measured by future supernova surveys will strongly constrain the expansion history of the universe. Under the assumption of a particular dark energy scenario, limits on the expansion rate place bounds on the evolution of the growth of large-scale structure. I will discuss the anticipated predictions for growth and expansion observables from a future SNAP-like supernova sample combined with CMB data from Planck. Although simple models like flat LCDM are easiest to falsify, strong consistency tests exist even for general dark energy models that include dynamical dark energy at low redshift, early dark energy, and nonzero spatial curvature.
# N-Body Simulations and Photometric Redshifts
1/28/09
Hans Stabenau
Univ. of Pennsylvania
Recent measurements have shown that the expansion of the universe is accelerating. In order to extrapolate GR to fit the properties of the universe on horizon scales, a strange form of energy density ("dark energy") is required. An alternative to dark energy is a modification of GR that accounts for accelerating expansion. In this talk, I will discuss my work on predicting large-scale structure formation in an alternative gravity model by N-body simulation. A related problem that I have worked on is efficiently obtaining accurate redshifts for galaxies using photometric redshifts with surface brightness priors. Finally I will talk briefly about my ongoing work on analyzing data from the Balloon-borne Large Aperture Submillimeter Telescope (BLAST) experiment.
# Astrophysical Signatures of Dark Matter Annihilation
1/29/09
Greg Dobler
Harvard
One of the most significant outstanding unknowns in cosmology is the fundamental nature of dark matter. One hypothesis is that the dark matter consists of Weakly Interacting Massive Particles (WIMPs) that are a thermal relic of the Big Bang. Generic WIMP models predict self-annihilation cross sections and masses that not only give roughly the correct relic density, but also have astrophysical consequences which may be observable by current and near-term experiments. I will discuss the implications for WIMP annihilation in the context of three recent observations: the anomalously hard synchrotron radiation centered on the Galactic center that is observed by WMAP (the WMAP "haze"), the rise in the PAMELA local positron spectrum above 10 GeV, and the excess electrons centered on 650 GeV in the ATIC local electron spectrum. In addition, I will show that, not only is the spectrum of the haze from 23 to 33 GHz consistent with numerous annihilation channels, but the existence of the haze electrons implies an inverse Compton (scattering of starlight photons by the electrons which produce the haze) signal towards the Galactic center which may be observable by the Fermi Gamma-Ray Space Telescope.
# Estimating cosmological parameters with cosmic shear
2/13/09
Tim Eifler
Univ. of Bonn
In recent years weak lensing by the large-scale structure of the Universe, called cosmic shear, has become a valuable probe in cosmology. Large upcoming surveys such as KIDS, Pan-STARRS, DES, SNAP/JDEM, and Euclid will improve the quality of cosmic shear data significantly, enabling us to measure its signal with less than 1% statistical error. In order to obtain cosmological parameters from these high precision data properly, there remain issues to be addressed. On the observational side, systematic errors, mainly from insufficient PSF-correction, must be reduced, and a possible contribution to the shear signal coming from intrinsic alignment or shape-shear correlation must be excluded. On the theoretical side, we need accurate predictions for P_delta(k) and precise statistical methods to infer cosmological parameters. In this seminar talk I review the basics of cosmic shear with the focus on how to constrain cosmological parameters. I illustrate the impact of cosmic shear covariances on the parameter estimation and present an improved likelihood analysis for cosmic shear data. As a second topic, I address the issue of E- and B-modes in the shear signal and outline a new method (the ring statistics) to separate E-modes from B-modes. I explain advantages and problems of the ring statistics and compare its information content to that of other second-order cosmic shear measures. Finally, I present results from the first shear measurement using the ring statistics on data of the CFHTLS survey.
# Surveying the TeV sky with Milagro and HAWC
2/17/09
Tyce DeYoung
Penn. State U.
The past few years have seen a wealth of new results in TeV astronomy, produced by wide field-of-view air shower detectors such as Milagro and by air Cherenkov telescopes. These results are shedding light on very energetic objects such as the accelerators of cosmic rays, but also contain puzzling surprises, such as fine structure in the arrival directions of TeV cosmic rays. In the near future, the High Altitude Water Cherenkov (HAWC) observatory will provide an order of magnitude better sensitivity than Milagro and, in conjunction with other new instruments such as IceCube and Fermi (GLAST), will provide an even richer understanding of the very high energy universe.
# Signatures of Dissipation and Relaxation in the Stellar Orbit Structure of Merger Remnants (Galactic Archaeology)
2/24/09
Loren Hoffman
Northwestern U.
Many of the observed properties of elliptical galaxies indicate that they had a violent formation history. They are dynamically hot systems, with high velocity dispersions dominating over ordered stellar streaming. Gas-rich tidal tails, and rings and shells indicative of the recent disruption of a spiral galaxy, often surround systems otherwise resembling ordinary giant ellipticals. These observations led Toomre & Toomre (1972) to suggest that elliptical galaxies are the products of mergers between spirals, a hypothesis that today fits naturally into the context of hierarchical structure formation. The violent relaxation in galaxy mergers is incomplete - the stellar distribution is scrambled enough to produce hot, ellipsoidal systems resembling early-type galaxies, but substantial memory of the initial conditions is retained as features in the remnant distribution function. This "fine structure" serves as a fossil record of the galaxy's formation history. High-resolution integral field spectroscopy with instruments such as SAURON and OASIS has enabled us to reconstruct the full 3D stellar orbital distributions of nearby galaxies, taking "galactic archaeology" to a whole new level. In this talk I will present simulation work aimed at meeting the challenge of parsing this new profusion of information on the buildup of local galaxies. In particular I will focus on the signatures of gas dissipation in the remnant orbit structure, and on the unique characteristics of "dry" merger remnants, produced by the re-merger of two gas-poor ellipticals.
# The Quest for Dark Matter: from hints to discovery
3/16/09
Gianfranco Bertone
IAP, France
The possibility of explaining the positron and electron excess recently found by the PAMELA and ATIC collaborations in terms of dark matter (DM) annihilation or decay has attracted considerable attention. However, DM has been invoked in the past to explain many other experimental results, such as the 511 keV line detected by INTEGRAL, the EGRET 'bump', or the WMAP 'Haze'. It is therefore natural to ask what we can actually learn from these experiments, and whether it is possible to obtain conclusive evidence for DM from Particle Astrophysics experiments. To answer these questions, I will review the prospects to detect "smoking-gun" features that would unambiguously point to DM, and argue that in their absence, conclusive evidence can probably be obtained only by combining the results from accelerator, direct, and indirect searches.
# Evolution and modelling of dwarf spheroidal galaxies
3/24/09
Jaroslaw Klimentowski
Nicolaus Copernicus Astronomical Center
Dwarf spheroidal galaxies of the Local Group are key objects for understanding many aspects of current cosmology. Their dark matter content is much higher than in typical galaxies, and their proximity allows studies with more sophisticated methods than are available for classical cosmological objects like clusters. Unfortunately we still lack knowledge about how they formed and evolved. The missing satellites problem has shown that modern simulations cannot correctly explain the numbers of dwarf galaxies observed on the sky. Modelling of their dark matter halos differs between authors even when using the same observational data, which shows that we still rely on different, often doubtful assumptions. In this talk I will present results of our work on dwarf spheroidal galaxies. We have studied different scenarios of formation and evolution of satellites based on a cosmological simulation. We have also studied in detail the tidal evolution of a disk galaxy in the Milky Way potential. We show how a stellar disk can be transformed into a spheroid and how this scenario can be confirmed by observations of real objects. We show how tidal debris can affect mass modelling and we test methods to do it properly. Finally we apply these methods to real galaxies and draw conclusions.
# The Missing Baryons and Quasars Missing in Action
3/31/09
Shirley Ho
Berkeley National Lab
I will present a new method for finding the missing baryons by generating a template for the kinematic Sunyaev-Zel'dovich effect. The template is computed from the product of a reconstructed velocity field with a galaxy field; we find that the combination of a galaxy redshift survey such as SDSS and a CMB survey such as ACT and PLANCK can detect the kSZ, and thus the ionized gas, at significant signal-to-noise. Unlike other techniques that look for hot gas or metals, this approach directly detects the electrons in the IGM through their signature on the CMB. The signal-to-noise ratio for various combinations of experiments will be shown.
I will also discuss preliminary results on cross-correlation between Quasars (from SDSS) and WMAP, which puts upper limit on the average amount of energy that is imparted onto the gas surrounding the quasars.
# Fundamental Physics from the Sky
4/7/09
Stefano Profumo
UC Santa Cruz
Can we learn about New Physics with astronomical and astro-particle data? Understanding how this is possible is key to unraveling one of the most pressing mysteries at the interface of cosmology and particle physics: the fundamental nature of dark matter. Rapid progress may be within grasp in the context of an approach which combines information from high-energy particle physics with cosmic-ray and traditional astronomical data. I discuss recent puzzling data on cosmic-ray electrons and positrons and their interpretation. I show how the Fermi Space Telescope will soon shed light on those data as well as potentially on several dark matter particle properties. I then introduce a novel approach to particle dark matter searches based on the complementarity of astronomical observations across the electromagnetic spectrum, from radio to X-ray and to gamma-ray frequencies.
# The phase-space structure of dark matter halos
4/14/09
Monica Valluri
Univ. of Michigan
LambdaCDM simulations of structure formation have reached unprecedented numerical resolution in recent years. Yet some aspects of the evolution of dark matter halos are still not well understood. I will present results of a study of the evolution of the coarse-grained phase space distribution function in LCDM halos with particular focus on the importance of mixing in collisionless evolution. I will also describe results of a recent study of the evolution of the orbital properties of dark matter particles in response to the growth of baryonic components. The goal of this talk is to provide some insights into the evolution of dark matter halos using tools of classical dynamics.
# Clues about Disk Evolution from the Outermost Reaches of Galaxies
4/28/09
Rok Roskar
Univ. of Washington
Outer disks of galaxies defy our understanding of disk formation. Their profiles deviate from simple exponentials, they are at once the sites of current galaxy assembly and places where a galaxy's history can be effectively preserved owing to long dynamical times. We investigate the nature of outer disks with an N-body/SPH approach. We simulate a suite of idealized models representative of galaxy formation through dissipational collapse after the last major merger. We find that a disk break is seeded by a drop in star formation density, while the outer disk is populated almost exclusively by stars that migrated there from the interior on surprisingly circular orbits. The degree of such radial migrations is large and unexpected. I will discuss the theoretical basis for this phenomenon and present some observational evidence that lends support to the theory. I will also briefly chart out some far-reaching implications of such migrations for studies ranging from the solar neighborhood to extragalactic stellar populations.
# The Stellar Population Synthesis Technique
5/5/09
Charlie Conroy
Princeton
Price Prize Lecture
The SPS technique is deceptively simple. Relying on stellar evolution calculations, stellar spectral libraries, and dust models, practitioners of SPS aim to convert the observed spectral energy distributions of galaxies into physical properties. Knowledge of these physical properties, which range from total stellar masses to star formation rates and metallicities, is essential for understanding the formation and evolution of galaxies. The SPS framework thus provides a fundamental link between theory and observations. Despite its importance, a systematic investigation of the uncertainties in SPS is lacking. In this talk I will describe ongoing work exploring the panoply of uncertainties in SPS, including uncertainties in stellar evolution, dust models, and initial mass functions, amongst others, and their propagation into the derived physical properties of galaxies. I will also discuss attempts to constrain these uncertain aspects with existing and future observations.
# An Update from the South Pole Telescope
5/19/09
Jeff McMahon
University of Chicago
The South Pole Telescope (SPT) is a 10-meter telescope optimized for arcminute scale observations of the cosmic microwave background (CMB). Construction and commissioning were completed in January 2007, and since then we have acquired more than two years of data. Using these observations we recently published the first detection of galaxy clusters selected with the Sunyaev-Zel'dovich (SZ) effect. Through the SZ effect, we will create a mass-limited catalog of galaxy clusters out to high redshift. In addition, the survey data will provide an improved measurement of the temperature power spectrum of the CMB out to arcminute scales. In this talk I will describe the telescope and instrumentation, provide an update on current and forthcoming results, and discuss plans for future science with SPT.
# Indirect Dark Matter Detection -- Robust Bounds on Annihilation to Electrons, Neutrinos and Gamma Rays
5/26/09
Nicole Bell
Melbourne University
We examine dark matter annihilation in galaxy halos to neutrinos, gamma rays, and e+e-. We show that annihilation to neutrinos, the least detectable final state, defines a robust upper bound on the total cross section and implies that annihilation cannot significantly modify dark matter halo density profiles. Dark matter annihilation into charged particles is necessarily accompanied by gamma rays produced via radiative corrections. Internal bremsstrahlung from final state charged particles produce hard gamma rays up to the dark matter mass, with an approximately model-independent spectrum. In addition, significant electromagnetic radiation is produced via energy loss processes of e+e- annihilation products. We discuss dark matter interpretations of the PAMELA anomaly in light of these results.
# Dark Energy Calibrations for JDEM and Neutrino Oscillations with MINOS
6/09/09
Bob Armstrong
Indiana University
Essential to future measurements of dark energy will be an understanding of systematic errors. One important component of the error budget will be photometric calibration. Upcoming surveys will need to reduce the calibration uncertainty to less than 1% to distinguish between different models for dark energy. I will describe ongoing work to achieve this goal for JDEM. In addition, I will discuss neutrino oscillation results from the MINOS experiment. MINOS is a long baseline neutrino oscillation experiment that sends neutrinos from Fermilab to northern Minnesota. By comparing the neutrino energy spectrum at both locations, a precision measurement of the atmospheric mixing parameters can be made. I will report on these results as well as our recent measurement of electron neutrino appearance.
# Inspiralling Supermassive Black Holes as Tracers of Galaxy Mergers
9/08/09
Julie Comerford
UC Berkeley
When two galaxies with central supermassive black holes (SMBHs) merge, the SMBHs inspiral in the resultant merger-remnant galaxy and eventually coalesce. However, very few inspiralling SMBH pairs have been identified observationally. In this talk, I will describe a new technique I use to build a significantly larger sample of inspiralling SMBHs, where I spectroscopically identify inspiralling SMBHs that power AGN. I search the DEEP2 Galaxy Redshift Survey for galaxy spectra that exhibit AGN emission lines that are offset in velocity relative to the mean velocity of the host galaxy's stars, suggesting bulk motion of the AGN within the host galaxies. Within the set of DEEP2 red galaxies at 0.3 < z < 0.8, I find 32 AGN with statistically significant (greater than 3 sigma) velocity offsets, ranging from ~50 km/s to ~300 km/s. After exploring physical effects such as AGN outflows that could cause such velocity offsets, I find that these offsets are most likely the result of SMBHs inspiralling within merger-remnant galaxies. With this new technique of identifying galaxy mergers, I find that roughly half of red galaxies hosting AGN are merger-remnant galaxies. This result implies that galaxy mergers may trigger AGN activity in red galaxies and sets a merger rate of ~3 mergers/Gyr for red galaxies at 0.3 < z < 0.8. Finally, I will discuss the utility of HST imaging and optical slit spectroscopy in increasing the number of known inspiralling SMBHs.
# Cosmological hydrogen recombination: the effect of very high-n states and quadrupole transitions
9/22/09
Dan Grin
Caltech
Thanks to the ongoing Planck mission, a new window will be opened on the properties of the primordial density field, the cosmological parameters, and the physics of reionization. Much of Planck's new leverage on these quantities will come from temperature measurements at small angular scales and from polarization measurements. These both depend on the details of cosmological hydrogen recombination; use of the CMB as a probe of energies greater than 10^16 GeV compels us to get the ~eV scale atomic physics right.
One question that remains is how high in hydrogen principal quantum number we have to go to make sufficiently accurate predictions for Planck. Using sparse matrix methods to beat computational difficulties, I have modeled the influence of very high (up to and including n=200) excitation states of atomic hydrogen on the recombination history of the primordial plasma, resolving all angular momentum sub-states separately and including, for the first time, the effect of hydrogen quadrupole transitions. I will review the basic physics, explain the resulting plasma properties, discuss recombination histories, and close by discussing the effects on CMB observables.
# Pairing of Supermassive Black Holes in galaxy mergers
10/6/09
Simone Callegari
University of Zurich
Theoretical and observational efforts have been devoted in recent years to trace the coevolution of the populations of galaxies and of the Supermassive Black Holes (SMBHs) inhabiting their centers. In particular, the hierarchical assembly of a galaxy through mergers could lead to the formation of SMBH pairs in its nucleus. Such pairs can give rise to many interesting processes, among which the emission of gravitational waves, detectable by forthcoming experiments such as LISA. In this talk I will present results from a campaign of N-body/SPH simulations aimed at studying the conditions that drive or inhibit SMBH pairing in galaxy mergers. Gasdynamics affects the formation and the properties of SMBH pairs in a complex way, especially in the cosmologically relevant unequal-mass regime. Exploring the evolution of SMBHs during mergers is therefore a key ingredient in the study of the cosmic history of the SMBH population.
# Cosmology with the shear-peak statistics
10/19/09
Joerg Dietrich
University of Michigan
Weak-lensing searches for galaxy clusters are plagued by low completeness and purity, severely limiting their usefulness for constraining cosmological parameters with the cluster mass function. A significant fraction of "false positives" are due to projection of large-scale structure and as such carry information about the matter distribution. We demonstrate that by constructing a "peak function", in analogy to the cluster mass function, cosmological parameters can be constrained. To this end we carried out a large number of cosmological N-body simulations in the \Omega_m-\sigma_8 plane to study the variation of this peak function. We demonstrate that the peak statistics is able to provide constraints competitive with those obtained from cosmic-shear tomography from the same data set. By taking the full cross-covariance between the peak statistics and cosmic shear into account, we show that the combination of both methods leads to tighter constraints than either method alone can provide.
# Toward Unveiling the Sources of the Highest Energy Cosmic Rays
10/20/09
Hajime Takami
(IPMU, Univ. of Tokyo)
The origin of the highest energy cosmic rays (HECRs) is one of the biggest mysteries in modern astrophysics. A main reason why we have not been able to identify their sources is magnetic fields in the Universe: the trajectories of HECRs are deflected by them. Recent progress with very large detectors for HECRs, like the Pierre Auger Observatory, has unveiled an anisotropic arrival distribution and a spatial correlation between the arrival directions of HECRs and the matter distribution of the local Universe. These facts indicate that magnetic fields are not so large that the arrival directions of HECRs lose all information on their sources; but the magnetic fields are not negligible either, since we do not find plausible source candidates in the arrival directions of detected HECRs. Thus, in order to explore the origin of HECRs using HECRs themselves, it is essential to take their propagation through the magnetized Universe into account. In this seminar, I will briefly review our current understanding of HECR sources and mainly discuss the possibility of finding the sources through cosmic-ray astronomy, especially focusing on the propagation of the highest energy protons in intergalactic and Galactic magnetic fields.
# Probing extragalactic high-energy cosmic-ray sources with high-energy neutrinos and gamma rays
10/26/09
Kohta Murase
(Kyoto University)
The origin of high-energy cosmic rays is one of the big mysteries in the Universe. Observations of high-energy neutrinos and gamma rays are important for probing the properties of the cosmic-ray sources, especially for transients. Both high-energy gamma-ray and, increasingly, neutrino observations are now providing such information, and identification of the sources may be possible in the near future. In my talk, we will discuss possibilities and consequences of high-energy cosmic-ray acceleration in extragalactic astrophysical objects. We will especially focus on transient sources such as gamma-ray bursts, active galactic nuclei and newly born magnetars. We might also discuss persistent sources such as clusters of galaxies, if time permits.
# Models for X-Ray Binaries: Galactic and extragalactic populations
10/27/09
Tassos Fragos
Northwestern University
X-ray binaries are unique astrophysical laboratories as they carry information about many complex physical processes such as star formation, compact object formation, and evolution of interacting binary systems. I will initially present an analysis that allows us to reconstruct the full evolutionary history of known Galactic X-ray binaries back to the time of compact-object formation; the results provide us with the most robust constraints on black-hole kicks due to asymmetries in the collapse. Motivated by deep Chandra observations of extra-galactic populations of X-ray binaries, I will also present population studies of low-mass X-ray binaries in elliptical galaxies. These simulations are targeted at understanding the origin of the shape and normalization of the observed X-ray luminosity functions as well as the transient behavior of X-ray binaries. Finally, I will briefly talk about an ongoing project towards developing a new advanced computational tool for the study of X-ray binary populations formed in both galactic fields and dense stellar clusters.
# Getting the most out of dark matter observations and experiments
11/3/09
Annika Peter
Caltech
Dark matter, constituting a fifth of the mass-energy in the Universe today, is one of the major "known unknowns" in physics. There are currently four approaches to determining the nature of dark matter, assuming it is composed of at least one new species of particle: 1) creation in collider experiments; 2) indirect detection via its annihilation products; 3) direct detection; and 4) observations sensitive to the gravity of dark matter. For the latter three approaches, event rates are not only sensitive to the "physics" of dark matter (mass, cross sections, and the theory in which the dark matter particles live) but to the "astrophysics" of dark matter as well, namely the phase space density of dark matter throughout the Milky Way and other galaxies and its evolution through cosmic time. There is much theoretical uncertainty in the phase space density, a fact which tends either to be ignored or simply acknowledged as a problem. I will highlight a few recent developments in understanding the local dark matter phase space density. Then, I will propose a shift in the way we think about uncertainties in the dark matter phase space density. Namely, we should be treating the astrophysics and physics properties of dark matter on equal footing, as things we want to derive from myriad data sets. I will show how this shift in thinking may yield more robust determinations of both the physics and astrophysics of dark matter, and show how this works in practice.
# The Quest For the Nature of Dark Matter
11/10/09
Mike Kuhlen
(IAS/UC Berkeley)
Despite having observational evidence of its existence for more than seventy years now, we still don't know the nature of Dark Matter. Recent progress in astronomical observations, laboratory experiments, theory, and numerical simulations has led to an explosion in research on this topic. Here I review some of the recent developments, with an emphasis on how ultra-high resolution cosmological numerical simulations can contribute to the quest to unravel the mystery of Dark Matter.
# CDM Accelerating Cosmology as an Alternative to ΛCDM
11/17/09
J.A.S. Lima
(IAG, University of São Paulo, Brazil)
A new accelerating cosmology driven only by baryons plus cold dark matter (CDM) is proposed in the framework of general relativity.
In this scenario, the present accelerating stage of the Universe is powered by the negative pressure describing the gravitationally-induced particle production of cold dark matter particles. The new cosmology is presented in 3 steps: 1) The Mechanism, 2) The Classical Formalism (Correction to the Energy Momentum Tensor), and, 3) The Cosmological Scenario.
The resulting cosmology has only one free parameter, and the differential equation governing the evolution of the scale factor is exactly the same as in the ΛCDM model. For a spatially flat Universe, as predicted by inflation (Ω_dm + Ω_bar = 1), it is found that the effectively observed matter density parameter is Ω_eff = 1 − α, where α is the constant parameter specifying the CDM particle creation rate. The supernovae test based on the Union data (2008) requires α ≅ 0.71, so that Ω_eff ≅ 0.29, as independently derived from weak gravitational lensing, large-scale structure and other complementary observations. Some caveats of the model (mainly the ones related to the quantum formulation) are discussed in the conclusion.
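Restating the flatness condition and creation-rate parameter quoted in the abstract, the quoted numbers follow in one line:

```latex
\Omega_{\mathrm{dm}} + \Omega_{\mathrm{bar}} = 1, \qquad
\Omega_{\mathrm{eff}} = 1 - \alpha
\quad\Longrightarrow\quad
\alpha \simeq 0.71 \;\Rightarrow\; \Omega_{\mathrm{eff}} \simeq 1 - 0.71 = 0.29 .
```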
# What the most metal-poor stars tell us about the early Universe
11/20/09
Anna Frebel
(CfA)
The chemical evolution of the Galaxy and the early Universe is a key topic in modern astrophysics. Since the most metal-poor Galactic stars are the local equivalent of the high-redshift Universe, they can be employed to reconstruct the onset of the chemical and dynamical formation processes of the Galaxy, the origin and evolution of the elements, and associated nucleosynthesis processes. They also provide constraints on the nature of the first stars and SNe, the initial mass function, and early star formation processes. The discovery of two astrophysically very important metal-poor objects recently led to a significant advance regarding these topics. One object is the most iron-poor star yet found (with [Fe/H]=-5.4). The other star displays the strongest known overabundances of heavy neutron-capture elements, such as uranium, and nucleo-chronometry yields a stellar age of ~13 Gyr. Metal-poor stars, once also identified in dwarf galaxies, are also vital probes for near-field cosmology. Their chemical signatures now suggest that systems like these were building blocks of the Milky Way's low-metallicity halo. This opens a new window to study galaxy formation through stellar chemistry.
# What *don't* we know about galaxy formation?
11/24/09
Darren Croton
(Swinburne)
Much progress has been made in recent years in our understanding of the co-evolution of galaxies and AGN, and their connection to the underlying large-scale structure. In this talk I will discuss simulation and modeling techniques that bridge theories of galaxy and quasar formation with the properties of observed galaxy populations. In addition, I will discuss a number of open questions important for extra-galactic astronomy and cosmology, and explain how future large-scale surveys and galaxy formation models may jointly address them.
# Magnetar Observations in the Fermi Era
12/1/09
Chryssa Kouveliotou
(NASA's MSFC)
Magnetars are magnetically powered rotating neutron stars with extreme magnetic fields (over 10^14 Gauss). They are discovered in, and emit predominantly at, X- and gamma-ray energies. Very few sources (roughly 15) have been found since their discovery in 1987. NASA's Fermi Observatory was launched June 11, 2008; the Fermi Gamma-ray Burst Monitor (GBM) began normal operations on July 14, about a month after launch, when the trigger algorithms were enabled. In the first year of operations we recorded emission from four magnetar sources; of these, only one was an 'old' magnetar: SGR 1806-20. The other three detections were two brand new sources, SGR J0501+4516, discovered with Swift and extensively monitored with both Swift and GBM, and SGR J0418+5729, discovered with GBM and the Interplanetary Network (IPN), plus SGR J1550-5418, a source originally classified as an Anomalous X-ray Pulsar (AXP 1E1547.0-5408). In my talk I will give a short review of magnetars and describe the current status of the analysis efforts on the GBM data with our magnetar team.
# Naturally hidden dark matter
12/8/09
Francesc Ferrer
(Washington Univ, St. Louis)
Models addressing the naturalness problems of the Standard Model of Particle Physics often contain particles that could constitute the dark matter (DM) in the universe. Such DM particles could give rise to detectable fluxes of cosmic and gamma-rays. The simplest scenarios, like the neutralino in the Minimal Supersymmetric extension of the Standard Model (MSSM), cannot account for the anomalous cosmic ray fluxes observed by the PAMELA experiment. We study extensions of the MSSM that alleviate some of its remnant fine-tuning problems, and whose dark sector can fit the reported positron fraction excess.
# Pop goes the neutrino: acoustic detection of astrophysical neutrinos
12/14/09
Justin Vandenbroucke
(Stanford/KIPAC)
High energy particle showers developing in a dense medium heat the medium locally, causing it to expand and emit a shock wave detectable as an acoustic pulse. The idea of detecting particle tracks and showers with this method was proposed in the 1950's, and was confirmed in the laboratory in the 1970's. Interest has grown recently in discovering and then characterizing extremely high energy neutrinos, particularly cosmogenic ("GZK") neutrinos of energy ~10^18 eV. The acoustic technique has been proposed as a possible method to detect GZK neutrinos. Large volumes (10-100 km^3) of naturally occurring target media such as water, ice, and salt could be instrumented relatively inexpensively with this technique. I will describe the status of several acoustic projects, focusing in particular on the South Pole Acoustic Test Setup (SPATS), a small array deployed by the IceCube collaboration to determine the acoustic properties of South Pole ice. SPATS has completed many of its goals, including measuring the sound speed, noise level, transient background, and attenuation length in South Pole ice. Although these measurements were originally made for neutrino astronomy R&D, they can also help address open questions in glaciology.
# Fermi and Ultra-High Energy Cosmic Rays
12/15/09
Charles Dermer
(Naval Research Laboratory)
The Fermi Gamma-ray Space Telescope, now midway through the second year of its mission, has given us a temporally evolving panorama of the GeV sky. Fermi's legacy includes a catalog of extragalactic sources containing star-forming galaxies, active galaxies, and gamma-ray bursts which display unexpected high-energy behavior. We consider Fermi and correlated data as potential evidence for hadronic acceleration leading to ultra-high energy cosmic ray (UHECR) production in blazars and GRBs. Centaurus A is considered as a nearby UHECR source. The features considered in theoretical analyses of relativistic jets are the gamma-gamma opacity and power constraints. Minimum outflow Lorentz factors have important implications for UHECR and UHE neutrino production, e.g., with IceCube, which are described in this talk.
# Analysis of Galaxy clusters in the SDSS Coadd data
2/8/10
Marcelle Soares-Santos
(U. of São Paulo)
Galaxy cluster counts in spatial pixels and mass bins constitute a sensitive probe for cosmology. Analyses based on this fact are part of the scientific program of experiments such as the upcoming Dark Energy Survey and have been pursued using state-of-the-art data. We perform a measurement of cosmological parameters using cluster counts in the SDSS Coadd. A measurement using clusters requires galaxy photometric redshifts, cluster-finding algorithms, cluster mass calibration, cosmological parameter estimation, and a data set of sufficient scope. For the SDSS Coadd, photometric redshifts are obtained with a neural network algorithm. A cluster catalog from this sample of 13M galaxies covering 250 sq-degrees up to redshift ~1 is constructed using a Voronoi Tessellation cluster finder. The selection function is computed using DES mock galaxy catalogs. A weak lensing analysis provides the mass calibration of the cluster sample binned into observables. A joint likelihood method using the mean abundance and spatial distribution is used to obtain cosmological constraints.
# Cosmic ray anisotropy measurement with IceCube
2/9/10
Rasha Abbasi
IceCube is a cubic-kilometer-scale neutrino observatory located at the geographic South Pole. Construction of the detector is on schedule to be completed in 2011. At the moment it is taking data with 59 deployed strings; when completed it will comprise 80 strings plus 6 additional strings for the low-energy array Deep Core. The strings are deployed in the deep ice between 1,450 and 2,450 meters depth, each string containing 60 optical sensors. In this talk I will present selected results of ongoing analyses of IceCube data, including the measurement of a large-scale anisotropy at the 0.06% level. The data used in the large-scale anisotropy analysis contain billions of downward-going muon events with a median energy per nucleon of ~14 TeV and a median angular resolution of 3 degrees. The energy dependence of this anisotropy is also presented. The observed anisotropy has an unknown origin, and we will discuss various possible explanations. Studies of the anisotropy could further enhance our understanding of the structure of the galactic magnetic field and possible cosmic-ray sources.
# Unraveling the Formation History of Elliptical Galaxies
3/2/10
TJ Cox
(Carnegie Observatories)
The idea that galaxies in general, and elliptical galaxies in particular, are shaped by their merger history has gained widespread acceptance. However, a detailed mapping between specific merger histories and the wide variety of galaxies observed is still uncertain. By using a comprehensive set of state-of-the-art numerical simulations, we show that a single disk-disk merger, as originally proposed by the "merger hypothesis," is a plausible mechanism to form many elliptical galaxies provided that dissipation is involved. We also show that additional (merger?) processes are likely needed to form the largest ellipticals, and we outline several properties commonly observed in elliptical galaxies that may provide insight into their formation history.
# Understanding Core-Collapse Supernovae in the Transient Era
3/16/10
Chris Fryer
(Los Alamos National Laboratory)
Supernova surveys have taught us much about supernovae, but the surveys of the past focused on "normal" supernovae. Today's transient surveys are discovering a wide variety of stellar explosions. These new explosions can potentially teach us as much about supernovae as dedicated supernova surveys have. I will discuss a variety of specific examples where we can use the "new" explosions discovered in transient surveys to help us understand supernovae.
# Observations of Prompt Gamma-ray Burst Emission
3/30/10
Takanori Sakamoto
(NASA-GSFC)
I will review prompt emission observations from HETE-2 and Swift, which are both satellite missions dedicated to the detection of Gamma-ray Bursts (GRBs). HETE-2 and Swift have on-board computers to process the data and localize GRBs in real-time without a "human-in-the-loop" delay. Thanks to the fast and accurate position localization of GRBs, our understanding of their afterglow emission and host galaxies (birthplace of GRBs) has been dramatically improved. However, the origin of GRB prompt emission is still far from being resolved. I will talk about the observational properties of the prompt GRB emission phase in the context of HETE-2 and the Swift data. I will also discuss the nature of future observations needed to understand GRB prompt emission.
# Unraveling gamma-ray Blazars in the Era of Fermi and VERITAS
4/6/10
Luis Reyes
(U of Chicago)
The field of high-energy astrophysics is experiencing a revolution due to recent observations that have revealed a universe that is surprisingly rich, variable and complex at gamma-ray energies. This revolution has now switched into high gear with the launch of the Fermi Gamma-ray Space Telescope and the full-fledged operation of a new generation of ground-based instruments such as VERITAS, H.E.S.S. and MAGIC. Among the different classes of gamma-ray sources observed by these instruments, a particular subset of active galactic nuclei (AGN) known as blazars stand out as some of the most energetic and variable objects observed at any wavelength. In my talk I will describe how the complementary capabilities of space and ground-based instruments are leading us to a better understanding of gamma-ray blazars as high-energy sources, as a population, and as a cosmological tool to probe the background radiation known as extragalactic background light (EBL). Finally, I will discuss the important scientific return that a next-generation instrument such as AGIS would bring to the field of AGN astrophysics.
# Clues about Dark Matter: Studying the Milky Way in 6-D
4/27/10
Nitya Kallivayalil
(MIT)
Tidal Streams provide a powerful probe of the potential of the Milky Way halo over large Galactocentric distances and their detailed phase-space structure gives us clues as to the nature of dark matter. Powerful theoretical techniques are now available to re-construct the underlying potential from the six-dimensional phase-space parameters that describe stellar tracers. Notably absent from the presently available data-sets are full 3-D velocities. I will describe ongoing efforts to remedy this aimed at tracers that sample the Milky Way halo at a large range of distances: the inner stellar halo, the Sagittarius Stream and Globular Clusters, and the Magellanic Clouds. I will also describe efforts to expand the number of reference QSOs suitable for space-based astrometry, and what we ultimately hope to learn about halo shape and distribution.
# Measuring gravitational lenses
5/18/10
Peter Melchior
(Heidelberg University)
With current and upcoming lensing surveys, massive datasets are or will become available, which enable us to constrain the cosmological parameters governing the formation of gravitationally bound structures in the universe. I will discuss the principles employed for inferring the mass distribution of individual galaxy clusters and of the large-scale structure as a whole. I will also go through the problems we encounter, especially in estimating the lensing-induced distortions from background galaxies, and how we seek to overcome them with novel methods and dedicated simulations.
# Cosmological Constraints from the Growth of X-ray Luminous Galaxy Clusters
5/25/10
(NASA/GSFC)
Over the past few years, constraints on the growth of cosmic structure have become available from observations of the galaxy cluster population and its evolution. This advance is largely due to the painstaking identification of clusters at redshifts z>0.3 in the X-ray flux-limited ROSAT All-Sky Survey (with ongoing Sunyaev-Zel'dovich and optical surveys not far behind). I will present cosmological constraints obtained from a sample of 238 X-ray flux-selected clusters, which, including the recently released MACS sample, extend to redshift 0.5. The cluster data provide robust constraints on the amplitude of the matter power spectrum as well as the dark energy equation of state (±0.2 for a constant w model). The ability to trace the growth of structure as a function of time also allows us to test the observed growth rate against that predicted by General Relativity, independent of the background expansion history. Ultimately, this provides a tool for testing alternative theories of gravity and potentially distinguishing them from dark energy models. Finally, I will present constraints on cluster mass-observable scaling relations, a necessary and parallel aspect of the cosmological tests, which has some interesting implications for future work.
# Exploring the Ends of the Rainbow: Cosmic Rays in Star-Forming Galaxies
9/21/10
Brian Lacki
OSU/CCAPP
The cosmic rays (CRs) in star-forming galaxies dominate their emission at gamma-ray and radio wavelengths. The observed linear correlation between the nonthermal radio emission and the thermal infrared emission of galaxies, the far infrared (FIR)-radio correlation (FRC), links together the CR electron population, star-formation rate, and magnetic field strength of galaxies. Furthermore, gamma-ray data links the CR proton population, the star-formation rate, and gas density. We construct one-zone steady-state models of cosmic ray (CR) spectra in star-forming galaxies ranging from normal galaxies to the densest starbursts, calculating both the radio and gamma-ray emission. We then calculate the broadband emission of primary and secondary CR protons, electrons, and positrons. We find the FRC is caused by conspiracies of several factors for galaxies across the range of the correlation, including CR escape from galaxies, UV opacity, non-synchrotron cooling, and secondary electrons and positrons generated by CR protons. The conspiracies have great implications for the evolution of the FRC at high z, actually preserving it to higher redshift than previously thought but allowing variations in the FIR-radio ratio with different galaxy properties. I describe how the recent gamma-ray observations of M82 and NGC 253 compare with our models. These starbursts are somewhat less gamma-ray bright than we expect, but still indicate substantial pionic losses for CR protons and non-synchrotron cooling for CR electrons and positrons, supporting the conspiracy. Finally, I will describe our more recent work on the highest energy CR electrons in starbursts and the gamma-rays they produce. Starburst galaxies ought to be opaque to 30 TeV gamma-rays through pair production; in the strong magnetic fields of starbursts, these created electrons and positrons radiate synchrotron X-rays. 
We find that these synchrotron X-rays could make up ~10% of the diffuse hard X-ray emission from M82-like starbursts and even more in the brightest starbursts like Arp 220.
# Electromagnetic Flares from the Tidal Disruption of Stars by Massive Black Holes
9/28/10
Linda Strubbe
(Berkeley)
A star that wanders too close to a massive black hole (BH) gets shredded by the BH's tidal gravity. Stellar gas soon falls back to the BH at a rate initially exceeding the Eddington rate, releasing a flare of energy as gas accretes. How often this process occurs is uncertain at present, as is the physics of super-Eddington accretion (which is relevant for BH growth and feedback at high redshift as well). Excitingly, transient surveys like the Palomar Transient Factory (PTF), Pan-STARRS and LSST should shed light on these questions soon -- in anticipation, we predict observational properties of tidal flares. Early on, much of the falling-back gas should blow away in a wind, producing luminous optical emission imprinted with blueshifted UV absorption lines, and the observational signatures can be qualitatively different for M_BH ~ 10^5 - 10^6 Msun relative to more massive BHs. Possible X-ray emission can complicate the spectroscopic predictions. I will describe predicted detection rates for PTF, Pan-STARRS and LSST, and discuss the substantial challenge of disentangling these events from supernovae. These surveys should significantly improve our knowledge of stellar dynamics in galactic nuclei, the physics of super-Eddington accretion, the demography of IMBHs, and the role of tidal disruption in the growth of massive BHs.
# Multi-wavelength studies of Galactic satellites and implications for dark matter detection
10/5/10
Louie Strigari
(Stanford/KIPAC)
The census of local group dwarf galaxies has changed dramatically in recent years. By studying both their number counts and internal kinematics, faint Galactic satellites uniquely test the standard cosmological model and the properties of dark matter in a regime that is not probed by large scale observations such as the distribution of galaxy clusters and the cosmic microwave background. In this talk, I will discuss the confrontation of new data with theoretical predictions, highlighting a developing new twist on the lingering issue of the overproduction of Galactic satellites in the theory of cold dark matter. I will further discuss the importance of multi-wavelength probes of satellites, following a path of discovery in optical surveys, to targeted follow-up spectroscopy of individual objects, and then to searches for particle dark matter annihilation using high energy gamma-rays and neutrinos. Following this trail, I argue that Galactic satellites present the most robust constraints on the dark matter annihilation cross section. Given the current constraints, I will review the status of a search for optically dark satellites with the Fermi gamma-ray telescope.
# The Coyote Universe and Beyond
10/12/10
Katrin Heitmann
(Los Alamos National Lab)
Cosmological evidence for dark energy and dark matter poses an exciting challenge to fundamental physics. Next-generation surveys will investigate new physics beyond the Standard Model by targeting the nonlinear regime of structure formation, observed using powerful probes such as weak gravitational lensing. In order to fully exploit the information available from these probes, accurate theoretical predictions are required. Currently such predictions can only be obtained from costly, precision numerical simulations. In this talk, I will introduce the "Coyote Universe" project, a combined computational and statistical program to obtain precision predictions for the nonlinear power spectrum of density fluctuations. Such a program is essential for the interpretation of ongoing and future weak-lensing measurements to investigate and understand the nature of dark energy. I will discuss planned extensions of the Coyote Universe to include more cosmological parameters and physics. This work will be carried out with a new simulation capability recently developed at Los Alamos and targeted at future hybrid computing architectures. I will give a brief overview of these new developments.
# Neutrino Oscillations and (dis)appearance prospects for IceCube-DeepCore
10/19/10
Jason Koskinen
(Penn State)
The recent commissioning of the full DeepCore sub-array, a low-energy extension of the IceCube neutrino observatory, offers exciting opportunities for neutrino oscillation physics in the multi-GeV energy region. The improved energy reach, use of the surrounding IceCube detector as an active veto and immense size of DeepCore will produce one of the largest neutrino datasets ever acquired, annually containing tens of thousands of atmospheric neutrinos after oscillating over a baseline of up to one earth diameter. I will cover some current non-DeepCore oscillation results as well as the prospects for a DeepCore muon neutrino disappearance and possibly a tau neutrino appearance measurement. Proposed future extensions to DeepCore designed to drive the energy reach down to ~1 GeV will conclude the talk.
# Observational Signatures of Neutron Star Mergers
10/26/10
Brian Metzger
(Princeton U.)
A fraction of neutron star (NS) and black hole binaries are formed sufficiently compact that they in-spiral and merge due to the emission of gravitational waves within the lifetime of the Universe. Such compact object mergers are among the most promising sources for the direct detection of gravitational waves with ground-based interferometers such as LIGO and Virgo. Maximizing the science of such a detection will, however, require identifying a coincident electromagnetic (EM) counterpart. One possible source of EM emission is a gamma-ray burst (GRB), powered by the accretion of material that remains in a rotationally-supported torus around the central black hole. I will overview the observational and theoretical status of the connection between NS mergers and the "short duration" subclass of GRBs. Although new observations from NASA's Swift observatory have provided some evidence in favor of the merger model, the puzzling discovery has also been made that many short GRBs are followed by late-time X-ray flaring activity, which does not fit current theory and may require modifying or considering alternative progenitor models. Another source of EM emission from NS mergers is a supernova-like optical transient, powered by the radioactive decay of heavy elements synthesized in neutron-rich ejecta from the merger. I will present the first calculations of the radioactively-powered transients from mergers that include both realistic nuclear physics and radiative transport, and I will discuss the prospects for detecting and identifying such events with present and future telescopes.
11/2/10
Jack Singal
(SLAC/Stanford)
# QUIET experiment for CMB polarization measurement
11/8/10
Akito Kusaka
(U. of Chicago)
Cosmic microwave background (CMB) polarization is the ultimate probe of primordial gravity waves in the early universe, via the B-mode (or parity-odd) signal on degree angular scales. A detection of such a signal would rule out most non-inflationary models and represent indirect observation of a fundamentally new phenomenon near the grand unification energy scale. With its unique radiometer technology, QUIET is among the most competitive experiments aiming to detect such a signature in the CMB. QUIET began observing with its 44 GHz receiver in October 2008. After nine months of successful observation, we replaced the 44 GHz receiver with the 95 GHz one, and observations resumed in August 2009. In this talk, I will review the instrumentation, site, and observation strategy, as well as the current status of the analysis.
# The Cosmic Diffuse Gamma-ray Background: a puzzle to unveil
11/16/10
Marco Ajello
(SLAC/Stanford)
The extragalactic gamma-ray background may encode the signature of some of the most powerful and exotic phenomena in the Universe. Recently, Fermi-LAT measured its intensity with unprecedented accuracy. At the same time Fermi, with its unprecedented sensitivity, detected over a thousand point-like sources. Most of the extragalactic sources are blazars, but a growing fraction of the detected sources also comprises starburst/star-forming galaxies as well as radio galaxies. In this talk I will review and address the current efforts to sort out the different components of the extragalactic gamma-ray background, focusing in particular on the blazar class and the star-forming galaxies. I will also discuss future developments and the possibility of studying the fluctuations of the gamma-ray sky to gain knowledge about the 'truly' diffuse component of the gamma-ray background. Finally, I will also address the variability of the gamma-ray sky and what can be learned from its systematic study.
# Resonant Stripping as the origin of dwarf spheroidal galaxies
11/23/10
Elena D'Onghia
(Harvard-Smithsonian CfA)
The most dark matter dominated galaxies known are the dwarf spheroidals, but their origin is still uncertain. The recent discovery of ultra-faint dwarf spheroidals around the Milky Way further challenges our understanding of how low-luminosity galaxies originate and evolve, because of their even more extreme paucity of gas and stars relative to their dark matter content. By employing numerical simulations I will show that interactions between dwarf disc galaxies can excite a gravitational resonance that immediately drives their evolution into spheroidals. This effect, which is purely gravitational in nature, applies to gas and stars and is distinct from other mechanisms which have been proposed up to now to explain the origin of dwarf spheroidals, such as merging, galaxy-galaxy harassment and more general heating processes, or tidal and ram pressure stripping. Using a new analytic formalism we developed based on linear perturbation theory, I will show the nature and efficiency of the resonant process and its applicability to the formation of tails of stars and streams of gas.
# The Fermi LAT as a cosmic-ray electron observer
Even though it was designed as a high-sensitivity gamma-ray observatory, the Large Area Telescope (LAT) onboard the Fermi satellite has also proved to be an excellent electron/positron detector. The data collected by the LAT during its first year of operation have been used to measure the cosmic-ray electron and positron (CRE) energy spectrum in the energy range from 7 GeV to 1 TeV and to search for possible anisotropies in their arrival directions. An overview of the data analysis will be given and the main results will be illustrated.
# Ultra High Energy Cosmic Rays from Mildly Relativistic Supernovae
12/7/10
Sayan Chakraborti
(Tata Institute)
Understanding the origin of the highest energy cosmic rays is a crucial step in using them as probes of new physics at energies unattainable by terrestrial accelerators. However, their sources remain an enigma nearly half a century after their discovery. They must be accelerated in the local universe, as otherwise background radiations would severely suppress the flux of protons and nuclei at energies above the Greisen-Zatsepin-Kuzmin (GZK) limit. Nearby GRBs, hypernovae, AGNs and their flares have all been suggested and debated in the literature as possible sources. A local sub-population of type Ibc supernovae with mildly relativistic ejecta has been detected for some time as sub-energetic GRBs or X-ray flashes, and more recently as radio afterglows without detected GRB counterparts, such as SN 2009bb. In this talk we shall discuss the measurement of the size-magnetic field evolution, baryon loading and energetics of SN 2009bb using its radio spectra obtained with the VLA and GMRT. This will allow us to see where the engine-driven SNe lie in the Hillas diagram and whether they can explain the post-GZK UHECRs.
# A Bayesian Analysis of a Milky Way Ultra-Faint Satellite
1/18/11
Greg Martinez
(UC Irvine)
With the advent of SDSS, the number of known Milky Way satellites has more than doubled. These new members, such as Segue 1, are extremely optically faint, and accurate mass measurements require careful analysis of velocity data. Here I describe the analysis of the multi-epoch velocity measurements of Segue 1 to determine its intrinsic velocity dispersion. Our method includes a simultaneous Bayesian analysis of both membership probabilities and the contribution of binary orbital motion to the observed velocity dispersion. Our analysis strongly disfavors the possibility that Segue 1 is a bound star cluster. The inferred dark matter density is one of the highest measured, making Segue 1 a prime source for indirect dark matter detection. I will discuss the possibility of indirect detection in the context of SUSY models.
# A Search for Point Sources with the IceCube Neutrino Observatory
2/1/11
Jon Dumm
Construction of the IceCube Neutrino Observatory was only recently completed, on Dec 18, 2010. IceCube is the first 1 km^3 detector of its kind, monitoring 1 billion tons of ice. Deep under the South Pole, IceCube looks for rare high energy neutrino interactions (> ~100 GeV). While the observatory was under construction for 5 years, data was being collected and analyzed continuously. Some of the science highlights so far include searches for astrophysical neutrinos, a measurement of the atmospheric neutrino spectrum above 1 TeV, observation of a cosmic ray anisotropy in the southern hemisphere, and indirect searches for dark matter. This talk will describe IceCube, the motivations for building such a detector, and highlight the effort to find point-like sources of astrophysical neutrinos.
# Channeling and daily modulation in direct dark matter detectors
2/8/11
Nassim Bozorgnia
UCLA
The channeling of the ion recoiling after a collision with a WIMP in direct dark matter detectors produces a larger signal than otherwise expected. Channeling is a directional effect which depends on the velocity distribution of WIMPs in the dark halo of our galaxy, and could lead to a daily modulation of the signal. I will discuss channeling and blocking effects using analytic models produced in the 1960's and 70's, and present estimates of the expected amplitude of daily modulation in the data already collected by the DAMA experiment.
# Dark matter annihilation and spherical harmonics of Fermi gamma-rays
2/15/11
Dmitry Malyshev
(NYU)
Gamma-ray production by dark matter annihilation is one of the most universal indirect dark matter signals. To avoid the intense astrophysical background, one can study the gamma-rays away from the Galactic plane. The problem is that the dark matter annihilation signal at high latitudes is smooth and most probably subdominant to Galactic and extragalactic fluxes. I will discuss the use of spherical harmonics decomposition as a tool to distinguish a large-scale, small-amplitude dark matter signal from astrophysical fluxes. The sensitivity of this method for currently available Fermi data is similar to the signal from thermal WIMP dark matter annihilation into, e.g., W+W-.
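The core of a spherical harmonics decomposition is projecting a sky map onto the Y_lm basis. A toy numpy sketch (this is not the speaker's Fermi analysis; the map, its 1% dipole, and the grid resolution are all made up for illustration) computes a single coefficient by direct quadrature, a_lm = ∫ m(θ,φ) Y_lm* dΩ:

```python
import numpy as np

# Midpoint grid over the sphere: colatitude theta, azimuth phi
nt, nphi = 400, 800
theta = (np.arange(nt) + 0.5) * np.pi / nt
phi = (np.arange(nphi) + 0.5) * 2 * np.pi / nphi
T, P = np.meshgrid(theta, phi, indexing="ij")

sky = 1.0 + 0.01 * np.cos(T)                 # isotropic mock sky plus a 1% dipole
y10 = np.sqrt(3 / (4 * np.pi)) * np.cos(T)   # real spherical harmonic Y_10

# Quadrature weight dOmega = sin(theta) dtheta dphi
dOmega = np.sin(T) * (np.pi / nt) * (2 * np.pi / nphi)
a10 = np.sum(sky * y10 * dOmega)             # the monopole integrates to zero
# against Y_10, so a10 ~ 0.01 * sqrt(4*pi/3), i.e. the dipole is isolated
```

Repeating this over l and m and summing |a_lm|^2 per l gives the angular power spectrum whose large-scale excess such an analysis would search for.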
# GADZOOKS! How to See Extragalactic Neutrinos By 2016
3/8/11
Mark Vagins
(IPMU/UCI)
Water Cherenkov detectors have been used for many years to study neutrino interactions and search for nucleon decays. Super-Kamiokande, at 50 kilotons the largest such underground detector in the world, has enjoyed over ten years of interesting and important physics results. Looking to the future, for the last eight years R&D on a potential upgrade to the detector has been underway. Enriching Super-K with 100,000 kilograms of a water-soluble gadolinium compound - thereby enabling it to detect thermal neutrons and dramatically improving its performance as a detector for supernova neutrinos, reactor neutrinos, atmospheric neutrinos, and also as a target for the new T2K long-baseline neutrino experiment - will be discussed.
# Generative modeling for the Milky Way and the Universe
3/14/11
Jo Bovy
(New York University)
At the interface between observational and theoretical astrophysics lies data analysis and inference. The most accurate and precise inferences require using a model that generates the data and that takes the noise into account. I give two examples where generative modeling performs better than other methods for parameter inference and classification. To put the Milky Way in a cosmological context we want to know its mass and dark matter distribution in detail. I will discuss in general how we can infer the gravitational potential—dynamics—from kinematics alone. As an application of this, I show how we can determine the Milky Way's circular velocity at the Sun from maser kinematics. As a second example, I discuss density-estimation-based classification for target selection. SDSS-III's BOSS aims to observe 150,000 quasars down to the faint limit of the SDSS in a redshift range (2.2 <= z <= 3.5) where the quasar and stellar color loci overlap significantly. I will show how we can determine models of the underlying distribution of quasars and stars in flux space. We can use these models to evaluate quasar probabilities for all potential targets and build an efficient survey.
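The density-estimation-based classification described above can be illustrated with a deliberately simplified sketch (this is not the actual BOSS target-selection code; the single-Gaussian class models, their means and covariances, and the prior are all invented for illustration): model each class's distribution in color space as a density and turn the two densities into a quasar probability with Bayes' rule.

```python
import numpy as np

def gauss_pdf(x, mean, cov):
    """Density of a 2-D Gaussian at point(s) x."""
    d = x - mean
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * np.einsum("...i,ij,...j->...", d, inv, d))

# Hypothetical class models, as if fitted to training data elsewhere
qso_mean, qso_cov = np.array([0.2, 0.3]), np.array([[0.04, 0.0], [0.0, 0.04]])
star_mean, star_cov = np.array([0.8, 0.9]), np.array([[0.02, 0.0], [0.0, 0.02]])
prior_qso = 0.05  # quasars are rare among candidate targets

def p_quasar(colors):
    """Posterior probability that an object with these colors is a quasar."""
    pq = prior_qso * gauss_pdf(colors, qso_mean, qso_cov)
    ps = (1 - prior_qso) * gauss_pdf(colors, star_mean, star_cov)
    return pq / (pq + ps)
```

Ranking all candidates by `p_quasar` and targeting the top of the list is the essence of the approach; the real problem replaces the single Gaussians with flexible density estimates fitted in flux space.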
# Energy-Dependent Composition of UHECRs and the Future of Charged Particle Astronomy
3/29/11
Antoine Calvez
(UCLA)
Recent results from the Pierre Auger Observatory show an energy-dependent chemical composition of ultrahigh-energy cosmic rays (UHECRs), with a growing fraction of heavy elements at high energies. These results suggest a possible non-negligible contribution from galactic sources. We show that in the case of UHECRs produced by gamma-ray bursts (GRBs), or by rare types of supernova explosions that took place in the Milky Way in the past, the change in the composition of the UHECRs can be the result of the difference in diffusion times between different species. The anisotropy in the direction of the Galactic Center is expected to be a few per cent on average, and the locations of the most recent/closest bursts can be associated with observed clustering of UHECRs.
# Light WIMPs!
4/12/11
Dan Hooper
(U. of Chicago)
Observations from the direct detection experiments DAMA/LIBRA and CoGeNT, along with those from the Gamma Ray Space Telescope, have been interpreted as possible evidence of dark matter in the form of relatively light (5-10 GeV) WIMPs. I will discuss the implications of these observations for dark matter phenomenology and discuss how it will be possible with future measurements to either confirm or refute this interpretation. I will also discuss how recent results from the Tevatron could impact efforts to build models including a light WIMP.
# Optimal Linear Image Combination
4/19/11
Barnaby Rowe
(Jet Propulsion Laboratory/Caltech)
I will describe a simple, yet general, formalism for the optimized linear combination of astrophysical images, developed here at JPL/Caltech with Christopher Hirata and Jason Rhodes. The formalism allows the user to combine multiple undersampled images to provide oversampled output at high precision. The proposed method is general and may be used for any configuration of input pixels and point spread function; it also provides the noise covariance in the output image along with a powerful metric for describing undesired distortion to the image convolution kernel. The method explicitly provides knowledge and control of the inevitable compromise between noise and fidelity in the output image.
We also present a first prototype implementation of the method, then put it to practical use in reconstructing fully-sampled output images using simulated, undersampled input exposures that are designed to mimic the proposed dark energy mission WFIRST. Comparing results for different dither strategies, we illustrate the use of the method as a survey design tool. Finally, we use the method to test the robustness of linear image combination when subject to practical realities such as bad pixels and focal plane plate scale variations, an important consideration for a mission such as WFIRST.
# A Quest for Sources of Ultrahigh Energy Cosmic Rays
4/26/11
Kumiko Kotera
(U. of Chicago)
The origin of ultrahigh energy cosmic rays (UHECRs) has not been unveiled in spite of decades of experimental and theoretical research. In this talk, I discuss the observable signatures that would constrain the possible sources to one single suspect.
In particular, I will present the anisotropy signatures expected for various types of sources, and describe how the intergalactic magnetic field plays a prominent role in this picture. For this purpose, I will introduce an analytical formalism to study the propagation of UHECRs in the magnetized Universe.
Another constraint on the sources might come from multi-messenger signatures (in gamma-rays, neutrinos and gravitational waves) that can be produced together with UHECRs. I will present the expected fluxes for various astrophysical scenarios and discuss to what extent these signals could pinpoint the actual sources of UHECRs.
In light of this discussion, I will briefly present the latest results of the Pierre Auger Observatory and give requirements for future detectors in UHECRs, neutrinos, gamma rays and gravitational waves, to solve this long-standing enigma.
# Constraining the Dawn of Cosmic Structure and the Epoch of Reionization with the 21cm Line
5/3/11
Jonathan Pritchard
Harvard/CfA
The first billion years of the Universe contains the formation of the first galaxies and reionization. This period lies beyond the current observational frontier presenting challenges to theory and observation. Low frequency observations of the redshifted 21 cm line of neutral hydrogen will be key in developing our understanding of this period. In this talk, I will describe two aspects of the 21 cm signal from the period of "cosmic dawn": the global 21 cm signal and 21 cm fluctuations. I will discuss what can be learnt about the first galaxies and reionization from this technique and explore some of the challenges and opportunities ahead for the first observations.
# Indirect Detection of Dark Matter - Electroweak Bremsstrahlung and Other Stories
5/13/11
Nicole Bell
(University of Melbourne)
Annihilation of dark matter to fermionic final states is often either helicity or velocity suppressed. We outline the circumstances under which bremsstrahlung processes can remove such suppressions, thereby dramatically improving prospects for indirect detection. In these cases, the three-body final states such as e+e-gamma, e+e-Z and e-nu-W dominate over the two-body annihilation modes. Since the W and Z gauge bosons have large hadronic decay modes, purely leptonic annihilation is impossible if the three-body bremsstrahlung processes dominate. We also discuss dark matter annihilation via metastable mediators, and show that this can lead to greatly enhanced high energy neutrino signals from the Sun.
# The High Altitude Water Cherenkov Gamma-ray Observatory
5/24/11
Miguel Mostafa
The High Altitude Water Cherenkov (HAWC) experiment is a large field of view, continuously operated TeV gamma-ray observatory to be constructed using a dense array of water Cherenkov detectors covering an area greater than 25,000 m^2. HAWC will be located at an elevation of 4,100 m near the Sierra Negra mountain in Mexico. The instrument will use 900 photomultiplier tubes to observe the relativistic particles and secondary gamma rays in extensive air showers. This technique has been used successfully by the Milagro observatory to detect known (as well as new!) TeV sources. HAWC is a natural extension of Milagro, which has demonstrated the ability to detect, at TeV energies, many of the galactic sources which have been observed by the Fermi LAT in the GeV energy range. The design of HAWC was optimized using the lessons learned from Milagro, and will be 15 times more sensitive than Milagro when completed. Improvements in sensitivity, angular resolution, and background rejection will allow HAWC to measure or constrain the TeV spectra of most of the Fermi discovered GeV sources. In addition, above 100 GeV, HAWC will be more sensitive than the Fermi satellite and be the only ground-based instrument capable of detecting prompt emission from gamma-ray bursts in this energy regime. In this seminar I will present the physics motivation, the HAWC observatory, and the activities of my group.
# Supernova Feedback Keeps Galaxies Simple
9/26/11
Sayan Chakraborti
(TIFR, India)
Galaxies are complicated and history-dependent. Yet, recent studies have uncovered surprising correlations among the properties of galaxies. Such simplicity seems, naively, to be at odds with the paradigm of hierarchical galaxy mergers. One of the puzzling results is the simple linear correlation between the neutral hydrogen mass and the surface area, implying that widely different galaxies share very similar neutral hydrogen surface densities. We shall see in this presentation that self-regulated star formation, driven by the competition between gravitational instabilities and mechanical feedback from supernovae, can explain the nearly constant neutral hydrogen surface density across galaxies.
# Weak Lensing Simulations and Precision Cosmology with Large-area Sky Surveys
9/27/11
Matt Becker
(KICP/U. of Chicago)
Weak lensing measurements are an essential part of near- and long-term large-area sky surveys aimed at an array of scientific goals, like understanding Dark Energy, elucidating further the connection between galaxies and dark matter halos, constraining modifications to General Relativity, etc. The weak lensing community has undertaken extensive simulation efforts, both CCD image simulations and computations of the cosmological weak lensing signals from large-scale structure simulations, in order to address the variety of systematic errors which can adversely affect these measurements and their interpretation. The next logical step in this effort is the construction of mock galaxy catalogs with weak lensing shear signals self-consistently from large-scale structure simulations. While these weak lensing mock galaxy catalogs have easily been made for small patches of sky (~10 square degrees), upcoming large-area sky surveys will image thousands of square degrees or more. I will describe a new multiple-plane ray tracing code which is able to produce full-sky weak lensing deflection, convergence, and shear fields suitable for the construction of weak lensing mock galaxy catalogs for large-area sky surveys. I will also highlight the application of this code to the Dark Energy Survey simulation effort. Finally, I will present a prototypical example of these simulation efforts, my recent work on interpreting weak lensing galaxy cluster mass measurements, emphasizing understanding their scatter and more importantly their potential biases. This work, and ongoing work by others in the Dark Energy Survey collaboration based on these new weak lensing mock galaxy catalogs, illustrates the utility of these simulations in understanding systematic errors in current and future weak lensing measurements from large-area sky surveys.
# The Intracluster Medium of Galaxy Clusters
10/11/11
Uri Keshet
(CfA, Harvard)
Recent observations of galaxy clusters reveal new insights into the dynamical and nonthermal processes in the intracluster medium (ICM). Tangential discontinuities are directly seen in high resolution X-ray maps of cool cluster cores, in the form of cold fronts. They reveal bulk shear flows which magnetize the plasma, give rise to radio minihalos, and may play a key role in solving the cooling problem. The ICM shows a rich phenomenology of non-thermal radio emission, arguably arising from hadronic cascades involving cosmic-ray protons. While such a secondary signal is too weak to be observed by Fermi, the primary gamma-ray signal from strong virial shocks may be identifiable.
# Dark Matter Parameters from Neutrino Telescopes
10/18/11
Katie Richardson
(U. of New Mexico)
In this talk, I will discuss how neutrino telescopes may help us extract dark matter parameters and can in fact place the most stringent bounds on the spin-dependent dark matter-nucleon scattering cross-section. In particular the dark matter annihilation final state provides a distinctive signature that allows us to discriminate among classes of dark matter models. Models with gauge boson or tau final states alongside neutrino final states are distinguishable, and the theoretically well-motivated U(1)_B-L extension of the MSSM produces just such a mixture of final states. It is feasible that the energy reconstruction capability of the IceCube neutrino telescope will preserve the important features. Finally, I will address the prospect for differentiating neutrino flavor final states from one another.
# First Cosmic Shear Measurement in SDSS
10/25/11
Eric Huff
(Berkeley)
I discuss preliminary results from a first cosmic shear measurement in SDSS. We have coadded 250 square degrees of multi-epoch SDSS imaging along the celestial equator, optimizing for weak lensing measurement. We employ standard techniques for shape measurement, shear calibration, and inference of the redshift distribution, and perform a wide array of tests that show that the systematic errors for this measurement are probably negligible compared to the statistical errors. We analyze the shear autocorrelation with and without WMAP7 priors, and produce competitive constraints on the matter density and the amplitude of the matter power spectrum at redshift z=0.6.
I will also discuss some new results on lensing magnification. Motivated by the need for greater signal-to-noise in weak lensing measurements, we have used tight photometric galaxy scaling relations to measure a galaxy-galaxy magnification signal with many times the signal-to-noise of previous magnification results. I describe how minor improvements on this work may permit magnification measurements with signal comparable to shear.
# Baryon Acoustic Oscillations: Galaxy Bias Effect and Cosmological Measurements
11/1/11
Kushal Mehta
(Arizona)
I will talk about the work presented in Mehta et al (2011) regarding measuring the effects of galaxy bias on baryon acoustic oscillations (BAO) measurements in cosmological N-body simulations, and the technique of reconstruction used to refine the BAO signal. I will also talk about new SDSS-II LRG (Luminous Red Galaxies) BAO data and the measurements of cosmological parameters. These results will be presented in 3 papers (Padmanabhan et al, Xu et al, and Mehta et al, all in prep).
# Propagation of Ultrahigh Energy Nuclei in the Galactic magnetic field
11/2/11
Gwenael Giacinti
(APC/Paris)
The composition of ultra-high energy cosmic rays (UHECR) at the highest energies is a matter of debate. The measurements from the Auger Observatory would suggest a shift towards heavier nuclei, whereas Telescope Array results can still be compatible with a proton composition. We present simulations for the propagation of ultra-high energy heavy nuclei, with E > 6x10^(19) eV, within recent Galactic Magnetic Field (GMF) models. Differences between the propagation of protons and heavy nuclei in the GMF may provide additional information about the charge composition of UHECRs.
For UHE heavy nuclei primaries, there is no one-to-one correspondence between their arrival directions at Earth and the directions of their extragalactic sources. We show the challenges, and possibilities, of "UHECR astronomy" with heavy nuclei. Finally, we present a quantitative study of the impact of the GMF on the (de-)magnification of source fluxes, due to magnetic lensing effects. For 60 EeV iron nuclei, sources located in up to about one fifth of the sky would have their fluxes so strongly demagnified that they would not be detectable at Earth, even by the next generation of UHECR experiments.
# Exploring the Dark Universe with Gravitational Lensing
11/15/11
Sherry Suyu
(U. of California, Santa Barbara)
Understanding the nature of dark energy and dark matter is one of the biggest challenges in modern cosmology. Strong gravitational lens systems provide a powerful tool for measuring cosmological parameters and for probing dark matter in galaxies. In the first part of my talk, I will show how strong lens systems with measured time delays between the multiple images can be used to determine the "time-delay distance" to the lens. I will present the cosmological constraints, particularly on the Hubble constant and the dark energy equation of state, from a detailed analysis of the gravitational lens B1608+656, and discuss future prospects of time-delay lens cosmography. In the second part of my talk, I will present a joint lensing and kinematics analysis of the spiral gravitational lens B1933+503 at z=0.76 to disentangle the baryons and dark matter in the spiral galaxy and probe the stellar initial mass function.
# Understanding Star-forming Galaxies across Cosmic Time
11/21/11
Matt Bothwell
(U. of Cambridge, UK)
The formation of stars from the interstellar medium is one of the primary drivers of galaxy evolution, and obtaining a full characterization of the processes involved is essential if we are to understand the physics behind the formation of galaxies. Viewing galaxies at high redshift gives us a direct window into the various formation processes, but the importance of a comprehensive understanding of the z~0 Universe cannot be overemphasized, as the early stages of galaxy evolution leave telltale footprints in the properties of local galaxies. I present work examining the star formation laws in galaxies at both low and high redshift. Firstly, I discuss the distribution function of star formation in the local Universe, calculated in a manner analogous to the luminosity function, and its implications for galaxy formation scenarios.
Looking to high redshift, I present molecular gas observations of a sample of z~2 ultra-luminous infrared galaxies (ULIRGs). These observations provide the best view of the star formation and kinematic properties of these enigmatic systems, allowing us to place them into the context of galaxy formation models.
# Core Collapse Supernovae: Black Holes and Neutrinos
11/29/11
Evan O'Connor
(Caltech)
Core-collapse supernovae are some of the most explosive high-energy astrophysical events in our universe. They are the result of the collapse of the iron core in an evolved massive star (M > 8-10 solar masses). The collapse is halted when the collapsing core reaches nuclear densities, at which point the core-collapse supernova central engine takes over. We know that the central engine must eventually drive an explosion in some fraction of massive stars; however, after over 40 years of theoretical research we still do not completely understand this core-collapse supernova mechanism. In this talk, I will review the state of core-collapse supernova theory. I will also discuss our work at Caltech on both the success and failure of the core-collapse supernova mechanism. For the successes, we considered the possibility that collective neutrino oscillations may enhance the neutrino mechanism. If a core-collapse supernova fails, a black hole is the result. I will discuss our predictions for black hole populations from failed supernovae.
# Beyond the Standard Model of Cosmology: Dark Energy, Neutrinos, and Primordial Non-Gaussianity
12/6/11
Shahab Joudaki
(UCI)
Some of the most outstanding problems of physics lie in the understanding of the dark sector of the universe, in particular dark energy, neutrinos, and inflation.
The dark energy and neutrinos are correlated through their effects on distances and the clustering of matter. I will review the present state of surveys sensitive to the effects of dark energy and neutrino mass. I will then forecast how well the present dark energy density and its equation of state along with the sum of neutrino masses may be constrained using multiple probes that are sensitive to the growth of structure and expansion history, in the form of weak lensing tomography, galaxy tomography, supernovae, and the cosmic microwave background. I will include all cross-correlations between these different probes and allow for non-negligible dark energy at early times (motivated by the coincidence problem) in spatially flat and non-flat cosmological models. In the latter portion of the talk, I will discuss a novel method to constrain non-Gaussianity of the primordial density perturbations by its impact on the ionization power spectrum from 21 cm emission during the epoch of reionization. I will show that 21 cm experiments in the near future may constrain inflationary models via primordial non-Gaussianity to the same precision as expected from Planck.
# The LHCf experiment: Verification of high energy cosmic ray interactions
1/10/12
Yoshitaka Itow
(Nagoya University)
Recent progress in air shower observations of the highest energy cosmic rays, with energies of $\sim 10^{20}$ eV, presents us with an enigmatic problem about their origins and propagation. One difficulty is the interpretation of air shower observations, due to the uncertainty of hadron interactions at such high energies. Particle production in the very forward region plays an important role in air shower development, since it carries most of the collision energy. The LHCf experiment is dedicated to measuring spectra of neutral particles in the very forward region of the LHC collision point, in order to verify the interactions of cosmic rays at 10^{17} eV. Data taking has been carried out at $\sqrt{s}$=0.9 TeV and 7 TeV. Results for "inclusive" gamma-ray energy spectra at zero degrees have been obtained. Future plans for very forward measurements in p-A or A-A collisions are also discussed.
Department of Physics | Department of Astronomy, 191 West Woodruff Avenue, Columbus, OH 43210
http://mathoverflow.net/questions/231770/integral-quaternary-forms-and-theta-functions | # Integral quaternary forms and theta functions
The following question arises when I attempt to understand the modular parameterization of the elliptic curve $$E:y^2-y=x^3-x$$
In Mazur-Swinnerton-Dyer and Zagier's construction, a theta function associated with a positive definite quadratic form is introduced:
$$\theta(q)=\sum_{x\in\mathbb{Z}^4}q^{\frac{1}{2}x^{T}Ax}$$
where $$A=\left(\begin{matrix}2 & 0 & 1 & 1\\ 0 & 4 & 1 &2\\ 1 & 1 & 10 & 1\\ 1 & 2 & 1 & 20 \end{matrix}\right)$$
$A$ is a positive definite matrix of determinant $37^2$, and we have $37A^{-1}=K^TAK$ where $K$ is an integral matrix of determinant $\pm 1$.
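As a sanity check (my own sketch, not part of the original question), one can verify numerically that $\det A = 37^2$ and that $37A^{-1}$ is integral with even diagonal, as the claimed identity $37A^{-1}=K^TAK$ requires:

```python
from fractions import Fraction

A = [[2, 0, 1, 1],
     [0, 4, 1, 2],
     [1, 1, 10, 1],
     [1, 2, 1, 20]]

def det(m):
    # Determinant by cofactor expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cofactor(m, i, j):
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]
    return (-1) ** (i + j) * det(minor)

d = det(A)
assert d == 37 ** 2  # determinant is 1369 = 37^2

# 37 * A^{-1} = adj(A) / 37, where adj(A)[i][j] = cofactor(A, j, i)
B = [[Fraction(cofactor(A, j, i), 37) for j in range(4)] for i in range(4)]
assert all(b.denominator == 1 for row in B for b in row)        # integral
assert all(B[i][i].numerator % 2 == 0 for i in range(4))        # even diagonal
```

Working with the adjugate keeps everything in exact rational arithmetic, so no floating-point inverse is needed.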
Question: Suppose $A$ is a positive definite $4\times 4$ matrix with integral entries. All diagonal entries are even numbers. The determinant of $A$ is a square number $N^2$. Is it true that for every $N = p$ ($p > 2$ a prime number), there is at least one $A$ such that $NA^{-1}=K^TAK$, where $K$ is an integral matrix of determinant $\pm 1$?
In your question, do you want to assume $A$ is a $4 \times 4$ matrix? It's a little unclear which parameters depend on which here. – David Loeffler Feb 21 at 18:13
@DavidLoeffler: Yes, $A$ should be a $4\times 4$ matrix. The title of the question is about "quaternary forms", but I think I should emphasize that $A$ is a $4\times 4$ matrix. – zy_ Feb 21 at 19:33
since this question is about explicit modular parametrization, let me put a link to a related question I asked: mathoverflow.net/questions/96621/… – Abdelmalek Abdesselam Feb 22 at 22:49
dunno. few_reps found that all the other forms of discriminant $37^2$ fail. Go figure. Here are all the examples from Nipp's extended tables at Nebe's website. To get the matrix, double f11, f22, f33, f44, but keep the others as they are, and make the matrix symmetric.
lower case g is just a genus ID number. G is the size of the automorphism group.
d g f11 f22 f33 f44 f12 f13 f23 f14 f24 f34 H level = N G m1 m2
9 1 1 1 1 1 1 0 0 0 0 1 -1-1 3 288 1 288
25 1 1 1 2 2 1 1 0 1 1 2 -1-1 5 72 1 72
49 1 1 1 2 2 0 1 0 0 1 0 -1-1 7 32 1 32
121 1 1 1 3 3 0 1 0 0 1 0 -1-1 11 32 25 288
121 1 1 1 4 4 1 1 0 1 1 4 -1-1 11 72 25 288
121 1 2 2 2 2 2 1 0 1 1 2 -1-1 11 24 25 288
169 1 1 2 2 4 1 0 1 1 1 2 -1-1 13 8 1 8
289 1 1 1 6 6 1 1 0 1 1 6 -1-1 17 72 2 9
289 1 1 2 3 5 1 0 2 0 1 3 -1-1 17 8 2 9
289 1 2 2 3 3 2 1 0 1 1 3 -1-1 17 12 2 9
361 1 1 1 5 5 0 1 0 0 1 0 -1-1 19 32 9 32
361 1 1 2 3 6 1 1 0 1 2 3 -1-1 19 8 9 32
361 1 2 2 3 3 0 2 1 1 2 1 -1-1 19 8 9 32
529 1 1 1 6 6 0 1 0 0 1 0 -1-1 23 32 121 288
529 1 1 1 8 8 1 1 0 1 1 8 -1-1 23 72 121 288
529 1 1 2 3 6 0 0 1 1 0 0 -1-1 23 8 121 288
529 1 2 2 3 3 0 1 0 0 1 0 -1-1 23 8 121 288
529 1 2 2 4 4 2 1 0 1 1 4 -1-1 23 12 121 288
529 1 3 3 3 3 3 2 0 2 2 3 -1-1 23 24 121 288
841 1 1 1 10 10 1 1 0 1 1 10 -1-1 29 72 49 72
841 1 1 2 4 8 0 1 1 1 2 1 -1-1 29 8 49 72
841 1 1 3 3 8 1 0 2 0 1 3 -1-1 29 8 49 72
841 1 2 2 4 4 1 1 0 0 -1 2 -1-1 29 4 49 72
841 1 2 2 5 5 2 1 0 1 1 5 -1-1 29 12 49 72
841 1 3 3 4 4 3 2 -1 2 3 3 -1-1 29 12 49 72
961 1 1 1 8 8 0 1 0 0 1 0 -1-1 31 32 25 32
961 1 1 2 4 8 0 0 1 1 0 0 -1-1 31 8 25 32
961 1 1 2 5 9 1 0 2 0 1 5 -1-1 31 8 25 32
961 1 2 2 4 4 0 1 0 0 1 0 -1-1 31 8 25 32
961 1 2 3 4 5 2 0 3 1 2 4 -1-1 31 4 25 32
961 1 3 3 3 3 2 1 0 0 1 -2 -1-1 31 8 25 32
1369 1 1 2 5 10 0 1 1 1 2 1 -1-1 37 8 9 8
1369 1 1 4 5 6 1 0 1 0 4 3 -1-1 37 4 9 8
1369 1 2 2 3 10 1 2 0 1 0 3 -1-1 37 4 9 8
1369 1 2 3 4 5 1 0 3 1 1 2 -1-1 37 2 9 8
1681 1 1 1 14 14 1 1 0 1 1 14 -1-1 41 72 25 18
1681 1 1 2 6 12 1 0 1 1 1 6 -1-1 41 8 25 18
1681 1 1 3 4 11 0 1 2 0 3 1 -1-1 41 8 25 18
1681 1 1 3 4 12 1 1 0 1 3 4 -1-1 41 8 25 18
1681 1 2 2 6 6 1 2 0 1 2 3 -1-1 41 4 25 18
1681 1 2 2 7 7 2 1 0 1 1 7 -1-1 41 12 25 18
1681 1 2 3 4 6 0 2 1 1 3 1 -1-1 41 4 25 18
1681 1 3 3 4 4 1 2 -1 1 -2 2 -1-1 41 4 25 18
1681 1 3 3 5 5 3 2 0 2 2 5 -1-1 41 12 25 18
1681 1 4 4 4 4 4 2 -1 3 2 4 -1-1 41 12 25 18
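To make the recipe concrete (this reconstruction is mine, not part of the answer), the $d=1369$ row with coefficients 1 2 5 10 0 1 1 1 2 1 reproduces exactly the matrix $A$ of the question:

```python
def gram_from_nipp(f11, f22, f33, f44, f12, f13, f23, f14, f24, f34):
    # Nipp's convention: double the diagonal coefficients, keep the
    # off-diagonal ones as they are, and make the matrix symmetric.
    return [[2 * f11, f12, f13, f14],
            [f12, 2 * f22, f23, f24],
            [f13, f23, 2 * f33, f34],
            [f14, f24, f34, 2 * f44]]

# The d = 1369 (= 37^2) entry: f11 f22 f33 f44 f12 f13 f23 f14 f24 f34
A = gram_from_nipp(1, 2, 5, 10, 0, 1, 1, 1, 2, 1)
assert A == [[2, 0, 1, 1],
             [0, 4, 1, 2],
             [1, 1, 10, 1],
             [1, 2, 1, 20]]   # Zagier's matrix from the question
```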
It turns out that the OP will be satisfied with just one form for each of these square discriminants that maps to itself. I should already say that this thing reminds me of Watson transformations. However, few_reps has shown that some of the forms of the genus interchange. It seems this mapping permutes the genus, and one might need to search for a very long time to find a case when this permutation is a derangement.
I have two (infinite sets of) examples that suggest a derangement is going to be hard to find. If prime $q \equiv 3 \pmod 4,$ make a quaternary form out of two copies of the binary $x^2 + xy + \left( \frac{q+1}{4} \right) y^2.$ This works, so half the primes are finished.
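This doubling construction is easy to check by machine. The sketch below is mine, not part of the answer; it takes $K_0 = \bigl(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\bigr)$ per block as one unimodular choice (an assumption on my part, since the answer does not spell out $K$ here) and verifies $(K^TAK)A = qI$, equivalent to $K^TAK = qA^{-1}$, for the first few primes $q \equiv 3 \pmod 4$:

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

def block_diag(B1, B2):
    n = len(B1)
    out = [[0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            out[i][j] = B1[i][j]
            out[n + i][n + j] = B2[i][j]
    return out

K0 = [[0, 1], [-1, 0]]   # unimodular, det = +1

for q in (3, 7, 11, 19, 23):
    # Gram matrix (even-diagonal convention) of x^2 + xy + ((q+1)/4) y^2
    B = [[2, 1], [1, (q + 1) // 2]]
    A = block_diag(B, B)          # quaternary form, det(A) = q^2
    K = block_diag(K0, K0)
    # K^T A K = q * A^{-1}  <=>  (K^T A K) A = q * I
    prod = mat_mul(mat_mul(mat_mul(transpose(K), A), K), A)
    assert prod == [[q if i == j else 0 for j in range(4)] for i in range(4)]
```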
Next, if $p = 6k-1,$ take matrix $$\left( \begin{array}{cccc} 2 & 1 & 1 & 1 \\ 1 & 2 & 0 & 1 \\ 1 & 0 & 4k & 2k \\ 1 & 1 & 2k & 4k \end{array} \right)$$ with determinant $p^2.$ The inverse times $p$ is
$$\left( \begin{array}{rrrr} 4k & -2k & -1 & 0 \\ -2k & 4k & 1 & -1 \\ -1 & 1 & 2 & -1 \\ 0 & -1 & -1 & 2 \end{array} \right)$$ I will need to check for the explicit change of variables matrix, but it looks good.
Got it, in
$$K = \left( \begin{array}{rrrr} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array} \right)$$
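The family above, together with the displayed $K$, can be checked mechanically. The following sketch (mine, not from the answer) verifies the claim in the equivalent integer form $(K^TAK)A = pI$ for the first few values of $k$:

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

K = [[0, 0, 1, 0],
     [0, 0, 0, -1],
     [-1, 0, 0, 0],
     [0, 1, 0, 0]]   # integral with det = +1

for k in range(1, 25):
    p = 6 * k - 1
    A = [[2, 1, 1, 1],
         [1, 2, 0, 1],
         [1, 0, 4 * k, 2 * k],
         [1, 1, 2 * k, 4 * k]]
    # p * A^{-1} = K^T A K  <=>  (K^T A K) A = p * I
    prod = mat_mul(mat_mul(mat_mul(transpose(K), A), K), A)
    assert prod == [[p if i == j else 0 for j in range(4)] for i in range(4)]
```

Phrasing the identity as $(K^TAK)A = pI$ keeps the check in pure integer arithmetic, so no rational inverse is ever formed.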
Following Zagier, I can cover $p=8k-3$ with $$A = \left( \begin{array}{rrrr} 2 & 0 & 1 & 1 \\ 0 & 4 & 1 & -2 \\ 1 & 1 & 2k & 0 \\ 1 & -2 & 0 & 4k \end{array} \right)$$ – zy_ Feb 22 at 5:16
@zy_ in that case, I would expect that enough effort would produce recipes for $5k + 2,3$, with more work $7k+3,5,6.$ This is not the sort of problem where a finite number of such cases are going to finish the job. I did not see that Zagier really cared about all positive quaternaries of a given discriminant, just some compatible with the modular forms material. – Will Jagy Feb 22 at 5:45
Let $X_p$ be the set of isometry classes of 4-dimensional positive definite lattices satisfying the property $L^\sharp/L\simeq (\mathbf Z/p)^2$. The set $X_p$ is stable under the involution $\tau : L\mapsto pL^\sharp$.
The question seems to ask whether or not $X_p^\tau$ contains an even lattice. Will Jagy has given evidence for a positive answer. Nevertheless, one might ask whether the same holds when one restricts the question to a given genus (genera are also stable under $\tau$), even or odd.
Further edit Feb. 22 : Interestingly, the case of odd lattices seems to allow the opposite behaviour:
p= 7 A=
[ 7 0 0 0]
[ 0 3 1 -1]
[ 0 1 2 -1]
[ 0 -1 -1 2]
fix= 0 move= 2 proportion= 0.000
p= 23 A=
[23 0 0 0]
[ 0 5 -1 0]
[ 0 -1 3 -1]
[ 0 0 -1 2]
fix= 0 move= 4 proportion= 0.000
p= 31 A=
[31 0 0 0]
[ 0 6 -1 -1]
[ 0 -1 3 0]
[ 0 -1 0 2]
fix= 0 move= 8 proportion= 0.000
p= 47 A=
[47 0 0 0]
[ 0 6 -2 1]
[ 0 -2 5 0]
[ 0 1 0 2]
fix= 0 move= 12 proportion= 0.000
p= 71 A=
[71 0 0 0]
[ 0 7 1 1]
[ 0 1 6 1]
[ 0 1 1 2]
fix= 0 move= 20 proportion= 0.000
p= 79 A=
[79 0 0 0]
[ 0 6 0 1]
[ 0 0 5 -1]
[ 0 1 -1 3]
fix= 0 move= 26 proportion= 0.000
p= 103 A=
[103 0 0 0]
[ 0 18 -1 -1]
[ 0 -1 3 0]
[ 0 -1 0 2]
fix= 0 move= 50 proportion= 0.000
p= 127 A=
[127 0 0 0]
[ 0 10 -2 1]
[ 0 -2 5 -1]
[ 0 1 -1 3]
fix= 0 move= 62 proportion= 0.000
p= 151 A=
[151 0 0 0]
[ 0 11 -1 0]
[ 0 -1 5 1]
[ 0 0 1 3]
fix= 0 move= 78 proportion= 0.000
p= 167 A=
[167 0 0 0]
[ 0 18 -2 -1]
[ 0 -2 5 0]
[ 0 -1 0 2]
fix= 0 move= 92 proportion= 0.000
p= 191 A=
[191 0 0 0]
[ 0 14 0 1]
[ 0 0 5 -1]
[ 0 1 -1 3]
fix= 0 move= 114 proportion= 0.000
p= 199 A=
[199 0 0 0]
[ 0 7 1 1]
[ 0 1 6 0]
[ 0 1 0 5]
fix= 0 move= 134 proportion= 0.000
p= 223 A=
[223 0 0 0]
[ 0 11 3 -3]
[ 0 3 6 -2]
[ 0 -3 -2 5]
fix= 0 move= 182 proportion= 0.000
p= 239 A=
[239 0 0 0]
[ 0 19 2 0]
[ 0 2 7 -1]
[ 0 0 -1 2]
fix= 0 move= 172 proportion= 0.000
Edit Feb. 21: Here are some heuristics. In each case, there is a quadratic space $V$ on which some lattices $L$ with $q(L)\subset \mathbf Z$ furnish Gram matrices $A$ such as required in the OP (recall that in that case, the Gram matrix is associated to the bilinear form $x.y=q(x+y)-q(x)-q(y)$; in particular it has even diagonal entries).
Let $p$ be a prime, congruent to $3$ mod $4$. Then there is such a genus on the $\mathbf Q$-quadratic space $[1,1,p,p]$. Here is the number of fixed (resp exchanged) lattices:
p= 3 fix= 1 move= 0 proportion= 1.00
p= 7 fix= 1 move= 0 proportion= 1.00
p= 11 fix= 3 move= 0 proportion= 1.00
p= 19 fix= 3 move= 0 proportion= 1.00
p= 23 fix= 6 move= 0 proportion= 1.00
p= 31 fix= 6 move= 0 proportion= 1.00
p= 43 fix= 5 move= 2 proportion= 0.714
p= 47 fix= 15 move= 0 proportion= 1.00
p= 59 fix= 21 move= 0 proportion= 1.00
p= 67 fix= 7 move= 6 proportion= 0.538
p= 71 fix= 28 move= 0 proportion= 1.00
p= 79 fix= 20 move= 2 proportion= 0.909
p= 83 fix= 27 move= 2 proportion= 0.931
p= 103 fix= 25 move= 6 proportion= 0.807
p= 107 fix= 33 move= 6 proportion= 0.846
p= 127 fix= 30 move= 12 proportion= 0.714
p= 131 fix= 65 move= 2 proportion= 0.970
p= 139 fix= 39 move= 12 proportion= 0.765
p= 151 fix= 49 move= 12 proportion= 0.804
p= 163 fix= 15 move= 42 proportion= 0.263
p= 167 fix= 88 move= 6 proportion= 0.937
p= 179 fix= 85 move= 12 proportion= 0.876
p= 191 fix= 117 move= 6 proportion= 0.951
p= 199 fix= 81 move= 20 proportion= 0.802
p= 211 fix= 57 move= 42 proportion= 0.576
p= 223 fix= 70 move= 42 proportion= 0.625
p= 227 fix= 105 move= 30 proportion= 0.777
p= 239 fix= 165 move= 12 proportion= 0.933
p= 251 fix= 161 move= 20 proportion= 0.890
p= 263 fix= 156 move= 30 proportion= 0.839
p= 271 fix= 132 move= 42 proportion= 0.759
p= 283 fix= 75 move= 90 proportion= 0.455
p= 307 fix= 81 move= 110 proportion= 0.424
p= 311 fix= 266 move= 20 proportion= 0.930
p= 331 fix= 87 move= 132 proportion= 0.397
p= 347 fix= 155 move= 110 proportion= 0.585
p= 359 fix= 304 move= 42 proportion= 0.879
p= 367 fix= 144 move= 132 proportion= 0.521
p= 379 fix= 99 move= 182 proportion= 0.353
p= 383 fix= 289 move= 72 proportion= 0.801
p= 419 fix= 333 move= 90 proportion= 0.787
p= 431 fix= 399 move= 72 proportion= 0.847
p= 439 fix= 285 move= 132 proportion= 0.684
p= 443 fix= 195 move= 210 proportion= 0.481
p= 463 fix= 140 move= 272 proportion= 0.340
p= 467 fix= 287 move= 182 proportion= 0.612
p= 479 fix= 525 move= 72 proportion= 0.880
p= 487 fix= 147 move= 306 proportion= 0.325
p= 491 fix= 387 move= 156 proportion= 0.713
p= 499 fix= 129 move= 342 proportion= 0.274
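For readers reproducing these tables: the `proportion` column is consistent with fix/(fix + move) rounded to three decimals, i.e. the fraction of classes fixed by the involution. This formula is inferred from the data above, not stated explicitly; a quick cross-check in Python:

```python
# Cross-check a few rows from the table above: proportion appears to be
# fix / (fix + move), rounded to three decimal places.
rows = [
    (43, 5, 2, 0.714),
    (67, 7, 6, 0.538),
    (163, 15, 42, 0.263),
    (499, 129, 342, 0.274),
]

for p, fix, move, reported in rows:
    computed = round(fix / (fix + move), 3)
    assert computed == reported, (p, computed, reported)
print("all sampled rows match fix/(fix+move)")
```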
Similarly, in the following cases, there exists such a genus on $A_2\perp p.A_2$:
p= 5 fix= 1 move= 0 proportion= 1.00
p= 11 fix= 3 move= 0 proportion= 1.00
p= 17 fix= 3 move= 0 proportion= 1.00
p= 23 fix= 6 move= 0 proportion= 1.00
p= 29 fix= 6 move= 0 proportion= 1.00
p= 41 fix= 10 move= 0 proportion= 1.00
p= 47 fix= 15 move= 0 proportion= 1.00
p= 53 fix= 9 move= 2 proportion= 0.818
p= 59 fix= 21 move= 0 proportion= 1.00
p= 71 fix= 28 move= 0 proportion= 1.00
p= 83 fix= 27 move= 2 proportion= 0.931
p= 89 fix= 27 move= 2 proportion= 0.931
p= 101 fix= 35 move= 2 proportion= 0.946
p= 107 fix= 33 move= 6 proportion= 0.846
p= 113 fix= 22 move= 12 proportion= 0.647
p= 131 fix= 65 move= 2 proportion= 0.970
p= 137 fix= 26 move= 20 proportion= 0.565
p= 149 fix= 49 move= 12 proportion= 0.804
p= 167 fix= 88 move= 6 proportion= 0.937
p= 173 fix= 56 move= 20 proportion= 0.737
p= 179 fix= 85 move= 12 proportion= 0.876
p= 191 fix= 117 move= 6 proportion= 0.951
p= 197 fix= 45 move= 42 proportion= 0.518
p= 227 fix= 105 move= 30 proportion= 0.777
p= 233 fix= 63 move= 56 proportion= 0.529
p= 239 fix= 165 move= 12 proportion= 0.933
p= 251 fix= 161 move= 20 proportion= 0.890
p= 257 fix= 92 move= 56 proportion= 0.622
p= 263 fix= 156 move= 30 proportion= 0.839
p= 269 fix= 132 move= 42 proportion= 0.759
p= 281 fix= 125 move= 56 proportion= 0.690
p= 293 fix= 117 move= 72 proportion= 0.619
Here is a final example: on the $\mathbf Q$-quadratic space $\langle 1,2\rangle\perp p.\langle 1,2\rangle$ (here, the example in the OP, developed in my first answer, appears):
p= 5 fix= 1 move= 0 proportion= 1.00
p= 7 fix= 1 move= 0 proportion= 1.00
p= 13 fix= 1 move= 0 proportion= 1.00
p= 23 fix= 6 move= 0 proportion= 1.00
p= 29 fix= 6 move= 0 proportion= 1.00
p= 31 fix= 6 move= 0 proportion= 1.00
p= 37 fix= 2 move= 2 proportion= 0.500
p= 47 fix= 15 move= 0 proportion= 1.00
p= 53 fix= 9 move= 2 proportion= 0.818
p= 61 fix= 9 move= 2 proportion= 0.818
p= 71 fix= 28 move= 0 proportion= 1.00
p= 79 fix= 20 move= 2 proportion= 0.909
p= 101 fix= 35 move= 2 proportion= 0.946
p= 103 fix= 25 move= 6 proportion= 0.807
p= 109 fix= 15 move= 12 proportion= 0.556
p= 127 fix= 30 move= 12 proportion= 0.714
p= 149 fix= 49 move= 12 proportion= 0.804
p= 151 fix= 49 move= 12 proportion= 0.804
p= 157 fix= 21 move= 30 proportion= 0.412
p= 167 fix= 88 move= 6 proportion= 0.937
p= 173 fix= 56 move= 20 proportion= 0.737
p= 181 fix= 40 move= 30 proportion= 0.571
p= 191 fix= 117 move= 6 proportion= 0.951
p= 197 fix= 45 move= 42 proportion= 0.518
p= 199 fix= 81 move= 20 proportion= 0.802
p= 223 fix= 70 move= 42 proportion= 0.625
p= 229 fix= 50 move= 56 proportion= 0.472
p= 239 fix= 165 move= 12 proportion= 0.933
p= 263 fix= 156 move= 30 proportion= 0.839
p= 269 fix= 132 move= 42 proportion= 0.759
p= 271 fix= 132 move= 42 proportion= 0.759
p= 277 fix= 36 move= 110 proportion= 0.247
p= 293 fix= 117 move= 72 proportion= 0.619
p= 311 fix= 266 move= 20 proportion= 0.930
p= 317 fix= 70 move= 132 proportion= 0.347
p= 349 fix= 105 move= 132 proportion= 0.443
p= 359 fix= 304 move= 42 proportion= 0.879
p= 367 fix= 144 move= 132 proportion= 0.521
p= 373 fix= 80 move= 182 proportion= 0.305
p= 383 fix= 289 move= 72 proportion= 0.801
p= 389 fix= 187 move= 132 proportion= 0.586
p= 397 fix= 51 move= 240 proportion= 0.175
p= 421 fix= 90 move= 240 proportion= 0.273
p= 431 fix= 399 move= 72 proportion= 0.847
p= 439 fix= 285 move= 132 proportion= 0.684
p= 461 fix= 300 move= 156 proportion= 0.658
p= 463 fix= 140 move= 272 proportion= 0.340
p= 479 fix= 525 move= 72 proportion= 0.880
p= 487 fix= 147 move= 306 proportion= 0.325
Studying these distributions seems quite interesting.
Original post:
The answer is no: if you run the following Magma code
// Gram matrix of an even quaternary lattice of determinant 37^2
M:=Matrix(Integers(),4,4,[1,0,1,1,0,2,1,2,0,0,5,1,0,0,0,10]);
M:=M+Transpose(M);
L:=LatticeWithGram(M);
// one representative per isometry class in the genus of L
H:=GenusRepresentatives(L);
for h in H do
print "h= lattice with Gram", GramMatrix(h);
// rescaled dual: multiply the dual Gram matrix by 37, then LLL-reduce it
hd:=DualBasisLattice(h);
MD:=37*LLLGram(GramMatrix(hd));
print "rescaled dual = lattice with Gram", MD;
a,b:=IsIsometric(h,LatticeWithGram(MD));
print "are isometric : ", a;
print " ";
end for;
on the online calculator, you obtain the following result:
h= lattice with Gram
[ 2 0 1 1]
[ 0 4 1 2]
[ 1 1 10 1]
[ 1 2 1 20]
rescaled dual = lattice with Gram
[ 2 0 -1 -1]
[ 0 4 -1 -2]
[-1 -1 10 1]
[-1 -2 1 20]
are isometric : true
h= lattice with Gram
[ 4 -1 2 1]
[-1 4 -1 0]
[ 2 -1 6 -2]
[ 1 0 -2 20]
rescaled dual = lattice with Gram
[ 2 1 -1 0]
[ 1 8 -4 1]
[-1 -4 12 2]
[ 0 1 2 10]
are isometric : false
h= lattice with Gram
[ 4 1 1 1]
[ 1 6 3 1]
[ 1 3 8 -1]
[ 1 1 -1 10]
rescaled dual = lattice with Gram
[ 4 1 -1 -1]
[ 1 6 -3 -1]
[-1 -3 8 -1]
[-1 -1 -1 10]
are isometric : true
h= lattice with Gram
[ 2 1 0 -1]
[ 1 8 -1 -4]
[ 0 -1 10 -2]
[-1 -4 -2 12]
rescaled dual = lattice with Gram
[ 4 1 1 0]
[ 1 4 2 1]
[ 1 2 6 -2]
[ 0 1 -2 20]
are isometric : false
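The genus enumeration and isometry tests above require Magma, but the starting lattice can be sanity-checked in plain Python: the symmetrized matrix from the snippet is a symmetric, even Gram matrix of determinant $37^2 = 1369$. A sketch using only the standard library:

```python
# Rebuild the Gram matrix from the Magma snippet and check that it is
# symmetric, even (even diagonal), and of determinant 37^2 = 1369.
M = [[1, 0, 1, 1],
     [0, 2, 1, 2],
     [0, 0, 5, 1],
     [0, 0, 0, 10]]

# G := M + M^T, as in the Magma code
G = [[M[i][j] + M[j][i] for j in range(4)] for i in range(4)]

def det(A):
    """Integer determinant by cofactor expansion (fine for a 4x4 matrix)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

assert all(G[i][j] == G[j][i] for i in range(4) for j in range(4))
assert all(G[i][i] % 2 == 0 for i in range(4))
assert det(G) == 37 ** 2  # 1369
print("Gram matrix is symmetric, even, det =", det(G))
```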
Eh..... Why do you think the answer is "no" for determinant $37^2$? At least two matrices in your calculation support my idea. – zy_ Feb 21 at 20:09
hmm sorry, I misunderstood the question ... I'm gonna modify the algorithm to search for a counter-example to your very question ... – few_reps Feb 21 at 20:13
@zy_, you did not word this well. If all you want is one $A$ of the discriminant, whenever $p \equiv 3 \pmod 4$ there will always be one, because there is a quaternary made from the sum of two identical binaries. – Will Jagy Feb 21 at 20:16
@zy_ the identical binaries can be the principal form $x^2 + xy + \left( \frac{p+1}{4} \right) y^2$ – Will Jagy Feb 21 at 20:26
@WillJagy Yes, it is an involution since $(L^\sharp)^\sharp=L$ and $(pL)^\sharp=\frac 1p L^\sharp$. – few_reps Feb 21 at 23:44 | 2016-07-24 02:55:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4802798926830292, "perplexity": 2134.429069121452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823935.18/warc/CC-MAIN-20160723071023-00036-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://zbmath.org/authors/das.amitabha | ## Das, Amitabha
Author ID: das.amitabha
Published as: Das, Amitabha
Documents Indexed: 71 Publications since 1957, including 4 Books
Co-Authors: 14 Co-Authors with 8 Joint Publications; 164 Co-Co-Authors
### Co-Authors
18 single-authored 3 Thulasiraman, Krishnaiyan “KT” 2 Agarwal, Vinod Kumar 2 Bhaumik, Rabi Nanda 2 Ferbel, T. 2 Gegenberg, Jack D. 2 Li, Zhuowei 2 Wichmann, G. 1 Ahmed, S. Reaz 1 Ang, Ee Luang 1 Banerjee, Rabindra N. 1 Banks, Harvey Thomas 1 Bromberg, Carla 1 Chakraborty, Bulbul 1 Chaudhari, Narendra S. 1 Chong, Man Nang 1 Chongdar, Kumar 1 Cioranescu, Doina 1 Coffman, Charles V. 1 Debnath, Narayan Chandra 1 Emmanuel, Sabu 1 Florides, Petros S. 1 Gongxuan, Yao 1 Jayaram, Jayanth 1 Joseph, John Felix Charles 1 Krishna, T. Vamsi 1 Lakshmanan, K. B. 1 Lee, Bu-Sung 1 Longo, José M. A. 1 Maiti, Jyotirmoy 1 Makogin, Vitalii 1 Mazumdar, Himadri Pai 1 Melliar-Smith, P. Michael 1 Moser, Louise E. 1 Nath, N. G. 1 Patra, Jagdish Chandra 1 Pechlaner, Edgar 1 Rebnord, D. A. 1 Santra, K. C. 1 Schlichting, Hermann 1 Seet, Boon-Chong 1 Sood, Arun K. 1 Spodarev, Evgueni 1 Synge, John Lighton 1 Wang, Guandong 1 Wotzasek, Clovis 1 Zhou, Jianying
### Serials
8 Journal of Mathematical Physics 3 Acta Astronautica 3 General Relativity and Gravitation 2 Journal of Sound and Vibration 2 Zeitschrift für Angewandte Mathematik und Mechanik (ZAMM) 2 The Mathematics Student 2 International Journal of Production Research 2 Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 2 Progress of Theoretical Physics 2 Computer Networks 2 Il Nuovo Cimento, X. Series 1 Modern Physics Letters B 1 International Journal of Non-Linear Mechanics 1 Indian Journal of Pure & Applied Mathematics 1 International Journal of Systems Science 1 Journal of Technical Physics 1 ZAMP. Zeitschrift für angewandte Mathematik und Physik 1 Acta Ciencia Indica. Mathematics 1 SIAM Journal on Computing 1 Tensor. New Series 1 Computational Mechanics 1 Revista de la Academia Canaria de Ciencias 1 Applied Mathematical Modelling 1 Nuclear Physics. B. Proceedings Supplements 1 Theory of Probability and Mathematical Statistics 1 Aerospace Science and Technology 1 Markov Processes and Related Fields 1 Communications in Nonlinear Science and Numerical Simulation 1 Ultra Scientist of Physical Sciences 1 DGDS. Differential Geometry – Dynamical Systems 1 IEEE Transactions on Image Processing 1 Kragujevac Journal of Mathematics 1 EURASIP Journal on Applied Signal Processing 1 Sādhanā 1 Mediterranean Journal of Mathematics 1 Journal of Algebra and Related Topics
### Fields
12 Computer science (68-XX) 8 Quantum theory (81-XX) 7 Mechanics of deformable solids (74-XX) 7 Relativity and gravitational theory (83-XX) 6 Partial differential equations (35-XX) 6 Fluid mechanics (76-XX) 5 Differential geometry (53-XX) 4 Mechanics of particles and systems (70-XX) 4 Systems theory; control (93-XX) 4 Information and communication theory, circuits (94-XX) 2 Combinatorics (05-XX) 2 Real functions (26-XX) 2 Special functions (33-XX) 2 Ordinary differential equations (34-XX) 2 Dynamical systems and ergodic theory (37-XX) 2 Statistics (62-XX) 2 Numerical analysis (65-XX) 2 Statistical mechanics, structure of matter (82-XX) 2 Astronomy and astrophysics (85-XX) 2 Operations research, mathematical programming (90-XX) 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 1 Mathematical logic and foundations (03-XX) 1 Nonassociative rings and algebras (17-XX) 1 Group theory and generalizations (20-XX) 1 Calculus of variations and optimal control; optimization (49-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Probability theory and stochastic processes (60-XX) 1 Optics, electromagnetic theory (78-XX) 1 Biology and other natural sciences (92-XX)
### Citations contained in zbMATH Open
28 Publications have been cited 1 time in 1 Document.
A class of exact solutions of certain classical field equations in general relativity. Zbl 0103.21403
Das, A.
1962
Stationary Riemannian space-times with self-dual curvature. Zbl 0545.53039
Gegenberg, J. D.; Das, A.
1984
A class of exact plane wave solutions of the Maxwell-Dirac equations. Zbl 0701.58059
Das, A.; Kay, D.
1989
General solutions of Maxwell-Dirac equations in $$1+1$$-dimensional space- time and a spatially confined solution. Zbl 0780.35107
Das, A.
1993
An exact stationary solution of the combined Einstein-Maxwell-Klein- Gordon equations. Zbl 0466.53016
Gegenberg, J. D.; Das, A.
1981
Cellular space-time and quantum field theory. Zbl 0094.42703
Das, A.
1960
Stationary weak gravitational fields to any order of approximation. Zbl 0096.43105
Das, A.; Florides, P. S.; Synge, J. L.
1961
A class of eigenvalues of the fine-structure constant and internal energy obtained from a class of exact solutions of the combined Klein-Gordon- Maxwell-Einstein field equations. Zbl 0157.32204
Das, A.; Coffman, C. V.
1967
Diagnosis of $$t/(t+1)$$-diagnosable systems. Zbl 0822.68010
Das, A.; Thulasiraman, K.; Agarwal, V. K.
1994
Complex scalar field in general relativity. Zbl 0114.21204
Das, A.
1963
On the stability and control of the Schimizu-Morioka system of dynamical equations. Zbl 1205.37023
Islam, N.; Mazumdar, H. P.; Das, A.
2009
A Bayesian finite element model updating with combined normal and lognormal probability distributions using modal measurements. Zbl 1462.62175
Das, A.; Debnath, N.
2018
Determining number of some families of cubic graphs. Zbl 1474.05176
Das, A.; Saha, M.
2020
Introduction to nuclear and particle physics. 2nd ed. Zbl 1060.81001
Das, A.; Ferbel, T.
2003
Numerical computation of vortical flow fields of double-delta wings moving in a compressible viscous medium. Zbl 0809.76074
Das, A.; Longo, J. M. A.
1994
An ongoing big bang model in the special relativistic Maxwell-Dirac equations. Zbl 0865.35131
Das, A.
1996
Generalized Korteweg-de Vries equation induced from position-dependent effective mass quantum models and mass-deformed soliton solution through inverse scattering transform. Zbl 1319.35217
Ganguly, A.; Das, A.
2014
Detailed study of complex flow fields of aerodynamical configurations by using numerical methods. Zbl 1075.76654
Das, A.
1994
Explicit solutions and stability analysis of the $$(2+1)$$ dimensional KP-BBM equation with dispersion effect. Zbl 1440.35292
Ganguly, A.; Das, A.
2015
Analysis of spiraling vortical flows around slender delta wings moving in an inviscid medium. Zbl 0802.76011
Das, A.
1991
Rule mining for dynamic databases. Zbl 1113.68376
Das, A.; Bhattacharyya, D. K.
2004
Note on the forced vibration of an orthotropic plate on an elastic foundation. Zbl 0421.73064
Das, A.; Roy, S. K.
1979
The role of preference relations in mathematical economics. Zbl 0856.90007
Bhaumik, R. N.; Das, A.
1990
Homogenization techniques and estimation of material parameters in distributed structures. Zbl 0778.93046
Banks, H. T.; Miller, R. E.; Cioranescu, D.; Das, A.; Rebnord, D. A.
1991
Constructions of Markov processes in random environments which lead to a product form of the stationary measure. Zbl 1379.60086
Das, A.
2017
Quantized phase space and relativistic quantum theory. Zbl 0959.81557
Das, A.
1989
Socio-technical perspective on manufacturing system synergies. Zbl 1128.90359
Das, A.; Jayaram, J.
2007
A survey on MAC protocols in OSA networks. Zbl 1188.68071
Krishna, T. Vamsi; Das, Amitabha
2009
### Cited by 4 Authors
1 Kasahara, Shoji 1 Katayama, Haruki 1 Masuyama, Hiroyuki 1 Takahashi, Yutaka
### Cited in 1 Serial
1 Journal of Industrial and Management Optimization
### Cited in 2 Fields
1 Probability theory and stochastic processes (60-XX) 1 Computer science (68-XX) | 2023-03-28 17:29:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5631304979324341, "perplexity": 12975.565731995806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948868.90/warc/CC-MAIN-20230328170730-20230328200730-00265.warc.gz"} |
https://www.sparrho.com/item/method-and-apparatus-for-multiple-description-video-coding/df685b/ | # Method and apparatus for multiple description video coding
Imported: 17 Feb '17 | Published: 23 Sep '14
USPTO - Utility Patents
## Abstract
A method and apparatus for utilizing temporal prediction and motion compensated prediction to accomplish multiple description video coding is disclosed. An encoder receives a sequence of video frames and divides each frame into non-overlapping macroblocks. Each macroblock is then encoded using either an intraframe mode (I-mode) or a prediction mode (P-mode) technique. Both the I-mode and the P-mode encoding techniques produce an output for each of n channels used to transmit the encoded video data.
## Description
### CLAIM OF PRIORITY
The present application is a continuation of U.S. patent application Ser. No. 12/872,152, filed Aug. 31, 2010, which is a continuation of U.S. patent application Ser. No. 11/050,570, filed Feb. 3, 2005 (now U.S. Pat. No. 7,813,427), which is a continuation of U.S. patent application Ser. No. 10/350,537, filed Jan. 23, 2003 (now U.S. Pat. No. 6,920,177), which is a divisional application of U.S. application Ser. No. 09/478,002, filed Jan. 5, 2000 (now U.S. Pat. No. 6,556,624), which claims the benefit of and priority from U.S. Provisional Application Ser. No. 60/145,852 entitled “Method and Apparatus for Accomplishing Multiple Description Coding for Video,” filed Jul. 27, 1999.
### FIELD OF THE DISCLOSURE
The present disclosure relates to video coding. More particularly, the present disclosure relates to a method for utilizing temporal prediction and motion compensated prediction to perform multiple description video coding.
### BACKGROUND
Most of today's video coding standards use block-based motion compensated prediction because of its success in achieving a good balance between coding efficiency and implementation complexity.
Multiple Description Coding (“MDC”) is a source coding method that increases the reliability of a communication system by decomposing a source into multiple bitstreams and then transmitting the bitstreams over separate, independent channels. An MDC system is designed so that, if all channels are received, a very good reconstruction can be made. However, if some channels are not received, a reasonably good reconstruction can still be obtained. In commonly assigned U.S. patent application Ser. No. 08/179,416, a generic method for MDC using a pairwise correlating transform (referred to as “MDTC”) is described. This generic method is designed under the assumption that the inputs are a set of Gaussian random variables. A method for applying this approach to image coding is also described. A subsequent and similarly commonly assigned U.S. Provisional Application Ser. No. 60/145,937 describes a generalized MDTC method.
Unfortunately, in existing video coding systems, when not all of the bitstream data sent over the separate channels is received, the quality of the reconstructed video sequence suffers. Likewise, as the amount of bitstream data that is not received increases, the quality of the reconstruction that can be obtained from the received bitstream decreases rapidly.
Accordingly, there is a need in the art for a new approach for coding a video sequence into two descriptions using temporal prediction and motion compensated prediction to improve the quality of the reconstructions that can be achieved when only one of the two descriptions is received.
### DETAILED DESCRIPTION
Embodiments of the present invention provide a block-based motion-compensated predictive coding framework for realizing MDC, which includes two working modes: Intraframe Mode (I-mode) and Prediction Mode (P-mode). Coding in the P-mode involves the coding of the prediction errors and estimation/coding of motion. In addition, for both the I-mode and P-mode, the MDTC scheme has been adapted to code a block of Discrete Cosine Transform (“DCT”) coefficients.
Embodiments of the present invention provide a system and method for encoding a sequence of video frames. The system and method receive the sequence of video frames and then divide each video frame into a plurality of macroblocks. Each macroblock is then encoded using at least one of the I-mode technique and the P-mode technique, where, for n channels, the prediction mode technique generates at least n+1 prediction error signals for each block. The system and method then provide the I-mode encoded data and the at least n+1 P-mode prediction error signals, divided among the n channels used to transmit the encoded video frame data.
The Overall Coding Framework
In accordance with an embodiment of the present invention, a multiple description (“MD”) video coder is developed using conventional block-based motion compensated prediction. In this embodiment, each video frame is divided into non-overlapping macroblocks, which are then coded in either the I-mode or the P-mode. In the I-mode, the color values of each macroblock are directly transformed using a Discrete Cosine Transform (“DCT”) and the resulting DCT coefficients are quantized and entropy coded. In the P-mode, a motion vector, which describes the displacement between the spatial position of the current macroblock and the best matching macroblock, is first found and coded. Then the prediction error is coded using the DCT. Additional side information describing the coding mode and relevant coding parameters is also coded.
An embodiment of an overall MDC framework of the present invention is shown in FIG. 1 and is similar to the conventional video coding scheme using block-based motion compensated predictive coding. In FIG. 1, an input analog video signal is received in an analog-to-digital (“A/D”) converter (not shown) and each frame from the input analog video signal is digitized and divided into non-overlapping blocks of approximately uniform size as illustrated in FIG. 8. Although shown as such in FIG. 8, the use of non-overlapping macroblocks of approximately uniform size is not required by the present invention and alternative embodiments of the present invention are contemplated in which non-overlapping macroblocks of approximately uniform size are not used. For example, in one contemplated alternative embodiment, each digitized video frame is divided into overlapping macroblocks having non-uniform sizes. Returning to FIG. 1, each input macroblock X 100 is input to a mode selector 110 and then the mode selector selectively routes the input macroblock X 100 for coding in one of the two modes using switch 112 by selecting either channel 113 or channel 114. Connecting switch 112 to channel 113 enables I-mode coding in an I-mode MDC 120, and connecting switch 112 with channel 114 enables P-mode coding in a P-mode MDC 130. In the I-mode MDC 120, the color values of the macroblock are coded directly into two descriptions, description 1 122 and description 2 124, using either the MDTC method; the generalized MDTC method described in co-pending U.S. patent application Ser. No. 08/179,416; Vaishampayan's Multiple Description Scalar Quantizer (“MDSQ”); or any other multiple description coding technique. In the P-mode MDC 130, the macroblock is first predicted from previously coded frames and two (2) descriptions are produced, description 1 132 and description 2 134.
Although shown as being output on separate channels, embodiments of the present invention are contemplated in which the I-mode description 1 122 and the P-mode description 1 132 are output to a single channel. Similarly, embodiments are contemplated in which the I-mode description 2 124 and the P-mode description 2 134 are output to a single channel.
In FIG. 1, the mode selector 110 is connected to a redundancy allocation unit 140 and the redundancy allocation unit 140 communicates signals to the mode selector 110 to control the switching of switch 112 between channel 113 for the I-mode MDC 120 and channel 114 for the P-mode MDC 130. The redundancy allocation unit 140 is also connected to the I-mode MDC 120 and the P-mode MDC 130 to provide inputs to control the redundancy allocation between motion and prediction error. A rate control unit 150 is connected to the redundancy allocation unit 140, the mode selector 110, the I-mode MDC 120 and the P-mode MDC 130. A set of frame buffers 160 is also connected to the mode selector 110 for storing previously reconstructed frames from the P-mode MDC 130 and for providing macroblocks from the previously reconstructed frames back to the P-mode MDC 130 for use in encoding and decoding the subsequent macroblocks.
In an embodiment of the present invention, a block-based uni-directional motion estimation method is used, in which, the prediction macroblock is determined from a previously decoded frame. Two types of information are coded: i) the error between the prediction macroblock and the actual macroblock, and ii) the motion vector, which describes the displacement between the spatial position of the current macroblock and the best matching macroblock. Both are coded into two descriptions. Because the decoder may have either both descriptions or one of the two descriptions, the encoder has to take this fact into account in coding the prediction error. The proposed framework for realizing MDC in the P-mode is described in more detail below.
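The motion search described above can be sketched as an exhaustive SAD (sum of absolute differences) block match; the block size, search range, and toy frames below are illustrative only, not the patent's parameters:

```python
def sad(ref, cur, bx, by, dx, dy, B):
    """Sum of absolute differences between the current BxB block at (bx, by)
    and the reference block displaced by (dx, dy)."""
    return sum(abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
               for j in range(B) for i in range(B))

def motion_vector(ref, cur, bx, by, B=2, R=2):
    """Exhaustive search over displacements in [-R, R]^2."""
    best = None
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            # keep the candidate block inside the reference frame
            if not (0 <= bx + dx and bx + dx + B <= len(ref[0])
                    and 0 <= by + dy and by + dy + B <= len(ref)):
                continue
            cost = sad(ref, cur, bx, by, dx, dy, B)
            if best is None or cost < best[0]:
                best = (cost, (dx, dy))
    return best[1]

# Toy frames: a bright 2x2 patch moves one pixel right and one pixel down.
ref = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
for j in range(2):
    for i in range(2):
        ref[1 + j][1 + i] = 9
        cur[2 + j][2 + i] = 9

# The current block at (2, 2) is best predicted from ref displaced by (-1, -1).
assert motion_vector(ref, cur, 2, 2) == (-1, -1)
print("motion vector:", motion_vector(ref, cur, 2, 2))
```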
Note that the use of I-mode coding enables the system to recover from accumulated error due to the mismatch between the reference frames used in the encoder for prediction and those available at the decoder. The extra number of bits used for coding in the I-mode, compared to using the P-mode, is a form of redundancy that is intentionally introduced by the coder to improve the reconstruction quality when only a single description is available at the decoder. In conventional block-based video coders, such as an H.263 coder, described in ITU-T, “Recommendation H.263 Video Coding for Low Bitrate Communication,” July 1995, the choice between I-mode and P-mode depends on which mode uses fewer bits to produce the same image reconstruction quality. For error-resilience purposes, I-mode macroblocks are also inserted periodically but very sparsely; for example, in accordance with an embodiment of the present invention, one I-mode macroblock is inserted after approximately ten to fifteen P-mode macroblocks. The rate at which I-mode macroblocks are inserted is highly dependent on the video being encoded and is therefore variably controlled by the redundancy allocation unit 140 for each video input stream. In applications requiring a constant output rate, the rate control component 150 regulates the total number of bits that can be used on a frame-by-frame basis. As a result, the rate control component 150 influences the choice between the I-mode and the P-mode. In an embodiment of the present invention, the proposed switching between I-mode and P-mode depends not only on the target bit rate and coding efficiency but also on the desired redundancy. As a result of this redundancy dependence, the redundancy allocation unit 140, together with the rate control unit 150, determines: i) on the global level, the redundancy allocation between I-mode and P-mode; and ii) for every macroblock, which mode to use.
P-mode Coding.
In general, the MDC coder in the P-mode will generate two descriptions of the motion information and two descriptions of the prediction error. A general framework for implementing MDC in the P-mode is shown in FIG. 2. In FIG. 2, the encoder has three separate frame buffers (“FB”), FB0 270, FB1 280 and FB2 290, for storing previously reconstructed frames from both descriptions ($\psi_{0,k-m}$), description one ($\psi_{1,k-m}$), and description two ($\psi_{2,k-m}$), respectively. Here, $k$ denotes the current frame time and $k-m$, $m=1, 2, \ldots, k$, the previous frames back to frame 0. In this embodiment, prediction from more than one of the previously coded frames is permitted. In FIG. 2, a Multiple Description Motion Estimation and Coding (“MDMEC”) unit 210 receives as an initial input macroblock X 100 to be coded at frame $k$. The MDMEC 210 is connected to the three frame buffers FB0 270, FB1 280 and FB2 290, and the MDMEC 210 receives macroblocks from the previously reconstructed frames stored in each frame buffer. In addition, the MDMEC 210 is connected to a redundancy allocation unit 260, which provides an input motion and prediction error redundancy allocation that the MDMEC 210 uses to generate and output two coded descriptions of the motion information, $\tilde m_1$ and $\tilde m_2$. The MDMEC 210 is also connected to a first Motion Compensated Predictor 0 (“MCP0”) 240, a second Motion Compensated Predictor 1 (“MCP1”) 220 and a third Motion Compensated Predictor 2 (“MCP2”) 230. The two coded descriptions of the motion information, $\tilde m_1$ and $\tilde m_2$, are transmitted to the MCP0 240, which generates and outputs a predicted macroblock $P_0$ based on $\tilde m_1$, $\tilde m_2$ and macroblocks from the previously reconstructed frames from the descriptions $\psi_{i,k-m}$, $i=0, 1, 2$, which are provided by frame buffers FB0 270, FB1 280 and FB2 290.
Similarly, MCP1 220 generates and outputs a predicted macroblock P1 based on {tilde over (m)}1 from the MDMEC 210 and a macroblock from the previously reconstructed frame from description one (ψ1,k-m) from FB1 280. Likewise, MCP2 230 generates and outputs a predicted macroblock P2 based on {tilde over (m)}2 from the MDMEC 210 and a macroblock from the previously reconstructed frame from description two (ψ2,k-m) from FB2 290. In this general framework, MCP0 240 can make use of ψ1,k-m and ψ2,k-m in addition to ψo,k-m. MCP0 240, MCP1 220 and MCP2 230 are each connected to a multiple description coding of prediction error (“MDCPE”) unit 250 and provide predicted macroblocks P0, P1 and P2, respectively, to the MDCPE 250. The MDCPE 250 is also connected to the redundancy allocation unit 260 and receives as input the motion and prediction error redundancy allocation. In addition, the MDCPE 250 also receives the original input macroblock X 100. The MDCPE 250 generates two coded descriptions of prediction error, {tilde over (E)}1 and {tilde over (E)}2, based on input macroblock X 100, P0, P1, P2 and the motion and prediction error redundancy allocation. Description one 132, in FIG. 1, of the coded video consists of {tilde over (m)}1 and {tilde over (E)}1 for all the macroblocks. Likewise, description two 134, in FIG. 1, consists of {tilde over (m)}2 and {tilde over (E)}2 for all the macroblocks. Exemplary embodiments of the MDMEC 210 and MDCPE 250 are described in the following sections.
Multiple Description Coding of Prediction Error (MDCPE)
The general framework of a MDCPE encoder implementation is shown in FIG. 3A. First, the prediction error in the case when both descriptions are available, F=X−P0, is coded into two descriptions {tilde over (F)}1 and {tilde over (F)}2. In FIG. 3A, predicted macroblock P0 is subtracted from input macroblock X 100 in an adder 306 and the central prediction error F is input to an Error Multiple Description Coding (“EMDC”) Encoder 330. The encoding is accomplished in the EMDC Encoder 330 using, for example, MDTC or MDC. To deal with the case when only the i-th description is received (that is, i=1 or 2), either encoder unit one (“ENC1”) 320 or encoder unit two (“ENC2”) 310 takes the pre-run length coded coefficients Δ{tilde over (C)}n or Δ{tilde over (D)}n, respectively, together with the description i side prediction error Ei, where Ei=X−Pi, and produces a description i enhancement stream {tilde over (G)}i. {tilde over (G)}i together with {tilde over (F)}i forms description i. Embodiments of the encoders ENC1 320 and ENC2 310 are described in reference to FIGS. 3A, 4, 5, 6 and 7. As shown in FIG. 3A, P2 is subtracted from input macroblock X 100 by an adder 302 and a description two side prediction error E2 is output. E2 and Δ{tilde over (D)}n are then input to ENC2 310 and a description two enhancement stream {tilde over (G)}2 is output. Similarly, P1 is subtracted from input macroblock X 100 in an adder 304 and a description one side prediction error E1 is output. E1 and Δ{tilde over (C)}n are then input to ENC1 320 and a description one enhancement stream {tilde over (G)}1 322 is output. In an alternate embodiment (not shown), Δ{tilde over (C)}n and Δ{tilde over (D)}n are determined from {tilde over (F)}1 and {tilde over (F)}2 by branching both of the {tilde over (F)}1 and {tilde over (F)}2 output channels to connect with ENC1 320 and ENC2 310, respectively.
Before the branches connect to ENC1 320 and ENC2 310, they each pass through separate run length decoder units to produce Δ{tilde over (C)}n and Δ{tilde over (D)}n, respectively. As will be seen in the description referring to FIG. 4, this alternate embodiment requires two additional run length decoders to decode {tilde over (F)}1 and {tilde over (F)}2 to obtain Δ{tilde over (C)}n and Δ{tilde over (D)}n, which had just been encoded into {tilde over (F)}1 and {tilde over (F)}2 in EMDC encoder 330.
In the decoder, shown in FIG. 3B, if both descriptions, that is, {tilde over (F)}1 and {tilde over (F)}2, are available, an EMDC decoder unit 360 generates {circumflex over (F)}0 from inputs {tilde over (F)}1 and {tilde over (F)}2, where {circumflex over (F)}0 represents the reconstructed F from both {tilde over (F)}1 and {tilde over (F)}2. {circumflex over (F)}0 is then added to P0 in an adder 363 to generate a both description recovered macroblock {circumflex over (X)}0. {circumflex over (X)}0 is defined as {circumflex over (X)}0=P0+{circumflex over (F)}0. When both descriptions are available, enhancement streams {tilde over (G)}1 and {tilde over (G)}2 are not used. When only description one is received, a first side decoder (“DEC1”) 370, produces Ê1 from inputs Δ{tilde over (C)}n and {tilde over (G)}1 and then Ê1 is added to P1 in an adder 373 to generate a description one recovered macroblock {circumflex over (X)}1. The description one recovered macroblock is defined as {circumflex over (X)}1=P1+Ê1. When only description two is received, a second side decoder (“DEC2”) 380, produces Ê2 from inputs Δ{tilde over (D)}n and {tilde over (G)}2 and then Ê2 is added to P2 in an adder 383 to generate a description two recovered macroblock {circumflex over (X)}2. The description two recovered macroblock, {circumflex over (X)}2, is defined as {circumflex over (X)}2=P2+Ê2. Embodiments of the decoders DEC1 370 and DEC2 380 are described in reference to FIGS. 3B, 4, 5, 6 and 7. As with the encoder in FIG. 3A, in an alternate embodiment of the decoder (not shown), Δ{tilde over (C)}n and Δ{tilde over (D)}n are determined from {tilde over (F)}1 and {tilde over (F)}2 by branching both of the {tilde over (F)}1 and {tilde over (F)}2 output channels to connect with DEC1 370 and DEC2 380, respectively.
Before the branches connect to DEC1 370 and DEC2 380, they each pass through separate run length decoder units (not shown) to produce Δ{tilde over (C)}n and Δ{tilde over (D)}n, respectively. As with the alternate embodiment for the encoder described above, this decoder alternative embodiment requires additional run length decoder hardware to extract Δ{tilde over (C)}n and Δ{tilde over (D)}n from {tilde over (F)}1 and {tilde over (F)}2 just before Δ{tilde over (C)}n and Δ{tilde over (D)}n are extracted from {tilde over (F)}1 and {tilde over (F)}2 in EMDC decoder 360.
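The three decoder cases of FIG. 3B can be illustrated numerically. This is a minimal sketch with toy arrays and no quantization, so every reconstruction is exact; function and variable names are assumptions, not the patent's.

```python
# Numeric sketch of the three decoder cases in FIG. 3B: central
# reconstruction X0 = P0 + F0 when both descriptions arrive, and side
# reconstructions Xi = Pi + Ei when only description i arrives.
# Quantization is omitted, so reconstruction is exact here.

import numpy as np

def mdcpe_decode(p0, p1, p2, f0=None, e1=None, e2=None):
    """Reconstruct a macroblock from whichever description(s) arrived."""
    if f0 is not None:            # both descriptions: use central error F0
        return p0 + f0
    if e1 is not None:            # description one only
        return p1 + e1
    if e2 is not None:            # description two only
        return p2 + e2
    raise ValueError("no description received")

x = np.arange(16.0).reshape(4, 4)          # original macroblock (toy 4x4)
p0, p1, p2 = x - 3.0, x - 5.0, x - 7.0     # three motion-compensated predictions
x0 = mdcpe_decode(p0, p1, p2, f0=x - p0)   # central reconstruction
x1 = mdcpe_decode(p0, p1, p2, e1=x - p1)   # side reconstruction, description one
```

With real quantized streams the side reconstructions would differ from the central one, which is exactly the mismatch the enhancement streams {tilde over (G)}1 and {tilde over (G)}2 are there to reduce.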
Note that in this framework, the bits used for Gi, i=1, 2, are purely redundancy bits, because they do not contribute to the reconstruction quality when both descriptions are received. This portion of the total redundancy, denoted by ρe,2, can be controlled directly by varying the quantization accuracy when generating Gi. The other portion of the total redundancy, denoted by ρe,1, is introduced when coding F using the MDTC coder. Using the MDTC coder enables this redundancy to be controlled easily by varying the transform parameters. The redundancy allocation unit 260 manages the redundancy allocation between ρe,2 and ρe,1 for a given total redundancy in coding the prediction errors.
Based on this framework, alternate embodiments have been developed, which differ in the operations of ENC1 320/DEC1 370 and ENC2 310/DEC2 380. While the same type of EMDC encoder 330 and EMDC decoder 360 described in FIGS. 3A and 3B are used, the way in which {tilde over (G)}i is generated by ENC1 320 and ENC2 310 is different in each of the alternate embodiments. These alternate embodiments are described below in reference to FIGS. 4, 5 and 6.
Implementation of the EMDC ENC1 and ENC2 Encoders
FIG. 4 provides a block diagram of an embodiment of multiple description coding of prediction error in the present invention. In FIG. 4, an MDTC coder is used to implement the EMDC encoder 330 in FIG. 3A. In FIG. 4, for each 8×8 block, P0 is subtracted from the corresponding 8×8 block of input macroblock X 100 in an adder 306 to produce the central prediction error E0, and E0 is then input to the DCT unit 425, which performs DCT and outputs N≦64 DCT coefficients. A pairing unit 430 receives the N≦64 DCT coefficients from the DCT unit 425 and organizes the DCT coefficients into N/2 pairs (Ãn, {tilde over (B)}n) using a fixed pairing scheme for all frames. The N/2 pairs are then input with an input, which controls the rate, from a rate and redundancy allocation unit 420 to a first quantizer one (“Q1”) unit 435 and a second Q1 unit 440. The Q1 units 435 and 440, in combination, produce quantized pairs (ΔÃn, Δ{tilde over (B)}n). It should be noted that both N and the pairing strategy are determined based on the statistics of the DCT coefficients and the k-th largest coefficient is paired with the (N−k)-th largest coefficient. Each quantized pair (ΔÃn, Δ{tilde over (B)}n) is then input with a transform parameter βn, which controls a first part of the redundancy, from the rate and redundancy allocation unit 420 to a Pairwise Correlating Transform (“PCT”) unit 445 to produce the coefficients (Δ{tilde over (C)}n, Δ{tilde over (D)}n), which are then split into two sets. The unpaired coefficients are split even/odd and appended to the PCT coefficients. The coefficients in each set, Δ{tilde over (C)}n and Δ{tilde over (D)}n, are then run length and Huffman coded in run length coding units 450 and 455, respectively, to produce {tilde over (F)}1 and {tilde over (F)}2. Thus, {tilde over (F)}1 contains Δ{tilde over (C)}n in coded run length representation, and {tilde over (F)}2 contains Δ{tilde over (D)}n in coded run length representation.
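The pairing rule and the PCT step above can be sketched as follows. The patent parameterizes the PCT by a transform parameter βn without giving its matrix here, so a simple rotation by an angle beta is assumed as a stand-in; it is invertible and lets beta trade off the correlation (redundancy) between the two outputs. All names are illustrative.

```python
# Sketch of the pairing step (k-th largest coefficient paired with the
# (N-k)-th largest) and a rotation-style stand-in for the Pairwise
# Correlating Transform (PCT). The exact PCT matrix used by the patent
# is not reproduced here; a rotation by angle beta is an assumption.

import math

def make_pairs(coeffs):
    """Pair the k-th largest coefficient (by magnitude) with the (N-k)-th largest."""
    order = sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))
    n = len(order)
    return [(coeffs[order[k]], coeffs[order[n - 1 - k]]) for k in range(n // 2)]

def pct(a, b, beta):
    """Produce the correlated pair (C, D) from (A, B)."""
    c = math.cos(beta) * a + math.sin(beta) * b
    d = -math.sin(beta) * a + math.cos(beta) * b
    return c, d

def inverse_pct(c, d, beta):
    """Recover (A, B) exactly when both C and D are available."""
    a = math.cos(beta) * c - math.sin(beta) * d
    b = math.sin(beta) * c + math.cos(beta) * d
    return a, b

pairs = make_pairs([9.0, -1.0, 4.0, 0.5])   # largest paired with smallest
c, d = pct(*pairs[0], beta=0.3)
```

Sending C coefficients in one description and D coefficients in the other gives each side decoder a correlated hint about the missing half, which is the redundancy mechanism the paragraph describes.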
In the following, three different embodiments for obtaining {tilde over (G)}i from FIG. 3A are described. For ease of description, in the descriptions related to the detailed operation of the ENC1 320 and ENC2 310 in FIGS. 4, 5 and 6, components in ENC2 310 which are analogous to components in ENC1 320 are denoted as primes. For example, in FIG. 4, ENC1 320 has a DCT component 405 for calculating {tilde over (G)}1 and ENC2 310 has an analogous DCT component 405′ for calculating {tilde over (G)}2.
In accordance with an embodiment of the present invention, shown in FIG. 4, the central prediction error estimate {circumflex over (F)}1 is reconstructed from Δ{tilde over (C)}n, and Δ{tilde over (C)}n is also used to generate {tilde over (G)}1. To generate {tilde over (G)}1, Δ{tilde over (C)}n from PCT unit 445 is input to an inverse quantizer (“Q1−1”) 460 and dequantized C coefficients Δ{tilde over (C)}n are output. A linear estimator 465 receives the Δ{tilde over (C)}n and outputs estimated DCT coefficients ΔÂn1 and Δ{circumflex over (B)}n1, which are then input to inverse pairing unit 470, which converts the N/2 pairs into DCT coefficients and outputs the DCT coefficients to an inverse DCT unit 475, which outputs {circumflex over (F)}1 to an adder 403. P1 is subtracted from each corresponding 8×8 block from input macroblock X 100 in the adder 302 and the adder 302 outputs E1 to the adder 403. {circumflex over (F)}1 is subtracted from E1 in the adder 403 and G1 is output. In the absence of any additional information, the reconstruction from description one alone will be P1+{circumflex over (F)}1. To allow for a more accurate reconstruction, G1 is defined as G1=X−P1−{circumflex over (F)}1, and G1 is coded into {tilde over (G)}1 using conventional DCT coding. That is, G1 is DCT transformed in a DCT coder 405 to produce DCT coefficients for G1. The DCT coefficients are then input to a quantizer two (“Q2”) 410, quantized with an input, which controls a second part of redundancy, from the rate and redundancy unit 420 in Q2 410 and the quantized coefficients are output from Q2 410 to a run length coding unit 415. The quantized coefficients are then run length coded in run length coding unit 415 to produce the description one enhancement stream {tilde over (G)}1.
Also shown in FIG. 4, the central prediction error {tilde over (F)}2 is reconstructed from Δ{tilde over (D)}n and Δ{tilde over (D)}n is also used to generate {tilde over (G)}2. To generate {tilde over (G)}2, Δ{tilde over (D)}n from PCT unit 445′ is input to Q1−1 460′ and dequantized D coefficients, Δ{tilde over (D)}n are output. A linear estimator 465′ receives the Δ{tilde over (D)}n and outputs estimated DCT coefficients ΔÂn2 and Δ{circumflex over (B)}n2. ΔÂn2 and Δ{circumflex over (B)}n2 are then input to inverse pairing unit 470′ which converts the N/2 pairs into DCT coefficients and outputs the DCT coefficients to an inverse DCT unit 475′ which outputs {circumflex over (F)}2 to an adder 403′. P2 is subtracted from each corresponding 8×8 block from input macroblock X 100 in the adder 304 and the adder 304 outputs E2 to the adder 403′. {circumflex over (F)}2 is subtracted from E2 in the adder 403′ and G2 is output. In the absence of any additional information, the reconstruction from description two alone will be P2+{circumflex over (F)}2. To allow for a more accurate reconstruction, G2 is defined as G2=X−P2−{circumflex over (F)}2, and G2 is coded into {tilde over (G)}2 using conventional DCT coding. That is, G2 is DCT transformed in a DCT coder 405′ to produce DCT coefficients for G2. The DCT coefficients are then input to Q2 410′, quantized with an input from the rate and redundancy unit 420 in Q2 410′ and the quantized coefficients are output from Q2 410′ to a run length coding unit 415′. The quantized coefficients are then run length coded in run length coding unit 415′ to produce the description two enhancement stream {tilde over (G)}2.
In accordance with the current embodiment of the present invention, the EMDC decoder 360 in FIG. 3B is implemented as an inverse circuit of the EMDC encoder 330 described in FIG. 4. With the exception of the rate and redundancy unit 420, all of the other components described have analogous inverse components implemented in the decoder. For example, in the EMDC decoder, if only description one is received, the same operation as described above for the encoder is used to generate {circumflex over (F)}1 from Δ{tilde over (C)}n. In addition, by inverse quantization and inverse DCT, the quantized version of G1, denoted by Ĝ1, is recovered from {tilde over (G)}1. The finally recovered block in this side decoder is {circumflex over (X)}1, which is defined as {circumflex over (X)}1=P1+{circumflex over (F)}1+Ĝ1.
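The enhancement path above reduces to a few additions and subtractions per sample. The toy numbers below are illustrative, and quantization is omitted, so the side reconstruction comes out exact; with quantized G1 it would only be approximate.

```python
# Scalar sketch of the enhancement stream of FIG. 4: the encoder codes
# G1 = X - P1 - F1_hat, and the side decoder forms X1 = P1 + F1_hat + G1.
# With quantization omitted (an assumption of this sketch), X1 == X.

x, p1 = 10.0, 7.5          # original sample and side prediction P1
f1_hat = 1.4               # reconstructed central prediction error estimate
g1 = x - p1 - f1_hat       # enhancement residual sent in description one
x1 = p1 + f1_hat + g1      # side-decoder reconstruction
```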
In the embodiment of FIG. 4, more than 64 coefficients need to be coded in the EMDC 330 and ENC1 320 together. While the use of the 64 coefficients completely codes the mismatch error, G1, subject to quantization errors, it requires too many bits. Therefore, in accordance with another embodiment of the present invention, only 32 coefficients are coded when generating {tilde over (G)}1, by only including the error for the D coefficients. Likewise, only 32 coefficients are coded when generating {tilde over (G)}2, by only including the error for the C coefficients. Specifically, as shown in FIG. 5, DCT is applied to side prediction error E1 in the DCT coder 405, where E1=X−P1, and the same pairing scheme as in the central coder is applied to generate N pairs of DCT coefficients in pairing unit 510.
As in FIG. 4, in FIG. 5, to implement the EMDC encoder 330, a MDTC coder is used. For each 8×8 block of central prediction error, P0 is subtracted from each corresponding 8×8 block from input macroblock X 100 in the adder 306 to produce E0 and then E0 is input to the DCT unit 425 which performs DCT on E0 and outputs N≦64 DCT coefficients. In pairing unit 430, the coder takes the N≦64 DCT coefficients from the DCT unit 425 and organizes them into N/2 pairs (Ãn, {tilde over (B)}n) using a fixed pairing scheme for all frames. The N/2 pairs are then input with an input from the rate and redundancy allocation unit 420 to the Q1 quantizer units 435 and 440, respectively, and Q1 quantizer units 435 and 440 produce quantized pairs (ΔÃn, Δ{tilde over (B)}n), respectively. It should be noted that both N and the pairing strategy are determined based on the statistics of the DCT coefficients and the k-th largest coefficient is paired with the (N−k)-th largest coefficient. Each quantized pair (ΔÃn, Δ{tilde over (B)}n) is input with an input from the rate and redundancy allocation unit 420 to a PCT unit 445 with the transform parameter βn to produce the coefficients (Δ{tilde over (C)}n, Δ{tilde over (D)}n), which are then split into two sets. The unpaired coefficients are split even/odd and appended to the PCT coefficients.
In accordance with an embodiment of the present invention, shown in FIG. 5, an estimate of the central prediction error {tilde over (F)}1 is reconstructed from Δ{tilde over (C)}n, and Δ{tilde over (C)}n is also used to generate {tilde over (G)}1. To generate {tilde over (G)}1, {tilde over (C)}n from PCT unit 445 is input to Q1−1 460 and dequantized C coefficients Δ{tilde over (C)}n are output to a linear estimator 530. The linear estimator 530 receives the Δ{tilde over (C)}n and outputs an estimated DCT coefficient {circumflex over (D)}n1, which is input to an adder 520. P1 is subtracted from each corresponding 8×8 block from input macroblock X 100 in the adder 302 to produce side prediction error E1, which is then input to conventional DCT coder 405 where DCT is applied to E1. The output of the DCT coder 405 is input to pairing unit 510 and the same pairing scheme as described above for pairing unit 430 is applied to generate N pairs of DCT coefficients. The N pairs of DCT coefficients are then input to a PCT unit 515 with transform parameter βn which generates only the D component, Dn1. Then, Dn1 is input to an adder 520 and {circumflex over (D)}n1 is subtracted from Dn1 and an error Cn is output. The error Cn, which is defined as Cn=Dn1−{circumflex over (D)}n1, is input with an input from the rate and redundancy allocation unit 420 to Q2 525 and quantized to produce a quantized error, Ĉn. The {tilde over (C)}n coefficients from the PCT unit 445 and the quantized error Ĉn are then together subjected to run-length coding in run length coding unit 450 to produce a resulting bitstream {tilde over (F)}1, {tilde over (G)}1, which constitutes {tilde over (F)}1 and {tilde over (G)}1 from FIG. 3A.
Likewise, an estimate of the central prediction error {tilde over (F)}2 is reconstructed from Δ{tilde over (D)}n, and Δ{tilde over (D)}n is also used to generate {tilde over (G)}2. To generate {tilde over (G)}2, {tilde over (D)}n from PCT unit 445′ is input to Q1−1 460′ and dequantized D coefficients Δ{tilde over (D)}n are output to a linear estimator 530′. The linear estimator 530′ receives the Δ{tilde over (D)}n and outputs an estimated DCT coefficient {circumflex over (C)}n1, which is input to an adder 520′. P2 is subtracted from each corresponding 8×8 block from input macroblock X 100 in the adder 304 to produce side prediction error E2, which is then input to conventional DCT coder 405′ where DCT is applied to E2. The output of the DCT coder 405′ is input to pairing unit 510′ and the same pairing scheme as described above for pairing unit 430 is applied to generate N pairs of DCT coefficients. The N pairs of DCT coefficients are then input to a PCT unit 515′ with transform parameter βn which generates only the C component, Cn1. Then, Cn1 is input to an adder 520′ and Ĉn1 is subtracted from Cn1 and an error Dn is output. The error Dn, which is defined as Dn=Cn1−Ĉn1, is input with an input from the rate and redundancy allocation unit 420 to Q2 525′ and quantized to produce a quantized error, {circumflex over (D)}n. The {tilde over (D)}n coefficients from the PCT unit 445′ and the quantized error {circumflex over (D)}n are then together subjected to run-length coding in run length coding unit 450′ to produce a resulting bitstream {tilde over (F)}2, {tilde over (G)}2, which constitutes {tilde over (F)}2 and {tilde over (G)}2 from FIG. 3A.
In accordance with the current embodiment of the present invention, the DEC1 370 from FIG. 3B is implemented as an inverse circuit of the ENC1 320 described in FIG. 5. With the exception of the rate and redundancy unit 420, all of the other components described have analogous inverse components implemented in the decoder. For example, in the DEC1 370, if only description one is received, which includes, after run length decoding and dequantization, {tilde over (C)}n and Ĉn, the PCT coefficients corresponding to the side prediction error can be estimated by Ĉn1={tilde over (C)}n, {circumflex over (D)}n1={circumflex over (D)}n(βn)+Ĉn. Then inverse PCT can be performed on Ĉn1 and {circumflex over (D)}n1, followed by inverse DCT to arrive at quantized prediction error Ê1. The finally recovered macroblock, {circumflex over (X)}1, is reconstructed by adding P1 and Ê1 together, such that {circumflex over (X)}1=P1+Ê1.
In another embodiment of the present invention, the strategy is to ignore the error in the side predictor and use some additional redundancy to improve the reconstruction accuracy for the Dn in the central predictor. This is accomplished by quantizing and coding the estimation error Cn=Δ{circumflex over (D)}n−{circumflex over (D)}n(βn), as shown in FIG. 6. This scheme is the same as the generalized PCT, where four variables are used to represent the initial pair of two coefficients.
As in the previously described embodiments, in FIG. 6, to implement the EMDC encoder 330, a MDTC coder is used. For each 8×8 block of central prediction error, P0 is subtracted from each corresponding 8×8 block from input macroblock X 100 in the adder 306 to produce E0 and then E0 is input to the DCT unit 425 which performs DCT on E0 and outputs N≦64 DCT coefficients. A pairing unit 430 receives the N≦64 DCT coefficients from the DCT unit 425 and organizes them into N/2 pairs (Ãn, {tilde over (B)}n) using a fixed pairing scheme for all frames. The N/2 pairs are then input with an input from the rate and redundancy allocation unit 420 to Q1 quantizer units 435 and 440, respectively, and Q1 quantizer units 435 and 440 produce quantized pairs (ΔÃn, Δ{tilde over (B)}n), respectively. It should be noted that both N and the pairing strategy are determined based on the statistics of the DCT coefficients and the k-th largest coefficient is paired with the (N−k)-th largest coefficient. Each quantized pair (ΔÃn, Δ{tilde over (B)}n) is input with an input from the rate and redundancy allocation unit 420 to the PCT unit 445 with the transform parameter βn to produce the PCT coefficients (Δ{tilde over (C)}n, Δ{tilde over (D)}n), which are then split into two sets. The unpaired coefficients are split even/odd and appended to the PCT coefficients.
In accordance with an embodiment of the present invention, shown in FIG. 6, {tilde over (C)}n is input to inverse quantizer Q1−1 460 and dequantized C coefficients ΔĈn are output to a linear estimator 610. The linear estimator 610 is applied to ΔĈn to produce an estimated DCT coefficient {circumflex over (D)}n, which is output to an adder 630. Similarly, {tilde over (D)}n is input to a second inverse quantizer Q1−1 620 and dequantized D coefficients Δ{circumflex over (D)}n are also output to the adder 630. Then, {circumflex over (D)}n is subtracted from Δ{circumflex over (D)}n in the adder 630 and the error Cn is output. The error Cn=Δ{circumflex over (D)}n−{circumflex over (D)}n(βn) is input with an input from the rate and redundancy allocation unit 420 to quantizer Q2 640 and quantized to produce the quantized error Ĉn. The {tilde over (C)}n coefficients and the quantized error Ĉn are then together subjected to run-length coding in run length coding unit 650 to produce the resulting bitstream {tilde over (F)}1, {tilde over (G)}1, which constitutes {tilde over (F)}1 and {tilde over (G)}1 from FIG. 3A.
Similarly, in FIG. 6, {tilde over (D)}n is input to inverse quantizer Q1−1 460′ and dequantized D coefficients, Δ{circumflex over (D)}n are output to a linear estimator 610′. The linear estimator 610′ is applied to Δ{circumflex over (D)}n to produce an estimated DCT coefficient Ĉn which is output to an adder 630′. Similarly, {tilde over (C)}n is input to a second inverse quantizer Q1−1 620′ and dequantized C coefficients, ΔĈn are also output to the adder 630′. Then, Ĉn is subtracted from ΔĈn in the adder 630′ and the error Dn is output. The error Dn is input with an input from the rate and redundancy allocation unit 420 to quantizer Q2 640′ and quantized to produce {circumflex over (D)}n. The {tilde over (D)}n coefficients and the quantized error {circumflex over (D)}n are then together subjected to run-length coding in run length coding unit 650′ to produce the resulting bitstream {tilde over (F)}2, {tilde over (G)}2, which constitutes {tilde over (F)}2 and {tilde over (G)}2 from FIG. 3A.
In accordance with the current embodiment of the present invention, the DEC2 decoder 380 from FIG. 3B is implemented as an inverse circuit of the ENC2 encoder 310 described in FIG. 6. With the exception of the rate and redundancy unit 420, all of the other components described have analogous inverse components implemented in the decoder. For example, the DEC2 decoder 380 operation is the same as in the DEC1 decoder 370 embodiment; the recovered prediction error is actually a quantized version of F, so that {circumflex over (X)}1=P1+{circumflex over (F)}. Therefore, in this implementation, the mismatch between P0 and P1 is left as is, and allowed to accumulate over time in successive P-frames. However, the effect of this mismatch is eliminated upon each new I-frame.
In all of the above embodiments, the quantization parameter in Q1 controls the rate, the transform parameter βn controls the first part of redundancy ρe,1, and the quantization parameter in Q2 controls the second part of redundancy ρe,2. In each embodiment, these parameters are controlled by the rate and redundancy allocation component 420. This allocation is performed based on a theoretical analysis of the trade-off between rate, redundancy, and distortion, associated with each implementation. In addition to redundancy allocation between ρe,1 and ρe,2 for a given P-frame, the total redundancy, ρ, among successive frames must be allocated. This is accomplished by treating coefficients from different frames as different coefficient pairs.
Multiple Description Motion Estimation and Coding (MDMEC)
In accordance with an embodiment of the present invention, illustrated in FIG. 7, in a motion estimation component 710, conventional motion estimation is performed to find the best motion vector for each input macroblock X 100. In an alternate embodiment (not shown) a simplified method for performing motion estimation is used in which the motion vectors from the input macroblock X 100 are duplicated on both channels. FIG. 8 shows an arrangement of odd and even macroblocks within each digitized frame in accordance with an embodiment of the present invention. Returning to FIG. 7, the motion estimation component 710 is connected to a video input unit (not shown) for receiving the input macroblocks and to FB0 270 (not shown) for receiving reconstructed macroblocks from previously reconstructed frames from both descriptions, ψo,k-1. The motion estimation component 710 is also connected to a motion-encoder-1 730, an adder 715 and an adder 718. Motion-encoder-1 730 is connected to a motion-interpolator-1 725 and the motion-interpolator-1 725 is connected to the adder 715. The adder 715 is connected to a motion-encoder-3 720. Similarly, motion-encoder-2 735 is connected to a motion-interpolator-2 740 and the motion-interpolator-2 740 is connected to the adder 718. The adder 718 is connected to a motion-encoder-4 745.
In FIG. 7, the motion vectors for the even macroblocks output from the motion estimation unit 710, denoted by m1, are input to Motion-Encoder-1 730, and coded to yield {tilde over (m)}1,1 and reconstructed motions {circumflex over (m)}1,1. The reconstructed motions, {circumflex over (m)}1,1, are input to motion interpolator-1 725 which interpolates motions in odd macroblocks from the coded ones in even macroblocks, and outputs m2,p to adder 715. In adder 715 m2,p is subtracted from m2 and m1,2 is output, where m2 was received from motion estimation unit 710. m1,2 is then input to motion encoder-3 720 and {tilde over (m)}1,2 is output. Similarly, motion vectors for the odd macroblocks, m2, are input to and coded by Motion-Encoder-2 735, and the coded bits and reconstructed motions denoted by {tilde over (m)}2,1 and {circumflex over (m)}2,1, respectively, are output. The reconstructed motions, {circumflex over (m)}2,1, are input to motion interpolator-2 740 which interpolates motions in even macroblocks from the coded ones in odd macroblocks, and outputs m1,p to adder 718. In adder 718 m1,p is subtracted from m1 and m2,2 is output, where m1 was received from motion estimation unit 710. m2,2 is then input to motion encoder-4 745 and {tilde over (m)}2,2 is output.
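The even/odd motion split of FIG. 7 and FIG. 8 can be sketched as follows. The patent does not fix the interpolation filter used by motion-interpolator-1 and motion-interpolator-2, so simple averaging of coded neighbors is assumed here; the one-dimensional macroblock indexing and all names are illustrative simplifications.

```python
# Hedged sketch of the MDMEC split: even macroblocks' motion vectors go
# to description one losslessly, odd ones to description two, and each
# side interpolates the missing vectors from coded neighbors (neighbor
# averaging is an assumption). The residual m2 - m2_p would then be
# lossily coded as m1,2 by motion-encoder-3.

def split_even_odd(mvs):
    even = {i: v for i, v in enumerate(mvs) if i % 2 == 0}
    odd = {i: v for i, v in enumerate(mvs) if i % 2 == 1}
    return even, odd

def interpolate(coded, n):
    """Estimate missing motion vectors by averaging coded neighbors."""
    est = {}
    for i in range(n):
        if i in coded:
            continue
        nbrs = [coded[j] for j in (i - 1, i + 1) if j in coded]
        est[i] = tuple(sum(v[k] for v in nbrs) / len(nbrs) for k in range(2))
    return est

mvs = [(2, 0), (2, 2), (4, 2), (0, 2)]      # toy motion field, 4 macroblocks
even, odd = split_even_odd(mvs)
pred_odd = interpolate(even, len(mvs))      # m2_p, used to form m2 - m2_p
```

When only one description arrives, the decoder uses the interpolated vectors (plus the lossily coded residual, if received) in place of the lost half of the motion field.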
For a lossless description of motion, all of the four encoders involved should be lossless. An encoder is “lossless” when the decoder can create an exact reconstruction of the encoded signal, and an encoder is “lossy” when the decoder cannot create an exact reconstruction of the encoded signal. In accordance with an embodiment of the present invention, lossless coding is used for m1 and m2 and lossy coding is used for m1,2 and m2,2.
The bits used for coding m1,2 and m2,2 are ignored when both descriptions are received and, therefore, are purely redundancy bits. This part of the redundancy for motion coding is denoted by ρm,2. The extra bits in independent coding of m1 and m2, compared to joint coding, contribute to the other portion of the redundancy. This is denoted by ρm,1.
In another embodiment of the present invention, conventional motion estimation is first performed to find the best motion vector for each macroblock. Then, the horizontal and vertical components of each motion vector are treated as two independent variables (a pre-whitening transform can be applied to reduce the correlation between the two components) and the generalized MDTC method is applied to each motion vector. Let mh, mv represent the horizontal and vertical components of a motion vector. Using a pairing transform, T, the transformed coefficients are obtained from Equation (1):
$\begin{bmatrix} m_c \\ m_d \end{bmatrix} = T \begin{bmatrix} m_h \\ m_v \end{bmatrix} \qquad (1)$
where {tilde over (m)}i,1, i=1, 2, represents the bits used to code mc and md, respectively, and {tilde over (m)}i,2, i=1, 2, represents the bits used to code the estimation error for md from mc and the estimation error for mc from md, respectively. The transform parameters in T are controlled based on the desired redundancy.
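Equation (1) amounts to a 2×2 mixing of the motion components, one output per description. The particular matrix below is an assumed orthonormal choice for illustration; the patent only requires that T be a pairing transform whose parameters set the redundancy.

```python
# Sketch of Equation (1): the horizontal/vertical motion components
# (mh, mv) are mixed by a 2x2 pairing transform T into (mc, md), one
# component per description. When both arrive, inverting T recovers the
# motion vector exactly. The matrix here is an illustrative assumption.

import numpy as np

T = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)   # assumed orthonormal transform

mh, mv = 6.0, 2.0
mc, md = T @ np.array([mh, mv])              # Equation (1)
mh_rec, mv_rec = np.linalg.inv(T) @ np.array([mc, md])
```

With only mc or only md received, the decoder would estimate the missing component from the received one, which is where the correlation injected by T pays off.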
In another embodiment of the present invention (not shown), each horizontal or vertical motion component is quantized using MDSQ to produce two bit streams for all the motion vectors.
Application of MDTC to Block DCT Coding
The MDTC approach was originally developed and analyzed for an ordered set of N Gaussian variables with zero means and decreasing variances. When applying this approach to DCT coefficients of a macroblock (either an original or a prediction error macroblock), which are not statistically stationary and are inherently two-dimensional, there are many possibilities in terms of how to select and order coefficients to pair. In the conventional run length coding approach for macroblock DCT coefficients, used in all of the current video coding standards, each element of the two-dimensional DCT coefficient array is first quantized using a predefined quantization matrix and a scaling parameter. The quantized coefficient indices are then converted into a one-dimensional array, using a predefined ordering, for example, the zigzag order. For image macroblocks, consecutive high frequency DCT coefficients tend to be zero and, as a result, the run length coding method, which counts how many zeros occur before a non-zero coefficient, has been devised. A pair of symbols, consisting of the run length and the non-zero value, is then entropy coded.
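The run-length step described above can be made concrete with a short sketch. The (run, value) symbol format matches the paragraph; the end-of-block marker and function names are assumptions of this illustration, and the entropy-coding stage is omitted.

```python
# Minimal sketch of run-length coding of quantized coefficients in scan
# order: count zeros before each non-zero value and emit (run, value)
# symbols, which would then be entropy coded. An end-of-block ("EOB")
# marker for the trailing zeros is assumed here.

def run_length_encode(coeffs):
    symbols, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            symbols.append((run, c))
            run = 0
    symbols.append("EOB")          # trailing zeros collapse into EOB
    return symbols

def run_length_decode(symbols, n):
    out = []
    for s in symbols:
        if s == "EOB":
            break
        run, val = s
        out.extend([0] * run + [val])
    return out + [0] * (n - len(out))

syms = run_length_encode([5, 0, 0, -2, 1, 0, 0, 0])
```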
In an embodiment of the present invention, to overcome the non-stationarity of the DCT coefficients as described above, the macroblocks of each image are divided into a few classes so that the DCT coefficients in each class are approximately stationary. For each class, the variances of the DCT coefficients are collected, and based on the variances, the number of coefficients to pair, N, the pairing mechanism and the redundancy allocation are determined. These are determined based on a theoretical analysis of the redundancy-rate-distortion performance of MDTC. Specifically, the k-th largest coefficient in variance is always paired with the (N−k)-th largest, with a fixed transform parameter prescribed by the optimal redundancy allocation. The operation for macroblocks in each class is the same as that described above for the implementation of EMDC. For a given macroblock, it is first transformed into DCT coefficients, quantized, and classified into one of the predefined classes. Then depending on the determined class, the first N DCT coefficients are paired and transformed using PCT, while the rest are split even/odd, and appended to the PCT coefficients. The coefficients in each description (C coefficients and remaining even coefficients, or D coefficients and remaining odd coefficients) usually have many zeros. Therefore, the run length coding scheme is separately applied to the two coefficient streams.
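The split into two descriptions described above (PCT on the first N coefficients, even/odd split of the remainder) can be sketched as follows. The sum/difference transform below is an invertible stand-in for the PCT, not the patent's parameterized matrix, and all names are illustrative.

```python
# Hedged sketch of forming the two coefficient streams: the first
# n_paired coefficients (assumed already in variance order) are paired
# k-th with (N-k)-th and PCT-transformed into C and D streams; the
# remaining coefficients are split even/odd and appended. The
# sum/difference transform is an assumed stand-in for the PCT.

def split_descriptions(coeffs, n_paired):
    paired, rest = coeffs[:n_paired], coeffs[n_paired:]
    c_stream, d_stream = [], []
    for k in range(n_paired // 2):
        a, b = paired[k], paired[n_paired - 1 - k]   # k-th with (N-k)-th
        c_stream.append((a + b) / 2.0)               # assumed PCT stand-in
        d_stream.append((a - b) / 2.0)
    c_stream += rest[0::2]                           # even leftovers
    d_stream += rest[1::2]                           # odd leftovers
    return c_stream, d_stream

c_desc, d_desc = split_descriptions([8.0, 4.0, 2.0, 1.0, 0.5, 0.25], n_paired=4)
```

Each stream would then be run-length coded independently, matching the observation that both usually contain many zeros.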
In an alternative embodiment of the present invention (not shown), instead of using a fixed pairing scheme for each macroblock in the same class, which could be pairing zero coefficients, a second option is to first determine any non-zero coefficients (after quantization), and then apply MDTC only to the non-zero coefficients. In this embodiment, both the location and the value of the non-zero coefficients need to be specified in both descriptions. One implementation strategy is to duplicate the information characterizing the locations of the two coefficients in both descriptions, but split the two coefficient values using MDTC. A suitable pairing scheme is needed for the non-zero coefficients. An alternative implementation strategy is to duplicate some of the non-zero coefficients, while splitting the remaining ones in an even/odd manner.
FIG. 9 is a flow diagram representation of an embodiment of an encoder operation in accordance with the present invention. In FIG. 9, in block 905 a sequence of video frames is received and in block 910 the frame index value k is initialized to zero. In block 915 the next frame in the sequence of video frames is divided into a macroblock representation of the video frame. In an embodiment of the present invention, the macroblock is a 16×16 macroblock. Then, in block 920, for a first macroblock a decision is made on which mode will be used to code the macroblock. If the I-mode is selected in block 920, then, in block 925 the 16×16 macroblock representation is divided into 8×8 blocks and in block 930 DCT is applied to each of the 8×8 blocks and the resulting DCT coefficients are passed to block 935. In an embodiment of the present invention, four 8×8 blocks are created to represent the luminance characteristics and two 8×8 blocks are created to represent the chrominance characteristics of the macroblock. In block 935, a four-variable transform is applied to the DCT coefficients to produce 128 coefficients, which, in block 940, are decomposed into two sets of 64 coefficients. The two sets of 64 coefficients are each run length coded to form two separate descriptions in block 945. Then, in block 950, each description is output to one of two channels. In block 952, a check is made to determine if there are any more macroblocks in the current video frame to be coded. If there are more macroblocks to be coded, then the encoder returns to block 920 and continues with the next macroblock. If there are not any more macroblocks to be coded in block 952, then, in block 954 a check is made to determine if there are any more frames to be coded, and if there are not any more frames to be coded in block 954, then the encoder operation ends.
If, in block 954, it is determined that there are more frames to be coded, then, in block 955 the frame index k is incremented by 1 and operation returns to block 915 to begin coding the next video frame.
If, in block 920, the P-mode is selected, then, in block 960, the three best prediction macroblocks are determined with their corresponding motion vectors and prediction errors using a reconstructed previous frame from both descriptions and zero, one or two of the reconstructed previous frames from description one and description two. Then, in block 965 for the three best macroblocks a decision is made on which mode will be used to code the macroblocks. If the I-mode is selected in block 965, then, the macroblocks are coded using the method described above for blocks 925 through block 955. If the P-mode is selected in block 965, then, in block 970 each of the three prediction error macroblocks is divided into a set of 8×8 blocks. In block 975, DCT is applied to each of the three sets of 8×8 blocks to produce three sets of DCT coefficients for each block and, then, in block 980, a four-variable pairing transform is applied to each of the three sets of DCT coefficients for each block to produce three sets of 128 coefficients. Each of the three sets of 128 coefficients from block 980 is decomposed into two sets of 64 coefficients in block 985 and the results are provided to block 990. In block 990, up to two motion vectors and each of the two sets of 64 coefficients are encoded using run-length coding to form two descriptions. Then, in block 950, each description is output to one of two channels. In block 952, a check is made to determine if there are any more macroblocks in the current video frame to be coded. If there are more macroblocks to be coded, then the encoder returns to block 920 and continues with the next macroblock. If there are not any more macroblocks to be coded in block 952, then, in block 954 a check is made to determine if there are any more frames to be coded, and if there are not any more frames to be coded in block 954, then the encoder operation ends.
If, in block 954, it is determined that there are more frames to be coded, then, in block 955 the frame index k is incremented by 1 and operation returns to block 915 to begin coding the next video frame.
FIG. 10 is a flow diagram representation of the operations performed by a decoder when the decoder is receiving both descriptions, in accordance with an embodiment of the present invention. In FIG. 10, in block 1005 the frame index k is initialized to zero. Then, in block 1010, the decoder receives bitstreams from both channels and in block 1015 the bitstreams are decoded to the macroblock level for each frame in the bitstreams. In block 1020, the mode to be used for a decoded macroblock is determined. If, in block 1020, the mode to be used for the macroblock is determined to be the I-mode, then, in block 1025 the macroblock is decoded to the block level. In block 1030, each block from the macroblock is decoded into two sets of 64 coefficients, and in block 1035 an inverse four-variable pairing transform is applied to each of the two sets of 64 coefficients to produce the DCT coefficients for each block. In block 1040, an inverse 8×8 DCT is applied to the DCT coefficients for each block to produce four 8×8 blocks. Then, in block 1045, the four 8×8 blocks are assembled into one 16×16 macroblock.
If, in block 1020, the mode to be used for the macroblock is determined to be the P-mode, then, in block 1065, the motion vectors are decoded and a prediction macroblock is formed from a reconstructed previous frame from both descriptions. In block 1070 the prediction macroblock from block 1065 is decoded to the block level. Then, in block 1075, each block from the prediction macroblock is decoded into two sets of 64 coefficients, and in block 1080 an inverse four-variable pairing transform is applied to each of the two sets of coefficients to produce the DCT coefficients for each block. In block 1085, an inverse 8×8 DCT is applied to the DCT coefficients for each block to produce four 8×8 blocks. Then, in block 1090, the four 8×8 blocks are assembled into one 16×16 macroblock, and in block 1095 the 16×16 macroblock from block 1090 is added to the prediction macroblock which was formed in block 1065.
Regardless of whether I-mode or P-mode decoding is used, after either block 1045 or block 1095, in block 1050 the macroblocks from block 1045 and block 1095 are assembled into a frame. Then, in block 1052, a check is made to determine if there are any more macroblocks in the current video frame to be decoded. If there are more macroblocks to be decoded, then the decoder returns to block 1020 and continues with the next macroblock. If there are not any more macroblocks to be decoded in block 1052, then, in block 1055, the frame is sent to the buffer for reconstructed frames from both descriptions. In block 1057 a check is made to determine if there are any more frames to decode, and if there are not any more frames to decode in block 1057, then the decoder operation ends. If, in block 1057, it is determined that there are more frames to decode, then, in block 1060 the frame index, k, is incremented by one and the operation returns to block 1010 to continue decoding the bitstreams as described above.
FIG. 11 is a flow diagram representation of the operations performed by a decoder when the decoder is receiving only description one, in accordance with an embodiment of the present invention. In FIG. 11, in block 1105 the frame index k is initialized to zero. Then, in block 1110, the decoder receives a single bitstream from channel one and in block 1115 the bitstream is decoded to the macroblock level for each frame in the video bitstream. In block 1120, the mode used for a decoded macroblock is determined. If, in block 1120, the mode of the macroblock is determined to be the I-mode, then, in block 1125 the macroblock is decoded to the block level. In block 1130, each block from the macroblock is decoded into two sets of 64 coefficients, and in block 1132 an estimate for the two sets of 64 coefficients for the description on channel two, which was not received, is produced for each block. In block 1135 an inverse four-variable pairing transform is applied to each of the two sets of 64 coefficients to produce the DCT coefficients for each block. In block 1140, an inverse 8×8 DCT is applied to the DCT coefficients for each block to produce four 8×8 blocks. Then, in block 1145, the four 8×8 blocks are assembled into a 16×16 macroblock.
If, in block 1120, the mode of the macroblock is determined to be the P-mode, then, in block 1165, up to two motion vectors are decoded and a prediction macroblock is formed from a reconstructed previous frame from description one. In block 1170 the prediction macroblock from block 1165 is decoded to the block level. Then, in block 1175, each block from the prediction macroblock is decoded into two sets of 64 coefficients, and in block 1177 an estimate for the two sets of 64 coefficients for the description on channel two, which was not received, is produced for each block. In block 1180 an inverse four-variable pairing transform is applied to each of the two sets of 64 coefficients to produce the DCT coefficients for each block. In block 1185, an inverse 8×8 DCT is applied to the DCT coefficients for each block to produce four 8×8 blocks. Then, in block 1190, the four 8×8 blocks are assembled into a 16×16 macroblock, and in block 1195 the macroblock from block 1190 is added to the prediction macroblock formed in block 1165.
Regardless of whether I-mode or P-mode decoding is used, after either block 1145 or block 1195, in block 1150 the macroblocks from block 1145 and block 1195 are assembled into a frame. In block 1152, a check is made to determine if there are any more macroblocks in the current video frame to be decoded. If there are more macroblocks to be decoded, then the decoder returns to block 1120 and continues with the next macroblock. If there are not any more macroblocks to be decoded in block 1152, then, in block 1155, the frame is sent to the buffer for reconstructed frames from description one. In block 1157 a check is made to determine if there are any more frames to decode, and if there are not any more frames to decode in block 1157, then the decoder operation ends. If, in block 1157, it is determined that there are more frames to decode, then, in block 1160 the frame index, k, is incremented by one and the operation returns to block 1110 to continue decoding the bitstream as described above.
While the decoder method of operation shown in FIG. 11, and described above, is directed to an embodiment in which the decoder is only receiving description one, the method is equally applicable when only description two is being received. The modifications that are required merely involve changing block 1110 to receive the bitstream from channel two; changing block 1165 to form the prediction macroblock from a reconstructed previous frame from description two; and changing blocks 1132 and 1177 to estimate the coefficients sent on channel one.
In the foregoing detailed description and figures, several embodiments of the present invention are specifically illustrated and described. Accordingly, it will be appreciated that modifications and variations of the present invention are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the invention.
## Claims
1. An encoder comprising:
a first module responsive to a video frame, the first module configured to divide the video frame into macroblocks;
a mode selection element configured to select each of the macroblocks for intra-frame coding mode (I-mode) encoding or for predictive coding mode (P-mode) encoding;
an I-mode module configured to encode a first macroblock that is selected for I-mode encoding by coding color values of the first macroblock into an I-mode first description of a multiple description coding scheme and into an I-mode second description of the multiple description coding scheme; and
a P-mode module configured to encode a second macroblock that is selected for P-mode encoding by coding both a motion vector and an error into a first P-mode description and into a second P-mode description by predicting the second macroblock based on a previously encoded frame.
2. The encoder of claim 1, wherein the error corresponds to a difference between the second macroblock and another macroblock from a previous frame that is used to predict the second macroblock.
3. The encoder of claim 2, wherein the motion vector is descriptive of displacement between the second macroblock and a best matching macroblock.
4. The encoder of claim 3, wherein both the error and the motion vector are coded into multiple descriptions.
5. The encoder of claim 1, wherein the mode selection element applies one macroblock to the I-mode module for every x macroblocks that are routed to the P-mode module, where x is a number between 10 and 15.
6. The encoder of claim 1, further comprising a rate control unit, wherein the rate control unit is configured to influence whether the mode selection element applies each of the macroblocks to the I-mode module or to the P-mode module.
7. A method for encoding video frames comprising:
dividing a video frame into macroblocks;
selecting each of the macroblocks for I-mode encoding or for P-mode encoding;
encoding a first macroblock that is selected for I-mode encoding by coding color values of the first macroblock into an I-mode first description of a multiple description coding scheme and an I-mode second description of the multiple description coding scheme; and
encoding a second macroblock that is selected for P-mode encoding by coding both a motion vector and an error into a P-mode first description and into a P-mode second description.
8. The method of claim 7, wherein the motion vector represents a displacement between the second macroblock and a best matching macroblock.
9. The method of claim 7, wherein a first set of descriptions comprises the I-mode first description and the I-mode second description, and wherein, when one description of the first set of descriptions is not received, the video frame can be reconstructed by using another description of the first set of descriptions.
10. The method of claim 9, wherein a second set of descriptions comprises the P-mode first description and the P-mode second description, and wherein, when one description of the second set of descriptions is not received, the video frame can be reconstructed by using another description of the second set of descriptions.
11. The method of claim 7, wherein the error corresponds to a difference between the second macroblock and a previously encoded frame.
12. The method of claim 7, wherein both the error and the motion vector are coded into multiple descriptions.
13. The method of claim 7, wherein the I-mode encoding is applied to one macroblock for every x macroblocks, where x is a number between 10 and 15.
14. The method of claim 7, wherein a determination as to whether to apply the I-mode encoding or the P-mode encoding to a particular macroblock is responsive to a rate control unit.
15. The method of claim 7, wherein a determination as to whether to apply the I-mode encoding or the P-mode encoding to a particular macroblock is responsive to a redundancy allocation unit.
# Average Speed Formula
Average speed, as the term itself suggests, is the average of the speed of a moving body over the overall distance it has covered.
Average speed is a scalar quantity, which means it is represented by magnitude only; the direction of travel is not important, because average speed is linked to the distance covered by the object rather than its displacement.
Average Speed Formula
The average speed is computed as the ratio of the total distance traveled by the body to the total time taken to cover that distance. Note that it is not simply the arithmetic mean of the individual speeds.
The average speed equation is expressed as:

$S_{AVG} = \dfrac{\text{Total Distance Traveled}}{\text{Total Time Taken}} \quad (1)$

$S_{AVG} = \dfrac{D_{total}}{T_{total}} \quad (2)$
Equation (2) is the average speed formula for an object moving at a varying speed.
Average Speed Problems
The following examples show how to compute average speed.
Solved Examples
Problem 1: A runner sprints at a track meet. He completes an 800-meter lap in 1 minute 20 seconds. After the finish, he is back at the starting point. Calculate the average speed of the runner during this lap.
To calculate the runner's average speed, we need the total distance traveled and the total time taken to cover that distance.
In this case, the distance traveled is 800 meters and the time taken is 80 seconds.
So, applying the formula for average speed, we have

$S_{AVG} = \dfrac{800}{80} = 10 \text{ m/s}$
Problem 2: Vikram drove his car for 4 hours at 50 miles per hour and for 3 hours at 60 miles per hour. Find his average speed for the journey.
To find the average speed, we first compute the total distance covered by Vikram.
D1 = 60 × 3 = 180 miles
D2 = 50 × 4 = 200 miles
Therefore, the total distance traveled is
D = D1 + D2
D = 180 + 200
D = 380 miles
So the average speed is

$S_{AVG} = \dfrac{380}{3+4} = \dfrac{380}{7} \approx 54.29 \text{ miles per hour}$

Thus, Vikram's average speed for the journey by car is 54.29 miles per hour.
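Both worked problems can be checked with a short script (a simple illustration; `average_speed` takes (speed, duration) segments in consistent units):

```python
def average_speed(segments):
    """Average speed = total distance / total time for (speed, duration) segments."""
    total_distance = sum(speed * duration for speed, duration in segments)
    total_time = sum(duration for _, duration in segments)
    return total_distance / total_time

# Problem 1: 800 m covered in 1 min 20 s = 80 s.
print(800 / 80)  # 10.0 m/s

# Problem 2: 4 h at 50 mph plus 3 h at 60 mph.
print(round(average_speed([(50, 4), (60, 3)]), 2))  # 54.29 mph
```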
General Info
Meeting Times
• Course meetings: MWF 9:10 AM – 10:00 AM. Boyd 204.
• Discussion section: Tuesday 2:20 PM – 3:35 PM. Geography and Geology 155.
The course begins Wednesday, August 18th. Our first meetings will be in-person.
Course discussions will be on the Zulip channel here: https://uga2250fall2021.zulipchat.com/
Syllabus and Scheduling
Week Number Month Day Weekday Topic Description Notes
1 Aug 18 W A Preview of Calculus Course Intro (Icebreaker/Syllabus/Review)
Aug 20 F The Limit of a Function A0: Precalc review, tangents and secants, position and (average) velocity
2 Aug 23 M The Limit Laws B1: The Concept of Limit; Graphical Limits, One- and Two-Sided Limits, Vertical Asymptotes
Aug 24 T The Limit Laws B2/B4: The Limit Laws; forms 0/0 and K/0 (K nonzero), The Squeeze Theorem *drop-add ends Aug 24
Aug 25 W Continuity B2/B4: The Limit Laws; forms 0/0 and K/0 (K nonzero), The Squeeze Theorem
Aug 27 F Flex Day B3: Continuity; Intermediate Value Theorem Homework Check-In: Up to B4.
3 Aug 30 M Defining the Derivative B5: The Definition of the Derivative, Applications, Estimating the Derivative at a Point
Aug 31 T The Derivative as a Function B6: The Derivative as a Function; Sketching the Derivative, Derivatives and Continuity, Higher Order Derivatives; Applications
Sept 1 W Differentiation Rules C1: Differentiation Rules - Constant, Power for integer exponent, sum/difference, constant multiple, product, quotient, applications (no e^x here)
Sept 3 F Derivatives as Rates of Change C2: The Derivative as a Rate of Change
4 Sept 6 M Labor Day - No Class
Sept 7 T Derivatives of Trigonometric Functions C3: Derivatives of Trig Functions
Sept 8 W The Chain Rule C4: The Chain Rule
Sept 10 F Review Homework Check-In: up to C4
5 Sept 13 M Review
Sept 14 T Exam 1: Openstax 2.1-3.5, Workbook up to C4
Sept 15 W Implicit Differentiation C5: Implicit Differentiation
Sept 17 F Implicit Differentiation C5: Implicit Differentiation
6 Sept 20 M Derivatives of Inverse Functions C6: Derivatives of Inverse Functions; the power rule for rational exponents, derivatives of inverse trig functions
Sept 21 T Derivatives of Exponential and Logarithmic Functions C7: Derivatives of Exponential and Logarithmic functions - definition of e, derivatives of exponential functions, derivatives of logarithmic functions, logarithmic differentiation
Sept 22 W Derivatives of Exponential and Logarithmic Functions C7: Derivatives of Exponential and Logarithmic functions - definition of e, derivatives of exponential functions, derivatives of logarithmic functions, logarithmic differentiation
Sept 24 F Related Rates C8: Related Rates
7 Sept 27 M Related Rates C8: Related Rates
Sept 28 T Related Rates C8: Related Rates
Sept 29 W Linear Approximations and Differentials D1: Linear Approximations and Differentials
Oct 1 F Maxima and Minima D2: Extrema, Extreme Value Theorem, Critical Points, Local Extrema - must be interior, Closed Interval Max/Min Problems; Absolute/Local Extrema on Graphs
8 Oct 4 M Review
Oct 5 T Exam 2: Openstax 3.6-4.2, Workbook up to C8
Oct 6 W The Mean Value Theorem D3: MVT, IVT, EVT
Oct 8 F Derivatives and the Shape of a Graph D2: EVT, Local and global minima and maxima, critical points
9 Oct 11 M Derivatives and the Shape of a Graph D1: Linear Approximation and Differentials
Oct 12 T Derivatives and the Shape of a Graph D4: Derivative tests
Oct 13 W Limits at Infinity and Asymptotes D4/D5: Derivative tests, concavity
Oct 15 F Limits at Infinity and Asymptotes D5: Concavity, curve sketching, limits at infinity
10 Oct 18 M Applied Optimization D6/D7/D8: Concavity, limits at infinity, optimization
Oct 19 T Applied Optimization D7/D8: Optimization
Oct 20 W Applied Optimization D8: Optimization
Oct 22 F Flex Day
11 Oct 25 M L’Hopital’s Rule D9: L’Hopital’s Rule; Indeterminate Forms *oct 25 withdraw deadline
Oct 26 T L’Hopital’s Rule D9: L’Hopital’s Rule; Indeterminate Forms
Oct 27 W Antiderivatives Review
Oct 29 F Fall Break - No Class No class
12 Nov 1 M Review Review
Nov 2 T Exam 3: Openstax 4.3-4.10, Workbook D1 – D9
Nov 3 W Approximating Area E1: Antiderivatives; Indefinite Integrals, Initial Value Problems
Nov 5 F Approximating Area E2: Approximating Areas; Sigma Notation, Area Estimates, Riemann Sum, Upper/Lower Sums
13 Nov 8 M The Definite Integral E3/E4: The Definite Integral; Limit Definition of the Definite Integral, Notation, Net Signed Area, Total Area, Properties, Average Value
Nov 9 T The Definite Integral E3/E4: The Definite Integral; Limit Definition of the Definite Integral, Notation, Net Signed Area, Total Area, Properties, Average Value
Nov 10 W The Fundamental Theorem of Calculus E5: The Fundamental Theorem of Calculus; Mean Value Theorem for Integrals
Nov 11 F The Fundamental Theorem of Calculus E5: The Fundamental Theorem of Calculus
14 Nov 15 M Integration Formulas and the Net Change Theorem E5: Integration Formulas and the Net Change Theorem
Nov 16 T Substitution E6/E7: Substitution; Indefinite and Definite Integrals with Substitution (Power, Trig); Integrals Involving Exponential and Logarithmic Functions; Integrals Involving Exponential and Logarithmic Functions; Integrals Resulting in Inverse Trig Functions (only do a=1; no inverse secant)
Nov 17 W Substitution E6/E7: Substitution; Indefinite and Definite Integrals with Substitution (Power, Trig); Integrals Involving Exponential and Logarithmic Functions; Integrals Involving Exponential and Logarithmic Functions; Integrals Resulting in Inverse Trig Functions (only do a=1; no inverse secant)
Nov 19 F Substitution E6/E7: Substitution; Indefinite and Definite Integrals with Substitution (Power, Trig); Integrals Involving Exponential and Logarithmic Functions; Integrals Involving Exponential and Logarithmic Functions; Integrals Resulting in Inverse Trig Functions (only do a=1; no inverse secant)
15 Nov 22 M Areas Between Curves E8: Areas Between Curves
Nov 23 T Areas Between Curves E8: Areas Between Curves
Nov 24 W Thanksgiving Break
Nov 26 F No class
16 Nov 29 M Review
Nov 30 T Exam 4: Openstax 5.1-6.1
Dec 1 W Course Review
Dec 3 F Course Review
17 Dec 6 M Course Review
Dec 7 T Course Review Dec 7 last class day, Friday class schedule
# Initial state recognition in HMM
I am building a speech recognition system using Hidden Markov Model in python. I referred to this and this question and its answers, which were very helpful.
In my approach, I split the continuous speech into separate words. I am thinking of using an HMM to detect each word, so the states of my HMM will be phones.
What I understand so far is that an HMM estimates the next state based on the current state (phone). But I don't get how to estimate the first state of the HMM (i.e. the first phone of the word).
Can you suggest the best approach to use HMM to achieve this?
Also, the states of the HMM will be phones, but I am not sure what the observations can be in this problem. There are multiple frames in a single phone, and there is a feature vector corresponding to each frame. What should I use as the observations?
• en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm - Have you looked into this? – Phorce Mar 23 '15 at 9:40
• @Phorce I have seen it, but I didn't understand how to apply it to solve my problem i. e. detecting first state of HMM (first phone). – Gaurav Deshmukh Mar 23 '15 at 10:07
• The Baum-Welch algorithm will give you an approximation to what the next state is. I.e. given the word "Yes" it will predict that after "Y" then comes "E" followed by "S" so you can predict the next state. The Viterbi score might be a better solution. – Phorce Mar 23 '15 at 10:13
• The $\pi$ from the Baum-Welch algorithm is the initial probabilities of the states (i.e. the states at start), which is close to what you're after I think. – Peter K. Mar 23 '15 at 21:02
The Baum-Welch algorithm uses the EM (Expectation Maximization) algorithm to estimate the model parameters $(T, E, \pi)$, where:
$T$: the transition probabilities
$E$: the emission probabilities
$\pi$: the initial probability distribution over the states
Some years ago, I made the following quick-and-dirty implementation (may be fairly broken now), for the discrete case.
Hope this helps.
• I thought that $\pi$ from the Baum-Welch algorithm was the initial probability distribution, which is close to what the OP is asking for? – Peter K. Mar 23 '15 at 21:01
• @dohmatob As I said, the states of the HMM will be phones, but I am not sure what the observations can be in this problem. There are multiple frames in a single phone and there is a feature vector corresponding to each frame. What should I use as the observations? – Gaurav Deshmukh Apr 2 '15 at 7:41
This is my understanding, which is probably incomplete: phones are usually modeled using multiple (~3) states. Observations are feature vectors and are often some variant of mel-frequency cepstral coefficients (MFCCs). Word models can then be constructed by concatenating several phone models together. More detailed information can be found in this PDF describing the applications of HMMs to speech recognition. | 2021-06-20 00:46:02
http://mathonline.wikidot.com/hessian-matrices | Hessian Matrices
# Hessian Matrices
We are about to look at a method of finding extreme values for multivariable functions. We will first need to define what is known as the Hessian Matrix (sometimes referred to simply as the "Hessian") of a multivariable function.
Definition: Let $\mathbf{x} = (x_1, x_2, ..., x_n)$ and let $z = f(x_1, x_2, ..., x_n) = f(\mathbf{x})$ be an $n$ variable real-valued function whose second partial derivatives exist. Then the Hessian Matrix of $f$ is the $n \times n$ matrix of second partial derivatives of $f$ denoted $\mathcal H (\mathbf{x}) = \begin{bmatrix} f_{11} (\mathbf{x}) & f_{12} (\mathbf{x}) & \cdots & f_{1n} (\mathbf{x})\\ f_{21} (\mathbf{x}) & f_{22} (\mathbf{x}) & \cdots & f_{2n} (\mathbf{x})\\ \vdots & \vdots & \ddots & \vdots \\ f_{n1} (\mathbf{x}) & f_{n2} (\mathbf{x}) & \cdots & f_{nn} (\mathbf{x}) \end{bmatrix}$.
For a two variable function $z = f(x, y)$ we have that the Hessian Matrix of $f$ is:
(1)
\begin{align} \quad \mathcal H (x, y) = \begin{bmatrix} f_{11} (x, y) & f_{12} (x, y)\\ f_{21} (x, y) & f_{22} (x, y) \end{bmatrix} = \begin{bmatrix} \frac{\partial ^2 f}{\partial x^2} & \frac{\partial ^2 f}{\partial y \partial x}\\ \frac{\partial ^2 f}{\partial x \partial y} & \frac{\partial ^2 f}{\partial y^2} \end{bmatrix} \end{align}
For a three variable function $w = f(x, y, z)$ we have that the Hessian Matrix of $f$ is:
(2)
\begin{align} \quad \mathcal H (x, y, z) = \begin{bmatrix} f_{11} (x, y, z) & f_{12} (x, y, z) & f_{13} (x, y, z) \\ f_{21} (x, y, z) & f_{22} (x, y, z) & f_{23} (x, y, z)\\ f_{31} (x, y, z) & f_{32} (x, y, z) & f_{33} (x, y, z) \end{bmatrix} = \begin{bmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial z \partial x} \\ \frac{\partial^2 f}{\partial x \partial y} & \frac{\partial ^2 f}{\partial y^2} & \frac{\partial ^2 f}{\partial z \partial y}\\ \frac{\partial^2 f}{\partial x \partial z} & \frac{\partial^2 f}{\partial y \partial z} & \frac{\partial ^2 f}{\partial z^2}\end{bmatrix} \end{align}
Recall from the Clairaut's Theorem on Higher Order Partial Derivatives page that if the second mixed partial derivatives of $f$ are continuous on some neighbourhood of a point, then these mixed partial derivatives are equal on this neighbourhood. That is, for $\mathbf{x} = (x_1, x_2, ..., x_n)$ and $z = f(x_1, x_2, ..., x_n)$ we have that $f_{ij} (\mathbf{x}) = f_{ji} (\mathbf{x})$ for $i = 1, 2, ..., n$ and $j = 1, 2, ..., n$. So the continuity of all of the second mixed partial derivatives of $f$ implies that the Hessian $\mathcal H (\mathbf{x})$ is symmetric.
## Example 1
Find the Hessian Matrix of the function $f(x, y) = x^2y + xy^3$.
We need to first find the first partial derivatives of $f$. We have that:
(3)
\begin{align} \quad \frac{\partial f}{\partial x} = 2xy + y^3 \quad , \quad \frac{\partial f}{\partial y} = x^2 + 3xy^2 \end{align}
We then calculate the second partial derivatives of $f$:
(4)
\begin{align} \quad \frac{\partial^2 f}{\partial x^2} = 2y \quad , \quad \frac{\partial^2 f}{\partial y \partial x} = 2x + 3y^2 \quad , \quad \frac{\partial^2 f}{\partial x \partial y} = 2x + 3y^2 \quad , \quad \frac{\partial^2 f}{\partial y^2} = 6xy \end{align}
Therefore the Hessian Matrix of $f$ is:
(5)
\begin{align} \quad \mathcal H (x, y) = \begin{bmatrix} 2y & 2x + 3y^2 \\ 2x + 3y^2 & 6xy \end{bmatrix} \end{align} | 2018-03-21 13:07:55
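As a sanity check on Example 1, SymPy's `hessian` helper reproduces the matrix computed by hand above:

```python
from sympy import symbols, hessian, Matrix

x, y = symbols('x y')
f = x**2 * y + x * y**3

H = hessian(f, (x, y))  # matrix of second partial derivatives
expected = Matrix([[2*y, 2*x + 3*y**2],
                   [2*x + 3*y**2, 6*x*y]])
assert H == expected    # matches the hand computation; note H is symmetric
```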
https://or.stackexchange.com/questions/6008/correct-way-to-define-constraints-in-pyomo | # Correct way to define constraints in Pyomo
Can I know if the constraint below can be defined as follows in Pyomo for convex optimization?
W and G are arrays of dimension M x N.
del_t = 5
M = 2 # set of active tasks
N = 4 # 4 time steps
maxP = 0.14
d = np.array([20,20]) # deadline in seconds
curr_time = 0
## Third constraint START ##
m.c3 = []
for i in range(M):
    for k in range(N):
        if curr_time + k*del_t <= d[i] + curr_time:
            c3_exp = m.W[i+1,k+1] + m.G[i+1,k+1] <= maxP*del_t
            m.c3.append(Constraint(expr= c3_exp))
            print(m.c3[i])
        else:
            c3_exp = m.W[i+1,k+1] + m.G[i+1,k+1] == 0
            m.c3.append(Constraint(expr= c3_exp))
            print(m.c3[i])
## Third constraint END ##
Can I also know if the output I get below when I run this code is correct?
• Can you please provide an MWE? BTW, it seems to me that the code has not completely adopted the Pyomo environment. You still can enhance that. – Oguz Toragay Mar 29 at 14:11 | 2021-05-06 07:25:50
https://solved.stackoverflow-exchange.ml/2019/03/solved-calculus-ii-professor-will-not.html | # StackOverflow - Solved
looking for some solutions? You are welcome.
## SOLVED: Calculus II Professor will not accept my correct integral evaluation that uses a different method, should I bring this up further? – math.stackexchange.com
user146073: I am a freshman enrolled at an American University. Recently, I took an examination in which the following problem appeared: Evaluate the following integral: $\int_0^4\sqrt{16-x^2}dx$ My answer: ...
Posted in S.E.F
via StackOverflow & StackExchange Atomic Web Robots | 2019-05-20 11:26:31
https://tex.stackexchange.com/questions/267873/latex-error-you-have-run-the-document-with-pdflatex-but-pstricks/267876 | # ! LaTeX Error: You have run the document with pdflatex, but PSTricks
! LaTeX Error: You have run the document with pdflatex, but PSTricks requires latex->dvips->ps2pdf or alternatively the use of the package `auto-pst-pdf'. Then you can run `pdflatex -shell-escape' (TeX Live) or `pdflatex -enable-write18' (MikTeX). See the LaTeX manual or LaTeX Companion for explanation. Type H <return> for immediate help. ... \begin{document}
• All is said in the error message, you have some pstricks code that cannot be compiled by pdflatex alone. You must load the auto-pst-pdf package, and launch pdflatex with one of the mentioned switches. We can help more if you give some more details (which distribution do you have, which editor, &c.) Sep 16 '15 at 17:09
• You are using Sage-TeX. Instead of running pdflatex use xelatex and everything should be fine.
– user2478
Sep 16 '15 at 17:34
I think the error message is pretty clear as it stands. The package PSTricks does not run with pdflatex without additional preparations (like including the package auto-pst-pdf and adding some operating system dependent compilation flag), but only with standard latex producing a dvi file. This dvi file is converted to a ps (PostScript) file with dvips. In a last step you can get a pdf file from the ps file running either ps2pdf or pstopdf (the latter two are two different programs producing different pdf files that should look the same on screen and on paper).
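For the `auto-pst-pdf` route mentioned in the error message, a minimal preamble sketch (assuming shell escape is enabled; the picture is just a placeholder) looks like:

```latex
% compile with: pdflatex -shell-escape file.tex   (TeX Live)
%           or: pdflatex -enable-write18 file.tex (MikTeX)
\documentclass{article}
\usepackage{pstricks}
\usepackage{auto-pst-pdf} % runs the latex->dvips->ps2pdf chain behind the scenes
\begin{document}
\begin{pspicture}(4,3)
  \psline{->}(0,0)(4,3)
\end{pspicture}
\end{document}
```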
• You can compile with pdflatex if you load `auto-pst-pdf`. Sep 16 '15 at 17:22
• @Bernard: OK, I rephrased my answer. My main aim in the answer was to guide the classical LaTeX-dvips-ps2pdf path. Sep 16 '15 at 17:26 | 2021-10-25 16:50:40
https://stats.stackexchange.com/questions/66622/covariates-considered-moderator-or-control-variables | # Covariates considered moderator or control variables?
Are covariates considered moderator variables or control variables? To elaborate on my question, I'm conducting a research study which has:
1. variables that are supposed to affect the dependent variable but that I don't consider in any analysis (e.g., the time of day the experiment was done)
2. variables that are used as exclusion criteria, so that some subjects are removed (filtered) from the analysis (e.g., having a medical condition)
3. variables that are supposed to influence the dependent variable and which I therefore include as covariates in my model (e.g., a GLM or MANCOVA).
What would you call each of these three kinds of variables?
• In general, the difference between a moderator and a control is semantic. If the effect is moderated by a given effect, then the estimates are also controlled by that effect. Usually "control" implies that it is included in the model to reduce/remove confounding of the variable of interest. There is nothing mathematically different about the variable of interest. Aug 6 '13 at 4:19
• @ACD Thanks, but could you elaborate on the semantic difference? I just want to know how to describe the kinds of variables in my thesis. Aug 6 '13 at 4:26
• Say you are running y = a +bx + e. x is the variable of interest. it is confounded by z. That means that z is a moderating variable. if you run y = a + b1x + b2z + e, then you are controlling for z as a moderating variable. Aug 6 '13 at 6:34
• I disagree with @ACD. A moderator is a variable that interacts with the predictor of interest and in this way influences the predictor's relationship with the DV. In a purely statistical sense (that is, ignoring any notions of causality), moderation is synonymous with interaction. A covariate or "control variable," on the other hand, does not interact with the predictor of interest. So, no, the difference between a moderator and a control variable is not merely semantic. The z variable from @ACD's previous comment would be a covariate and not a moderator, since there is no interaction. Aug 9 '13 at 3:22
• ^I don't disagree. Either way, what we've got here is an example of why it is dangerous to rely on metaphor and other linguistic constructs for mathematical concepts. Aug 9 '13 at 3:55
## 1 Answer
A control variable (confounder, potential omitted variable) is a variable you include in the model because you suspect it is confounding the main relationship you are interested in (so it is suspected to be related both to the main independent variable (explanatory variable, predictor, treatment) of interest and to the dependent (outcome) variable).
A moderator is a variable which changes the effect of the main independent variable on the outcome variable, so it interacts with the main independent variable.
A mediating variable is a variable that translates fully or partially the effect of the main independent variable on the outcome.
A covariate is in my opinion unspecified and could refer to either of these.
A variable that is suspected to be related to the outcome variable but not to the main independent variable of interest is a covariate but not a control variable (as it does not control for anything).
A variable that is suspected to be related to the main independent variable of interest but not to the outcome would be an instrumental variable.
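A small simulated example (hypothetical data, ordinary least squares via NumPy) makes the distinction concrete: a control variable enters the model additively, while a moderator enters through an interaction with the main predictor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                    # main independent variable
z = rng.normal(size=n)                    # second variable
# here z is a moderator: it changes the effect of x via the x*z term
y = 1.0 + 2.0 * x + 0.5 * z + 1.5 * x * z + rng.normal(scale=0.1, size=n)

# treating z as a control: y ~ 1 + x + z (additive only)
X_ctrl = np.column_stack([np.ones(n), x, z])
b_ctrl, rss_ctrl = np.linalg.lstsq(X_ctrl, y, rcond=None)[:2]

# treating z as a moderator: y ~ 1 + x + z + x:z (interaction included)
X_mod = np.column_stack([np.ones(n), x, z, x * z])
b_mod, rss_mod = np.linalg.lstsq(X_mod, y, rcond=None)[:2]
```

With a formula interface (e.g. statsmodels) the same two models would be written as `y ~ x + z` versus `y ~ x * z`; only the second recovers the moderating effect.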
To come back to your concrete questions:
1. since you don't use them, it matters little what you call them - alternative predictors (causal factors), unmodelled effects, etc. are all options
2. definitely not controls or covariates; these are just sample selection criteria
3. see above
• +1. Clear, complete, and to the point. Welcome to our site!
– whuber
Feb 4 '14 at 15:55
• +1, However, I wouldn't call "a variable you include" an "omitted variable". Would you mind tweaking that? Feb 4 '14 at 16:47
• @gung Good catch. Yes, that phrase could be confusing. I read "potential omitted variable" in the sense of "if you were to leave it out, it would cause an 'omitted variable' error" (and therefore, by implication, you should include it in the model).
– whuber
Feb 4 '14 at 18:12
• yeah, I added 'potential' later. The underlying problem being addressed is usually referred to as 'omitted variable bias' so often one speaks of 'omitted variables' while they are, in fact, 'variables included to avoid an omitted variable bias'. Feb 4 '14 at 19:06 | 2021-12-05 15:10:22
https://www.eng-tips.com/viewthread.cfm?qid=483935 | Opinion: the Board of Engineering is outdated and it hurts us all
# Opinion: the Board of Engineering is outdated and it hurts us all
(OP)
Just had a conversation with my sister who’s a dentist. Surprisingly (not?) I learned that the Dental Boards are incredibly powerful and influential. They are heavily involved in politics and lobbying. It’s to protect their pay (one of the highest in all professions) and continue their prestigious professional status.
Which got me thinking about our Board. Simply put, we’re comically outdated. It’s not a realization that I have to dig deep for either.
-In an age of specialists we're still lumped in with all the other engineering disciplines, surveyors and architects. Heck, even within the Civil Engineering umbrella the distinction between structural, geotechnical, civil, environmental, etc. is already well established.
-This is the board that still allows Architects to design 1-2 story residential buildings (there are limitations but still, shouldn’t be there to begin with). It waters down our technical skills. Such a dumb move because it sends off the sentiment that we’re not that specialized since someone else can do it. Even a portion of it. I get that Architects used to be the jack-of-all-trades but that was about 70 years ago. That’s how I know the board is full of archaic old timers with outdated thinking.
-There’s no protection for our pay. In the private sector you’re pretty much competing with each other. There’s always an engineer lowballing their service fees and ends up getting the job. In order to remain competitive other firms follow suit and the fees drop altogether.
-The values of the licenses are completely incoherent and inconsistent. To be competitive as a Civil Engineer or Structural Engineer you MUST have your licenses. The same can’t be said about MEs or EEs. It’s an optional bonus for them rather than baseline.
I don’t mean to come off unprofessional and ranty but it’s a sad reality we need to address for our profession. The board needs to break up. It needs an overhaul. There should be one presiding over each discipline (I would go as far as saying there should be one for structural engineers only). Not all lumped together because we “share fundamentals”. Doctors, pharmacists and dentists share massive chemistry and biology fundamentals too and they all have their own board.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
There already are SE boards in some states....but I do get what you are saying. What has amazed me more is organizations like ASCE that seem to be doing nothing with regards to some of these issues. (As I have stated on these boards in the past.)
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
Grass always looks greener on the other side; family doctors are getting undercut by lower-paid nurse practitioners who do not need 4 yrs of medical school nor 3 yrs of residency to start seeing patients and prescribing medicine.
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
It does seem that when it comes to protection of our profession from internal cannibalization, Engineers are pretty bad when compared to most medical professionals. I think most of that has to do with control of the schools and who and how many individuals are admitted. However, with that being said, the sentiment towards medical professionals in our country seems to be reaching a boiling point and I wonder if changes are inevitable.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
It isn't all the fault of the engineering boards. There isn't a lot they can do to control the pay rate when it's the engineers themselves that lowball the bids, or undervalue their experience. Controlling the sector and labor market is well beyond the capability of the boards. An example: instead of selling the high-quality analysis that the latest technology brings as a value-add, it is more often used simply as a means of doing the job faster and reducing the rate even further. That method is a race to the bottom. The loopholes in the engineering laws also need to be a bit more restrictive too, but engineers do not typically have the appetite to get involved with the organisation and politics that it takes to do that.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
#### Quote (LockeBT)
In an age of specialists
Could it be that's part of the problem? There's always been a general distrust of "specialists" in the US - lots of ink has been spent on the subject - and it seems it's only become worse recently. More and more people are dismissing people who build their careers on knowledge - who needs a college degree, anyway? Note that I'm not in this camp - I'm a 'knowledge worker' and 'professional expert' myself so I certainly understand our value to society, but we live in a society that seems (at least in part) to not want to see our value. Doctors enjoyed a special place between complex and 'unknowable' science and common public need. There are still lots of folks who prefer home remedies to more widely accepted medical cures and treatments - just look at the vaccine 'debate.' We lack the common public need - or at least that's the perception. Nearly everyone in a developed country will see a doctor once in their lives...most at least once per year.

A couple of quick (and unconfirmed) Google searches: MDs/1000 people in the US was about 2.6 in 2017 with an estimated 278 visits to a doctor's office per 100 people, and there were about 54,625 engineers in the US in 2020 who identified as "Civil/Structural". So that's about .0026 doctors/person and .000166 structural engineers/person in the US. But now consider how many people actually hire a structural engineer directly. If every engineer is capable of doing 120 projects per year, and 60% are from repeat clients, that's 48 'new' clients/year/engineer - so about 2,622,000 people experience working with a structural engineer for the first (and probably only) time each year. So let's reset the per capita for the engineers...54,625/2,622,000=0.02083...or about 8x the per capita rate for doctors. Factor in the repeat clients - contractors, architects, etc. - and you can probably bring it down to closer to 6 or 7 times.
Again, these numbers are really rough and unsubstantiated (thanks, Google!), but I think it shows that our industry is actually a lot more saturated than medicine when you factor in the demand side. And where supply is more plentiful compared to demand...prices fall. (Thanks, Adam Smith!).
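The back-of-envelope chain above can be replayed in a few lines (all inputs are the post's own rough, unverified figures):

```python
# inputs taken from the post (ballpark estimates, not verified data)
md_per_1000 = 2.6                      # MDs per 1000 people, 2017
structural_engineers = 54_625          # self-identified "Civil/Structural", 2020
projects_per_year = 120                # assumed capacity per engineer
repeat_share = 0.60                    # share of work from repeat clients

new_clients = structural_engineers * projects_per_year * (1 - repeat_share)
se_per_new_client = structural_engineers / new_clients
ratio = se_per_new_client / (md_per_1000 / 1000)

print(f"{new_clients:,.0f} first-time clients/year")        # 2,622,000
print(f"about {ratio:.1f}x the MD per-capita rate")         # 8.0
```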
We also have really relaxed educational requirements. Scrape by with a C average at just about any school out there and, after you get your degree, boom! - you're an engineer. Lots of structural engineers practice in subordinate roles without their license - they still have 'structural engineer' in their job title. Until we increase the rigor of the degree requirements and mandate graduate degrees for licensure, we won't benefit from the bump in compensation that the inevitable attrition in obtaining the credential brings (again, Adam Smith discusses this at length...Wealth of Nations, Book 1, Chapter 10). Whether or not to mandate a master's degree is another matter of debate in the industry.
Let's also not forget about how these professions are getting paid. Doctors are getting paid by insurance companies. For the average American, a visit to the doctor is somewhere between free and $50, while the doctor is actually getting closer to $200. Never mind the monthly insurance payment. That's just money people never see. To link the two is an abstraction most people either can't or don't want to understand. A few years ago I tallied it up...for my healthy family of 4 with only "wellness" visits to the doctor...we were paying roughly $1,200 per visit to the doctor. And that's with my employer picking up almost 90% of my insurance premium (family premiums were out of pocket). But most people don't figure that out (and our $200,000 bill when my son was born only cost $8,000...so there's that!). A good comparison is design structural engineering vs. forensic structural engineering. Lots of folks bring it up on these boards...forensics pays more. And it's true. How often is there a structural failure that doesn't involve an insurance company? Once the insurance company's pockets open up, the engineer can get a lot more. When the owner/end user is paying for services out of pocket, there's a lot more downward pressure on our fees. Same happens with doctors/hospitals. For most people, big medical bills are paid off on interest-free payment plans. Would you design a house for $10,000 and then accept $50/month for the next 16+ years to pay for it? Because a lot of medical institutions will do it for a surgery/hospital stay.
I think we disagree in the house category - if it fits the IRC, then you don't need a structural engineer. There's lots of things that can be built without our input quite safely. Just like there are several things in medicine that can be handled without an MD or DMD.
I get what you're saying, and in principle I agree with the sentiment...but I can't stand all these comparisons to doctors, lawyers, and real estate agents. Do we get paid as much? No? Do we take on more liability? Depends. Do we take on a much lower risk:reward ratio? You bet. But rather than bemoan the fact that we don't have it as good as them, let's discuss the issues within our industry as they pertain to our industry...not how they're different from fundamentally different industries.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
(OP)
phamEng,
I guess my gripe is about representation (or lack thereof). You explained real well why that is for us. The rule of supply/demand does govern all at the end of the day. And based on other responses we have issues in other areas also (such as education and lower requirements) which makes my statement of putting the bulk of the blame on our board shortsighted.
What would you change and what problems would you address, specifically for structural engineers, if you were in charge and had some of the power?
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
(OP)
Also by “specialists” I meant something along the lines of an orthodontist to a dentist or an ophthalmologist to an optometrist. Similar to a structural engineer to a civil engineer.
I’m not quite sure where the distrust comes from but “specialist” refers to one with a higher, more in-depth skill-set in a specific field who goes beyond the average requirements for that field. I get referred by my MD left and right to these other guys/gals and the sentiment is that those guys just know more than my MD about a specific area. They’re like MD+ lol
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
For the USA steel industry, I can live with domestic competition but I'll support nearly any reform that would reduce the rate of offshoring being brought on by technology. I'm pretty sure my position (licensed connection engineer for a steel fabricator) could be almost entirely eliminated, in the USA, if requirements for licensure were relaxed. Heck, we already do almost all the detailing overseas, and the popular corporate speak concerning "optimizing the design" and "need for speed" project management is certainly coercing the industry in an undesirable direction.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
LockeBT - maybe it's worse here. I live right between the first (failed) attempt at English settlement in America and the first successful one. So the "I can do this myself - who needs your learning" mentality runs deep. But it is one of those things that has pervaded the American psyche...for better and for worse. Specialist, expert...many find that too narrow and prefer the jack-of-all-trades approach because that's a more independent way to live. I enjoy it myself - working on my own car, doing my own home improvement projects, etc...but I also know the value of an expert when one is called for...many do not.
I see what you're saying, though...you're looking at just within our profession. I think it's a false comparison, though. For a neurosurgeon, they went to undergrad (presumably a biology degree), med school to get their MD, and then at least one, probably two residencies before finally becoming a fully accredited surgeon. If you start college at 18, you might get there by the time you're between 30 and 34...assuming you can hack it (pun intended). So that person is a specialized medical doctor, really just starting their career (with an eye-popping mountain of debt in most cases) when a lot of engineers are coming into "senior engineer" positions at their firms and paying off their 10 year federal student loans. For us...get your bachelor's degree and your specialization is a combination of a few electives and the place you choose to get your first job. That's about it. And if you want to change? Might take a pay cut and some convincing of the new employer, but it's doable - a lot easier than restarting the residency process for a doc. So I think rather than a true specialization, we're just practicing quasi-independent branches of related engineering on little more than OJT.
What would I change? I'm a proponent of a required Master's Degree. Earn your Civil Engineering degree, then if you want to continue on to be a PE, get your master's degree in your specialty of choice followed by a 4 year EIT/EI period. If we really want to bring up the quality of the profession, we could go as far as requiring an "approved" EIT period...just as residency programs are required to be certified. That way we can know that engineers in training are actually being trained and not just marooned in a cubicle with little guidance, too much work, and canned recommendation letters at exam time. That's a pipe dream, though, and there's insufficient impetus to force the massive investment that would be required.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
Working in a heavily regulated profession can be its own kind of hell.
= = = = = = = = = = = = = = = = = = = =
P.E. Metallurgy, consulting work welcomed
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
phamENG...
120 projects per year? You must work a whole lot faster than I do.
I have had years where I worked 60+ hours per week all year on less than half a dozen projects. I once worked four months straight at 90 hours and seven days per week on a single project, and there were ten of us on that meat-grinder (six engineers and four CAD drafters). This was the site civil and site electrical design for a 5000-bed state prison and the agency was making major changes to the design up to the final deadline, but the deadline couldn't be moved because it had been set by the state legislature. In fact, on the day we were plotting, stamping, and signing 200+ sheets for our final drawing submittal, I got a call from the agency's program manager (a consultant himself) telling me that, at the request of the agency, the architect (a parallel consultant to us) and his mechanical engineer were moving and resizing the equipment pads for all 120 buildings. He asked if we could incorporate the changes for this submittal. I told him no, but that we could do it with an addendum during bidding.
Seriously, though, my smallest project ever took just 3 hours of my time and I have had many projects that took less than a man-week. But, 120 projects per year? Not me.
============
"Is it the only lesson of history that mankind is unteachable?"
--Winston S. Churchill
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
fel3: It depends on the kind of work you do. In my line of work (concrete restoration) my jobs tend to be fairly typical and in all but a handful of cases it takes me more time to drive to site / draft drawings than it does to design things. That's because I've done repairs so many times I can pretty much know what I'm going to do just by hearing a description of the thing. If this was all I did, a project / day or two per day would not be unreasonable. This obviously leads to my boredom, but that's another story (and why I wear a lot of hats).
However, if one is engaged in multi-story or megastructures even a project per year might be a lot!!
phamENG: I'm convinced additional schooling is not the answer to people having no clue as to what they are doing. The problem isn't a lack of knowledge in and of itself so much as it is lack of knowledge combined with A) overconfidence, and B) a lack of feedback due to safety factors pulling their weight in unintended ways. If a designer knows what they are doing, the building stands up. If a designer is a total muddlehead, the building is still likely to stand up. There are very few mechanisms by which poor designers are punished. It seems to me that the intention is for experienced engineers to guide the young, but this remains largely just a pipe dream in practice given financial constraints / time pressures driving a busy schedule with little time for review / mentorship. And of course this leads to a cycle where slightly less knowledgeable people turn into senior designers, who in turn "mentor" the next crop who are even less knowledgeable, and so on and so forth. Schooling won't solve this issue.
EDIT - These issues plague all of us despite our mutual goal (I assume) of trying to actually know(ish) what is going on. Overconfidence is a humanistic trait. And the question remains: how do WE get feedback on whether what we are doing is correct? This is not an easy problem to solve at all.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
fel3 - 120 is a lot...but doable if you do lots of small jobs. I can keep the lights on doing forensic investigations of houses...45 minutes of driving, an hour or two at the house looking it over and taking measurements, and an hour or two running a few numbers and writing a report. If I do 3/week I'm pushing 150 for the year and covering my overhead and a chunk of my salary. Designs...not so much. More like a couple/month, but those are generally with architects, contractors, and other repeat clients. But your point helps mine...if we're doing even less and interacting with fewer new clients, then we're even more 'over saturated'.
Enable - I understand your point, and I think I agree with it at its core...but I take it another direction. Rather than simply saying more schooling isn't required, I think an overhaul to the schooling is required before we can push more of it. As you mentioned, we've gotten ourselves into a cycle of declining level of knowledge in design offices. How do we correct that? The two ways I can think of are to a) enforce a more strict and rigorous continuing education program or b) reprogram at the University level to boost the knowledge going in from the bottom by requiring internships and more practical applications of the engineering prior to graduation. The former has the advantage of building on the senior engineers' experience, but the disadvantage of operational inertia. The senior engineers will be less likely to accept a new or different way of looking at something, and if it proves they've actually been wrong about something...how many of us have the backbone to admit we're wrong? And what do you do with that knowledge? The latter has the advantage of beginning to shape young engineers and prime them for their careers, but without much experience it's hard to ground it in something tangible. I will say that there are several topics related to structural engineering that are important for a successful career that simply aren't taught until you get to a Master's level...at least not here. Steel connection design, advanced structural analysis, dynamics/vibrations, even seismic theory and specific applications of hydrodynamics to wind are all addressed in detail during an MS. Are these things that can be learned on the job? Yes, but there's something to be said for learning the theory early and then going out and learning to apply it.
Those financial constraints we brought up...they make it very difficult to both learn the theory and apply it on the job...you typically have to choose one or the other (and if you choose to learn it you probably won't have a job for long). I think having those gaps coming out of school combined with poor mentorship/leadership is what's feeding that cycle.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
What I have noticed with most engineers in my area is that they lack knowledge of basic statics. To me, this is fundamental in everything we do. You can get a "C" in statics, move on to easier classes, graduate and still probably know nothing about statics.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
XR - agreed. This is a place we could learn from the medical profession. My wife went to med school for her master's (not an MD), but she took courses with MD students in some cases where they overlapped. She was required to get a higher grade than the MD candidates to pass. The MDs going into family practice, dermatology, etc. only needed a C, she needed a B, and the MDs looking to become surgeons had to get an A for any hope of being accepted to a residency. So if you're going to be a traffic engineer...statics maybe isn't so important. A C will do fine. Civil Engineering Technologists who want to go into structural work? A B would probably be acceptable. But if you're going to become a full fledged structural engineer, then you'd better have an A in statics.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
I find that a lot of engineers see the negatives in their peers rather than positives. I say see the positives, look for the good. Change your mindset, the profession is going good and is great. Looking at salaries of my peers in my local area generally we are fairly paid.
The only gripe I would put forward for boards is to be more proactive rather than reactive. It would be good if they provide knowledge based cpd, or encourage more open sharing of engineering knowledge.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
Actually I got a "C" in statics back in the day. (Considering it was a ME professor that was [the icing on the cake] just about impossible to understand.....I consider myself lucky to have gotten that.) But I went on to get As & Bs in more advanced structural analysis classes.....including some at the graduate level....this is not to mention the fact I am a SE in numerous states.
So a "C" in any single undergrad class isn't anywhere near the top in the list of concerns in this profession (IMHO).
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
I see many parallels between medicine and engineering.
Your standard medical general practitioner (family doctor) isn't swimming in cash. They deal with the routine and send patients onward when a specialist is needed.
The average structural engineer designing the same slab over and over again isn't making the big bucks. If something crosses their desk that they aren't comfortable with, they usually know the right specialist to refer their client to. And the specialist engineers who get into the specialist work usually make good dough.
The difference is, the medical practitioner values entering specialty work, whereas for the structural engineer, any mention that they should leave the world of routine structural engineering is met with contempt. Why would they invest all this time to become an engineer if they aren't going to do "real engineering"?
The engineer debates going back to school for a masters degree in advanced concrete or steel formulation, which may enhance his knowledge of that same old slab he's been designing for years, but won't change the outcome of his designs.
He stays put at his desk, copying and pasting standard details from job to job, forwarding calls for interesting engineering work onto the specialists, the guys who don't do real engineering work, but handle that "oddball stuff that needs an engineer".
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
WARose - good point. My statistics grade (and now my knowledge of statistics) suffered greatly from the fact that my professor's accent was thick enough to cut with a knife. Brilliant man and okay one on one, but in a lecture hall? You can forget it.
If I may, though, I think you're missing the 'forest' of my comment for the 'trees'. You struggled through statics not because of difficulty with the subject, but because you couldn't understand the prof. You rectified it later and figured it out. There are plenty of students who scrape by with a C because they just don't get it...and then still go on to careers where using it is an imperative. How many people come on these boards, claiming to be a junior structural engineer, and their questions are pure statics? So perhaps the grade itself isn't the perfect metric, but there should be something to better evaluate a candidate's aptitude for a particular field of engineering.
NorthCivil - I agree with most of that, but I think you have the roles swapped. The guy spec'ing the same slab every day and picking his down out of a table is not the one doing 'real' engineering...it's the guy doing the advanced odd ball stuff that's really doing the engineering. (I might be prejudiced - when I worked in an office I was the odd ball guy who got most of the tough engineering problems and that's still where I'm happiest working.)
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
The state boards are set up to regulate the profession, not to protect the profession or advocate for the profession. If the engineering that gets done, gets done properly, then the boards are doing their work. If we all go broke in the process, that's not their concern.
Personally, I don't have a lot of problem with the way things are set up, or with the testing or qualification that goes on. While I can think of various major and minor tweaks, pretty much any change is going to benefit one person and hurt another.
On the comments on people hiring their own engineers- I think the result is about as would be expected. Most people aren't in a position to ever need that service. It's like comparing the need for a doctor to the need for, say, a podiatrist. It doesn't mean the profession is more or less worthwhile or valued, just that most people don't have the specific problem that would require consultation with that specialist.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
#### Quote:
How many people come on these boards, claiming to be a junior structural engineer, and their questions are pure statics? So perhaps the grade itself isn't the perfect metric, but there should be something to better evaluate a candidate's aptitude for a particular field of engineering.
Good point....but I'm not sure what the metric would be.
In actuality, I've known several who graduated with fantastic grades but just didn't make it as engineers because they just couldn't put it all together. They turned everything into a research project.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
(OP)
JStephen,
I work in CA btw. I wonder who regulates the board requirements. The DCA?
I understand the boards are set up to regulate the profession. I believe part of regulation entails standardizing engineering fees (at least minimum fees). However one of the comments above mentioned that it's beyond their functions. Which is unfortunate if you ask me. Because it creates a sphere of cannibalism and dissolves intra-professional support. What I mean is that in the private sector, unless you are a CEO or one of the principals within a firm, having an SE is a disadvantage. As in, heck, why pay for an SE when a PE can be trained to do the same thing with lower pay? That's not economical. The principal engineers will use their SE stamps anyway, they don't need another SE unless there is something extremely specialized.
To this day, I still wonder why I had to take the Surveying Exam for my PE. As in...why? The Seismic Exam makes sense because that's truly state-specific since CA is a high seismic region. But Surveying? Who added this? What are the justifications for this? Is it just to make money? I failed my Surveying Exam the first time and I have no idea how I passed my 2nd time. Sometimes I wonder if I even deserve my PE lol. Point is the CA Board is outdated with that requirement. I understand students are to be exposed and trained in CE related disciplines (including Surveying). But that is purely for the sake of academics. The PE Exam is a PROFESSIONAL exam. It should hold the standards of...well, professionals. As in, it should be applicable and relevant to the work. I have honestly not used a single thing I have learned in Surveying ever since. Studying for it was a waste of time/resources for me. I was going through the uninterested motions of learning something knowing I will never ever use it. If I take it today I will fail miserably.
Also, whose idea is it to require me to pay for my PE and my SE license renewal? I mean it isn't much but it's so backwards. The SE license is a higher tier version of the PE correct? Why treat them as 2 separate licenses?
Perhaps this comes off as an incoherent rant and my gripe is misguided and misplaced. It might not be the board after all. I have only worked for 10 years and (please tell me I'm wrong) I can sense the decline in our profession. I think we do such a good job that it self-penalizes. Buildings aren't collapsing. Structures perform overall pretty well barring extreme events. Majority of our work is over-designed (for good reasons) and better performing than ever. All those factors in unison lower the demand for our work (and pay). It's the reality I find hard to swallow.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
LockeBT - to your point about the board fixing prices...I certainly don't agree with that - particularly in a state as large as California. What baseline would you tie it to? You'd either end up charging market rate for downtown San Francisco and nobody in Salinas would ever be able to hire an engineer, or the other way around and leave mountains of cash on the table when you do work in San Francisco. Fixing prices will either hurt the consumers (and, indirectly, the profession) or hurt the profession directly.
One 'solution' that I've heard of...I think it was Alabama that was considering it a few years back?...not allowing fixed fee proposals. You'd be required to charge by the hour. It's an imperfect system, since I'm sure you could agree over lunch to not do more than X hours, but it's something. I don't completely support it, since hourly projects limit my margins and properly negotiated lump sum projects are significantly more lucrative, but I can see an advantage if you have trouble with your budgets. By doing it time and materials/cost-plus you may not feel as much pressure to send a project out half-cocked as you would if you burn through a tight fee and only have half the engineering done.
WARose - also true. I wonder if there'd be a way to establish some sort of 'certified' internship. It could ensure (or at least increase the chances of) meaningful exposure and experience in the practice of engineering and weed out the high academic performers that can't function in practice from the good junior engineers. If ABET put it together and made it a degree requirement under their accreditation system, then it could have an impact. But it also feels like it would need to be part of an accredited master's program - once the student has a 4 year degree and has chosen their specialty, they do 6 months to a year of one of these internships in that specialty and it satisfies a requirement for their master's program. Internships prior to graduation are beneficial, sure, but I know for me the only structural classes before senior year were statics and structural analysis 1 - everything else was a 400 level requirement or elective, so I would have been utterly useless to a structural firm. When I took my FE exam, I had no idea what the wide-flange available moment curves were - I'd never seen them before.
### RE: Opinion: the Board of Engineering is outdated and it hurts us all
phamENG-
I didn't read through all of your posts, but I read the first. I generally agree. I want to say two things: 1.) I don't think we need to require post graduate education for SE's. Even without it, we still put in an equivalent amount of time, including apprenticeship or qualifying experience, and testing as a general practitioner... and as you become competent as an SE, you realize it takes ten years of experience before you are capable of going solo. Although a post graduate education starts you out ahead, I don't think you necessarily end up there after that ten years. 2.) I am convinced, the pay difference has everything to do with working for private owners. As you noted, insurance companies pay more... but so do government agencies. Even my CE buddies get paid better.... because they contract with the government. I know SE's who design bridges..... who also get paid much better. Not only do they work primarily for government agencies... but they also DON'T WORK FOR ARCHITECTS.
Bottom line is, this is a great profession but a terrible business....
BTW- 120 jobs? Really? I do like 25, although I could do maybe twice that... if my newly launched practice would take off....
To comment on the original post- I don't think it's the board's job to standardize pay. Even the AIA no longer does that. What I try to remember is my clients don't know anything about what I do. So, I explain it. And I explain what the next guy might and might not do, for the lower fee. And I don't want to work for the client that just wants a rubber stamp... I've put too much time into my profession, to pimp it out.
https://www.physicsforums.com/threads/triple-integrals-again.312282/ | # Triple integrals again
1. May 6, 2009
### joemama69
1. The problem statement, all variables and given/known data
evaluate the integral $\iiint_W y^2z^2\,dV$ over W, which is the region bounded by $x = 1 - y^2 - z^2$ and the plane $x = 0$
2. Relevant equations
3. The attempt at a solution
since x = 0, that makes $y^2 + z^2 = 1$, unit circle in the yz plane right?
so would the answer be the area of the circle times $y^2z^2$
2. May 6, 2009
### tiny-tim
uhh? the answer is a number … how can ∫∫∫ $y^2z^2\,dV$ have any y or z in it?
3. May 6, 2009
### Cyosis
Draw the region. You will see that it is a paraboloid with a maximum at y=z=0 and x=1. The paraboloid is cut off at x=0 so the region looks like the top of paraboloid. Which is not just a circle.
4. May 11, 2009
### Cyosis
Re: Which method is best for triple integration
Those integration limits are not correct. For A) if your integration order is dzdydx there will be a z left in your answer. Every other order of integration has a similar problem so it can't be right.
For B) you don't integrate over x,y and z, but over r, theta and x. This means you have to find limits for r, theta and x. The polar coordinates in this case would be $x=x, y=r \cos \theta, z=r \sin \theta,y^2+z^2=r^2$. Try to find the correct boundaries for both A and B. Realize the region is a paraboloid with an extremum at 1 on the x axis and cut off by the yz-plane.
I will leave it for you to decide which one is easier.
5. May 11, 2009
### HallsofIvy
Re: Which method is best for triple integration
You might try first swapping x and z so that the integrand is $x^2y^2dV$ and the bounding surfaces are $z= 1- x^2- y^2$ and z= 0.
In cylindrical coordinates that would be $(r^2\cos^2(\theta))(r^2\sin^2(\theta))\, r\,dr\,d\theta\, dz = r^5 \sin^2(\theta)\cos^2(\theta)\,dr\,d\theta\,dz$ and the boundaries $z= 1- r^2$ and $z= 0$.
6. May 11, 2009
### joemama69
Re: Which method is best for triple integration
Well I was originally thinking polar was easier but i think ive changed my mind.
BTW im now using $x^2y^2$ and $z = 1 - x^2 - y^2$, $z = 0$
I need help setting up my limits
z... 0 to $1 - x^2 - y^2$
x... $-(1-y^2)^{1/2}$ to $(1-y^2)^{1/2}$
y... -1 to 1
whats right and whats wrong
7. May 13, 2009
### HallsofIvy
Re: Setting up Integration limits
So you are integrating over the region bounded by $z= 1- x^2- y^2$ and z= 0?
I would recommend converting to cylindrical coordinates: $x= r \cos(\theta)$ and $y= r \sin(\theta)$ so that $z= 1- r^2$. Don't forget that the "differential of volume" in cylindrical coordinates is $dV= r\, dr\, d\theta\, dz$.
If you are required to do this in rectangular coordinates (or just want to), then, yes, your limits of integration are correct. Projecting the paraboloid onto the z= 0 plane, you get the circle $x^2+ y^2= 1$. y going from -1 to 1 will cover that and, for any given y, x will go from $-\sqrt{1- y^2}$ to $\sqrt{1-y^2}$. Finally, for any given x and y, z going from 0 to $1- x^2- y^2$ will cover the solid.
8. May 13, 2009
### joemama69
Ok, i tried it in cylindrical but i got my limits wrong
z is from 0 to $1-r^2$
r is from 0 to $$\sqrt{1-z}$$
theta is from 0 to pi
9. May 13, 2009
### Cyosis
You switched z and x around right? You can't have a z in your limits, it will mean you end up with a variable. The radii of the circular slices vary from the base r=1 to the top r=0. Also to integrate over the full circular slice you need to adjust your $\theta$ limits.
10. May 13, 2009
### joemama69
im integrating over dz,dr,dQ Q = theta
dz = 0 to $1-r^2$
dr = 0 to 1
dQ = 0 to 2pi
$$\int\int\int r^4 \cos^2 Q\, \sin^2 Q\; r\, dz\, dr\, dQ$$
11. May 13, 2009
### Cyosis
Yes that should give you the same answer as the Cartesian method.
12. May 13, 2009
### joemama69
ok i got stuck integrating
i integrated z and plugged in the limits and now im integrating r
$$\int$$($r^5 - r^7$)$\cos^2 Q \sin^2 Q$ dr = 1/24 .........
1/24 $$\int$$ $\cos^2 Q \sin^2 Q$ dQ =
1/24 $$\int$$ $(1 - \sin^2 Q) \sin^2 Q$ dQ =
1/24 $$\int$$ $\sin^2 Q - \sin^4 Q$ dQ =
Now im stuck... my calculator gave me pi/4, but how do i do it by hand
13. May 13, 2009
### HallsofIvy
$sin^2(Q)- sin^4(Q)= sin^2Q(1- sin^2(Q))= sin^2(Q)cos^2(Q)$
Use trig identities $sin^2(Q)= (1/2)(1- cos(Q))$, $cos^2(Q)= (1/2)(1+ cos(Q))$. You may need to use those twice.
14. May 13, 2009
### joemama69
i havent used those identities in a while
1/24 $$\int$$ $\cos^2 Q \sin^2 Q$ dQ =
1/24 $$\int$$(1/2 + 1/2 cosQ)(1/2 - 1/2 cosQ) dQ =
1/96 $$\int$$($1 + \cos^2 Q$) dQ =
1/96 $$\int$$(1 + 1/2 + 1/2 cosQ) dQ =
1/96 $$\int$$(3/2 + 1/2 cosQ) dQ =
1/96 (3Q/2 + 1/2 sinQ)] from 0 to 2pi = pi/32
my calc said it was pi/4 do u see any problems
15. May 13, 2009
### Cyosis
I would personally start out with $\cos^2 \theta \sin^2 \theta=(\cos \theta \sin \theta)^2=\frac{1}{4}\sin^2 (2\theta)$. This way you can skip a few steps.
You used Halls identities which are incorrect the correct ones are $\cos^2x=\frac{1}{2}(1+\cos 2x),\sin^2x=\frac{1}{2}(1-\cos 2x)$. Note the double angle.
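Carrying that double-angle identity through to the end (a worked step added for completeness, not part of the original thread):

```latex
\int_0^{2\pi}\cos^2\theta\,\sin^2\theta\,d\theta
  = \frac{1}{4}\int_0^{2\pi}\sin^2(2\theta)\,d\theta
  = \frac{1}{8}\int_0^{2\pi}\bigl(1-\cos(4\theta)\bigr)\,d\theta
  = \frac{2\pi}{8} = \frac{\pi}{4},
\qquad\text{so}\qquad
\frac{1}{24}\cdot\frac{\pi}{4} = \frac{\pi}{96}.
```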
16. May 13, 2009
### joemama69
ok when i integrated with the last identity i got pi/96 did u get this too
17. May 13, 2009
I got pi/96.
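As a sanity check on the thread's final value, here is a small midpoint-rule approximation of the cylindrical integral set up above (a quick numerical sketch, not from the original thread):

```python
import math

# Midpoint-rule approximation of the thread's integral
#   I = int_0^{2pi} int_0^1 int_0^{1-r^2} r^4 cos^2(Q) sin^2(Q) * r dz dr dQ.
# The integrand is independent of z, so the inner z-integration just
# contributes the z-extent (1 - r^2).
N = 400
total = 0.0
for i in range(N):
    r = (i + 0.5) / N                      # midpoint in r on [0, 1]
    for j in range(N):
        q = 2.0 * math.pi * (j + 0.5) / N  # midpoint in Q on [0, 2*pi]
        total += r**5 * math.cos(q)**2 * math.sin(q)**2 * (1.0 - r**2)
total *= (1.0 / N) * (2.0 * math.pi / N)   # multiply by the cell area

print(total)          # very close to pi/96
print(math.pi / 96)
```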
http://meteothink.org/docs/meteoinfolab/numeric/functions/atan.html | # atan¶
mipylib.numeric.minum.atan(x)
Trigonometric inverse tangent, element-wise.
The inverse of tan, so that if y = tan(x) then x = atan(y).
Parameters
x – (array_like) Input values, atan is applied to each element of x.
Returns
(array_like) Out has the same shape as x. Its real part is in [-pi/2, pi/2].
Examples:
>>> atan([0, 1])
array([0.0, 0.7853982])
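For comparison, the element-wise behaviour described above can be mimicked in plain Python with the standard-library `math` module (an illustrative stand-in, not part of mipylib):

```python
import math

def atan_elementwise(xs):
    """Apply the inverse tangent to each element of xs,
    mirroring the element-wise behaviour documented above."""
    return [math.atan(x) for x in xs]

result = atan_elementwise([0, 1])
print(result)  # [0.0, 0.7853981633974483], i.e. [0, pi/4]
```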
http://mathhelpforum.com/algebra/47003-solve-p-x-0-a.html | Math Help - Solve P(x)=0.....
1. Solve P(x)=0.....
For the polynomial, solve P(x)=0
$P(x)=(3x+2)(x-1)(4x+5)$
how do i solve this question?
2. if
$P(x)=(3x+2)(x-1)(4x+5) = 0$
then either:
1. $3x+2 = 0$
or
2. $x-1 = 0$
or
3. $4x+5 = 0$
or any combination of the above; thus the solution follows immediately:
$x = -\frac{2}{3},\; 1,\; -\frac{5}{4}$
thanks for your help! it really helped me complete the other questions i had easily
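The zero-product reasoning above is easy to verify: substituting each root back should make one factor, and hence the whole product, vanish. A short check with exact rational arithmetic (a verification sketch, not from the original thread):

```python
from fractions import Fraction

# P(x) = (3x+2)(x-1)(4x+5); each root zeroes exactly one factor.
def P(x):
    return (3 * x + 2) * (x - 1) * (4 * x + 5)

roots = [Fraction(-2, 3), Fraction(1), Fraction(-5, 4)]
values = [P(x) for x in roots]
print(values)  # [Fraction(0, 1), Fraction(0, 1), Fraction(0, 1)]
```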
http://www.codeforge.com/read/165463/extractC2forcell.m__html | extractC2forcell.m in standardmodelrelease
function mC2 = extractC2forcell(filters,fSiz,c1SpaceSS,c1ScaleSS,c1OL,cPatches,cImages,numPatchSizes);
%function mC2 = extractC2forcell(filters,fSiz,c1SpaceSS,c1ScaleSS,c1OL,cPatches,cImages,numPatchSizes);
%
%this function is a wrapper of C2. For each image in the cell cImages,
%it extracts all the values of the C2 layer
%for all the prototypes in the cell cPatches.
%The result mC2 is a matrix of size total_number_of_patches \times number_of_images where
%total_number_of_patches is the sum over i = 1:numPatchSizes of length(cPatches{i})
%and number_of_images is length(cImages)
%The C1 parameters used are given as the variables filters,fSiz,c1SpaceSS,c1ScaleSS,c1OL
%for more detail regarding these parameters see the help entry for C1
%
%% a bug was fixed on Jul 01 2005
numPatchSizes = min(numPatchSizes,length(cPatches));
...
...
(remainder of source omitted) | 2013-05-23 08:22:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9599184393882751, "perplexity": 6365.651829921501}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703035278/warc/CC-MAIN-20130516111715-00064-ip-10-60-113-184.ec2.internal.warc.gz"}
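The bookkeeping described in the MATLAB comments above (an mC2 matrix of size total_number_of_patches x number_of_images) can be sketched in Python. The extractor below is a stand-in (a simple dot-product score), not the real C2 computation, and all names are illustrative:

```python
import numpy as np

def extract_c2_for_cell(c2_extractor, patch_sets, images, num_patch_sizes=None):
    """Stack C2-style responses for every patch set over every image
    into one (total_patches x n_images) matrix."""
    if num_patch_sizes is None:
        num_patch_sizes = len(patch_sets)
    num_patch_sizes = min(num_patch_sizes, len(patch_sets))  # mirrors the Jul 01 2005 fix
    patch_sets = patch_sets[:num_patch_sizes]
    cols = []
    for img in images:
        # one column: responses of every prototype patch to this image
        col = np.concatenate([c2_extractor(img, ps) for ps in patch_sets])
        cols.append(col)
    return np.stack(cols, axis=1)

# toy extractor: one response per patch (here, a dot-product score)
toy = lambda img, ps: np.array([float(np.dot(img, p)) for p in ps])
imgs = [np.ones(4), np.zeros(4)]
patches = [[np.ones(4)] * 2, [np.ones(4)] * 3]   # sizes 2 and 3 -> 5 rows total
mC2 = extract_c2_for_cell(toy, patches, imgs)
print(mC2.shape)  # (5, 2): total_number_of_patches x number_of_images
```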
https://mathhelpforum.com/threads/does-this-differential-equation-have-very-lengthy-solution.280894/ | # Does this differential equation have very lengthy solution?
#### Vinod
Hello,
$\frac{dy}{dx}=\frac{x+1}{y(y+2)}$
Solution: $(y^2+2y)dy=(x+1)dx$
Integrating both the sides, we get
$\frac{(y^3+3y^2)}{3}=\frac{(x^2+2x)}{2}+c$
Now here I am stuck. When I put this differential equation into Wolfram Alpha, it gave me a very lengthy solution. Now what should I do?
Last edited:
#### SlipEternal
You forgot the $+C$. Other than that, if you are looking to get $y$ in terms of $x$, it is not possible with a single formula. You have a cubic polynomial in $y$, so when you solve the cubic, you will have three solutions, and those solutions are not "pretty". I recommend stopping when you get to the step you are currently on (after adding the missing $+C$). There is no need to find an explicit formula for $y$. You did your due diligence and simplified it as much as is needed.
1 person | 2019-10-23 16:26:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8711239099502563, "perplexity": 209.8140071049441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987834649.58/warc/CC-MAIN-20191023150047-20191023173547-00209.warc.gz"} |
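A quick way to sanity-check the implicit solution in the thread above is to integrate the ODE numerically and confirm the relation stays constant along the trajectory. A minimal RK4 sketch in Python, using the (illustrative) initial condition y(0) = 1:

```python
def f(x, y):
    # dy/dx = (x + 1) / (y (y + 2))
    return (x + 1) / (y * (y + 2))

def invariant(x, y):
    # implicit solution: (y^3 + 3y^2)/3 - (x^2 + 2x)/2 should stay constant
    return (y**3 + 3 * y**2) / 3 - (x**2 + 2 * x) / 2

# integrate with classical RK4 from (x, y) = (0, 1) out to x = 1
x, y, h = 0.0, 1.0, 1e-3
c0 = invariant(x, y)
for _ in range(1000):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
assert abs(invariant(x, y) - c0) < 1e-9   # the implicit relation holds
```

Because RK4 has fourth-order global accuracy, the invariant drifts far less than the tolerance here, so the check confirms the separated solution without ever needing the messy explicit cubic roots.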
https://willwolf.io/2014/05/02/clustering-continued-a-gaucho-on-vacation/ | In our previous post we chose to cluster South American airports into $$k = 3$$ distinct groups. Moving forward, we'll take a closer look into what this really means.
As mentioned previously, the k-means algorithm incorporates some element of mathematical randomness. On one k-means trial, the algorithm may assign 30% of our airports to Cluster 1, 65% to Cluster 2, and 5% to Cluster 3. Other times, this distribution could look more like 35%, 55%, and 10% respectively. The more clusters we input, or the larger we make k, the less the distributions vary by trial.
Irrespective, we previously deemed $$k = 3$$ to be the way to go, informed by statistical methods and not by counting the number of McDonald's each airport houses, so with $$k = 3$$ we will proceed. Qualitatively, these clusters can be said to define airports as "major" airports, "semi-major" airports, and "non-major" airports.
In addition to cluster sizes varying by trial, the actual number assigned to each cluster - 1, 2, or 3 - will vary by trial as well. For example, in Argentina, Buenos Aires' international airport is assumed to be consistently placed in the cluster pertaining to "major" airports; however, the cluster number assigned to this group on a given trial could be 1, 2, or 3. Below is a histogram of cluster sizes across 100 k-means trials for the 293 South American airports being examined.
Since numbers assigned to each cluster (1, 2, or 3) change by trial, this graph isn't particularly useful. However, it does give significant evidence that cluster sizes will indeed vary by trial, which we'll use later on. As such, it follows that clusters should not be judged by their respective cluster numbers, but rather, by the mean "centers" values associated with the airports grouped within. This is what really defines the clusters themselves, or in other words, what makes each cluster pertain to "major," "semi-major," and "non-major" air hubs (I'll continue to keep these words in quotations, since k-means clustering is ultimately an attempt to give a quantitative definition to an ultimately qualitative distinction, which is always, at best, an approximation).
Upon first examining which airports were actually clustered together - and again, we're using $$k = 3$$, and considering all routes between all airports in South America - it is immediately clear that airports from the same country are consistently put into the same cluster groups. Even though Buenos Aires' airport, with 50 distinct routes continent-wide, and Santa Rosa, Argentina's airport, with only 1 distinct route continent-wide (that being to Buenos Aires), are clearly categorically different in "major-ness," they are consistently put into the same cluster. This is probably for one or both of the following reasons: "non-major" airports are "piggy-backing" onto the more "major" domestic airports in their respective countries (as they are generally just 1 flight away, as is the case with Santa Rosa); or, the extensiveness of the domestic air network itself outweighs the international, continent-wide connectivity that a single airport can offer, therefore grouping "major" and "non-major" airports from the same country together more frequently than "major" and "semi-major" airports from different countries. Clearly, our goal is to consistently have, at a minimum, the continent's most "major" airports grouped together - those of Buenos Aires, São Paulo, Bogotá, Lima, and Santiago, for example - but unfortunately this is not the case. Back to the drawing board.
Instead, what I choose to do in this post is compare and contrast the clusterings of individual domestic networks. For this, I choose only the countries with at least 3 airports running domestic routes (as we of course need as many airports as we do clusters), being Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Peru, and Venezuela. Our aim here is to figure out the proportion of "major" airports, "semi-major" airports, and "non-major" airports in each country.
To do this, we first cluster and then examine the means of each cluster's centers. From there, we simply take the means - average shortest path lengths for airports in each cluster - and order from smallest to biggest. This will ensure that the smallest means correspond to our "major" airports, second-smallest means correspond to "semi-major" airports, and largest correspond to "non-major" airports. One problem still remains: cluster sizes will still vary by trial, as shown clearly in the graph above. Therefore, I run 100 k-means trials for each country, compute population proportions across these trials, and compare with a stacked-bar ggplot. The red bars are for "major" airports, green for "semi-major," and blue for "non-major."
# create stacked bar chart in ggplot
ggplot(km_by_country, aes(x = Country, y = FractOfWhole, fill = Cluster)) +
  geom_bar(position = "stack", stat = "identity", width = 0.75) +
  labs(
    y = "Percentage of Total Domestic Airports",
    title = "Cluster Proportionality of Domestic Airports"
  )
Now for the fun part.
First, we see that Argentina, Colombia, and Peru have comparatively few "major" airports; most routes in these countries will be sourced by a select few hubs. In Argentina, this is primarily Buenos Aires; in Colombia, primarily Bogotá and Cali (and to a surprising extent Rio Negro and San Andrés Island); and in Peru, primarily Lima. At the opposite end of the spectrum, Brazil and Bolivia house a relatively even distribution of airport types. In Brazil, this is likely due to the sheer volume and variety of domestic routes (~120 working airports), meaning that no matter where you are, you're never that far from anywhere else. In Bolivia, with only ~15 working airports, it seems that the load is simply shared rather evenly across the board, with no one airport as the single, outright major hub, and smaller airports servicing a nice handful of routes themselves.
So - what does this all mean? Countries with more evenly distributed "major," "semi-major," and "non-major" airports make travel much easier. If you're in Central Argentina and want to go somewhere by air, you're rather likely to require a layover in the nation's capital (which is not near the center of the country either) before moving to your destination. In Colombia, while there are many active airports, if you want to travel somewhere a bit "off-path" you're likely to require just a few more layovers than you had hoped. Lastly, if you're in Brazil, unless you're stuck on a canoe in the Amazon Rainforest, you're never really in the middle of logistical nowhere.
In a future post, it will be interesting to look more closely at the economic causes and effects for these air distributions. For now, let's just be thankful we're not gauchos in Patagonia planning a vacation.
Photo Credit: Jimmy Nelson | 2022-07-01 20:29:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4725130796432495, "perplexity": 2527.0504772402287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103945490.54/warc/CC-MAIN-20220701185955-20220701215955-00483.warc.gz"} |
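The label-matching trick the post above relies on (ordering clusters by their mean "centers" value so that the smallest mean is always the "major" cluster) can be sketched in Python. The values and cluster ids below are invented for illustration:

```python
def canonical_labels(values, labels):
    """Remap arbitrary cluster ids so id 0 is the cluster with the smallest
    mean value ("major"), id 1 the next ("semi-major"), and so on."""
    def mean_of(c):
        members = [v for v, l in zip(values, labels) if l == c]
        return sum(members) / len(members)
    ordered = sorted(set(labels), key=mean_of)
    remap = {old: new for new, old in enumerate(ordered)}
    return [remap[l] for l in labels]

values = [1.0, 1.1, 2.0, 2.1, 3.0, 3.2]   # avg shortest path length per airport
trial_a = [2, 2, 0, 0, 1, 1]              # same partition from two k-means runs,
trial_b = [1, 1, 2, 2, 0, 0]              # but with different arbitrary ids
assert canonical_labels(values, trial_a) == [0, 0, 1, 1, 2, 2]
assert canonical_labels(values, trial_a) == canonical_labels(values, trial_b)
```

After this remapping, per-cluster proportions can be averaged across trials without the trial-to-trial label permutation corrupting the counts.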
https://xenforo.com/community/threads/prevent-users-from-replying-to-conversations.203416/#post-1562449 | XF 2.2: Prevent users from replying to conversations
TLDR
Active member
Hello,
I negated all permissions related to conversations, and now I noticed that people are still in their conversations. I tried it myself, and while I can't start a new one, I am still able to reply.
So obviously I was about to ask if I missed something in the settings, but as I was typing this thread, this thread popped up.
So I guess this would be not in the core? Is that right?
Or is there a way to disable conversations altogether?
Thank you!
322 | 2022-09-26 19:42:18 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8511605858802795, "perplexity": 733.5583189729164}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334915.59/warc/CC-MAIN-20220926175816-20220926205816-00535.warc.gz"} |
http://mathschallenge.net/full/rational_roots_quadratic | #### Problem
In the quadratic equation $ax^2 + bx + c = 0$, the coefficients $a$, $b$, $c$ are non-zero integers.
Let $b = -5$. By making $a = 2$ and $c = 3$, the equation $2x^2 - 5x + 3 = 0$ has rational roots. But what is most remarkable is that it is possible to interchange these coefficients in any order and the quadratic will still have rational roots.
Suppose that $b$ is chosen at random. Prove that there always exist coefficients $a$ and $c$ that will produce rational roots. Moreover, once determined, no matter how these three coefficients are shuffled, the quadratic equation will still yield rational roots.
#### Solution
We shall prove this in two different ways.
Proof 1:
Although the first proof is elegant and provides a method for determining one set of values of $a$ and $c$, given $b$, it tells us nothing about the true nature of the problem, nor does it reveal that there are infinitely many different sets of values of $a$ and $c$ that can be determined for any given $b$.
It can be verified that the quadratic equation $2x^2 + x - 3 = 0$ has rational roots and every arrangement of coefficients will yield rational roots. But the important observation is to note that any integral multiple of this "base" equation, with $b = 1$, will lead to another quadratic with rational roots for every arrangement. For example, if we multiply by 7, we get $14x^2 + 7x - 21 = 0$. This is equivalent to making $b = 7$ and determining that $a = 14$ and $c = -21$ produce a quadratic with rational roots for every arrangement of the coefficients.
Proof 2:
This proof is perhaps the most revealing and makes use of the fact that the discriminant, $b^2 - 4ac$, must be the square of a rational if the roots of the equation are to be rational. In fact, because all the coefficients are integers, we can go further by saying that the discriminant must be a perfect square.
We also note that interchanging the positions of $a$ and $c$ has no effect on the rationality of the discriminant. Hence we only need consider the three cases of $a$, $b$, and $c$ being the coefficient of the $x$ term in the general quadratic. We shall initially consider the cases where $b$ and $c$ are the coefficient of $x$.
Let $b^2 - 4ac = r^2$ and $c^2 - 4ab = s^2$. We need to show that $r$ and $s$ are both integer.
\begin{align}\therefore r^2 - s^2 &= b^2 - c^2 + 4ab - 4ac\\(r + s)(r - s) &= (b - c)(b + c + 4a)\end{align}
Let $r + s = b + c + 4a$ and $r - s = b - c$.
Adding both equations we get $2r = 2b + 4a \implies r = b + 2a$, and subtracting gives, $2s = 2c + 4a \implies s = c + 2a$.
In other words, we have shown that $r$ and $s$ are both integer.
We must now show that the third discriminant, $a^2 - 4bc$, is also square.
Now $r^2 = (b + 2a)^2 = b^2 + 4ab + 4a^2 = b^2 - 4ac \implies 4a^2 + 4ab + 4ac = 0$
Similarly, $s^2 = (c + 2a)^2 = c^2 + 4ac + 4a^2 = c^2 - 4ab \implies 4a^2 + 4ab + 4ac = 0$
As $a \ne 0$, from $4a(a + b + c) = 0$ we deduce that $a + b + c = 0$.
As $a = -(b + c)$, squaring both sides gives, $a^2 = b^2 + c^2 + 2bc$. Subtracting $4bc$ from both sides:
$a^2 - 4bc = b^2 + c^2 -2bc = (b - c)^2$ QED
Is it possible that a quadratic equation exists for which any combination of the coefficients will yield rational roots and $a + b + c \ne 0$?
Problem ID: 274 (21 Apr 2006) Difficulty: 3 Star
Only Show Problem | 2016-02-07 17:12:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9504358768463135, "perplexity": 146.81537595959952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701150206.8/warc/CC-MAIN-20160205193910-00015-ip-10-236-182-209.ec2.internal.warc.gz"} |
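Both proofs above can be checked numerically. The Python sketch below (helper names are mine) tests that every permutation of the coefficients gives a square discriminant for integral multiples of the base equation $2x^2 + x - 3 = 0$:

```python
from itertools import permutations
from math import isqrt

def is_square(n):
    # a non-negative integer is a perfect square iff isqrt(n)^2 == n
    return n >= 0 and isqrt(n) ** 2 == n

def rational_roots_all_orders(a, b, c):
    # every arrangement of the coefficients must give a square discriminant
    return all(is_square(q**2 - 4 * p * r)
               for p, q, r in permutations((a, b, c)))

# the base equation 2x^2 + x - 3 = 0 scaled by m realises b = m
for m in range(1, 50):
    assert rational_roots_all_orders(2 * m, m, -3 * m)

# the proof shows a + b + c = 0 is forced; an arbitrary triple usually fails
assert not rational_roots_all_orders(1, 2, 3)
```

Scaling all three coefficients by m multiplies each discriminant by m squared, which is why squareness is preserved for every m.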
https://ahyeonhwang.wordpress.com/2016/07/05/multisubject-neural-nets-project/ | MultiSubject Neural Nets Project
Background
Neural data is limited by the number of subjects available for research and the technology used for experimentation. This is more evident with brain-machine interfaces (BMIs), which require invasive data. A BMI is a direct communication pathway between a brain and an external device that allows a person to control devices with his or her brainwaves. BMIs are used for neuroprosthetic applications that aim to restore cognitive and sensory motor functions including hearing, sight, speech, and movement. As both anatomy and functional responses differ between individuals, we will compare data from multiple subjects in order to improve the performance of a single-subject BMI. This project will explore multisubject data through computer simulations on the MNIST toy dataset, and then apply the approach to human neural data collected during speech production.
Deep Feedforward Neural Network with MNIST
The basic outline of the deep neural network was created using Pylearn2 and Theano, a Python library and mathematical expression compiler. We first used the MNIST toy dataset, a 60,000×784 matrix of handwritten digits from 0 to 9, which makes 10 classes total. As this project is based on a classification task, we used a machine learning task called supervised learning that maps a set of input to its correct output. The cost and gradients were calculated using logistic regression and backpropagation, which updates the weights by minimizing the cost function. Backpropagation is a method used for computing the gradient, while stochastic gradient descent performs the learning. The cost function is the cross-entropy function.
$\mathrm{Cost}(\theta) = -\sum_{x} y(x)\log(\hat{y}(x))$
These are preliminary notes from Jupyter notebook:
• $\hat{y} = \mathrm{softmax}(xW + b)$ = T.nnet.softmax(X_sym.dot(W) + b.dimshuffle('x', 0))
• cost = T.mean(T.nnet.categorical_crossentropy(y_hat, y_sym))
• accuracy = T.mean(T.eq(y_sym, T.argmax(y_hat, axis = 1)))
Theano functions
• f = theano.function(inputs = [X_sym, y_sym], outputs = [cost, accuracy])
Other variables from code:
• train_objective: cost being optimized by training
• train_y_nll: negative log likelihood of the current parameter values
• nvis: number of visible units
Softmax regression (multinomial logistic regression) is used to classify K number of classes: y(i)∈{1,…,K}. Whereas in logistic regression the labels are binary, y(i)∈{0,1}, softmax allows us to handle multiple classes. It transforms a level of activation into a probability.
$(p_{1},\dots,p_{n}) = \mathrm{softmax}(y_{1},\dots,y_{n}) = \left(\frac{e^{y_{1}}}{\sum_{j=1}^{n} e^{y_{j}}},\ \dots,\ \frac{e^{y_{n}}}{\sum_{j=1}^{n} e^{y_{j}}}\right)$
The main implementation of the neural network was taken from Gustav’s blog (http://www.arngarden.com/2013/07/29/neural-network-example-using-pylearn2/), which shows a step-by-step process of training a neural network. Once the outline was largely set up, we grouped the code inside a function called analyze with the parameters n_train, params, and n_iter=5, with n_train being the number of training samples, params being the number of parameters, and n_iter as the number of iterations of analyze to be run. This function returns misclass_all_iter, a 5×3 matrix displaying the misclassification values for the train, validation, and test sets for 5 iterations.
The MNIST dataset was divided into 3 sets: training, validation, and test. The training set is the original set that is trained, the validation set is used for tuning the parameters of the model and to avoid over-fitting, and the test set is used for performance evaluation. The performance on the test set gives a realistic estimate of the performance of the model on unseen, new data. The training set starts at a random index n between 0 and 50,000 − n_train and stops at n + n_train, and the validation set is the last 10,000 digits of the MNIST data set. The data was tested against various layer types such as Sigmoid, Tanh, and RectifiedLinear, and the momentum and learning rate adjustors were used for optimization.
This was used to set up the neural network:
layers = [hidden_layer, output_layer]
ann = mlp.MLP(layers, nvis=784)
trainer.setup(ann, ds_train)
After that, I made a plot showing the accuracy vs. the number of training samples. The accuracy curve increased for all three sets then plateaued. Next, Spearmint was used to perform Bayesian optimization on the results. A new function called main(job_id, params) was implemented with various parameters set for optimization. The plot from Spearmint resulted in a higher accuracy curve for the training set as opposed to those of the validation and test sets.
Multilayer Perceptron (Deep Feedforward Network)
After implementing the example neural network, we moved onto a multilayer perceptron (MLP) model with multiple inputs representing neural data from different subjects, the hidden layer, and an output layer which predicts the accuracy of the classification of the MNIST digits. The purpose of using this model is based on the hypothesis that leveraging data from multiple subjects can improve performance as opposed to just using one dataset which doesn’t provide a lot of information. An MLP model consists of an input layer, hidden layer(s), and an output layer, which is shared among all the inputs. A feedforward network defines a mapping $y = f(x; \theta)$ and learns the value of the parameters theta that result in the best approximation.
$y = f(x; \theta, w) = \Phi(x; \theta)^T*w$
*The goal is to learn $\Phi$, which represents the hidden layer activation function.
We first worked on changing the cost functions to fit the functionalities and parameters of the model by modifying the original code for a single MLP from the LISA Lab. Next, we created a new file called multisubject_network.py to extract some of the code from the previous files that had been written for a single layer MLP. We created a separate for-loop for each mlp in n_MLP and created an empty MLP list. The next step is to fix the bugs and make sure everything works. | 2017-10-23 04:08:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3755436837673187, "perplexity": 1115.214622575691}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825575.93/warc/CC-MAIN-20171023035656-20171023055656-00427.warc.gz"} |
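The softmax and cross-entropy formulas above can be sketched in plain NumPy. The shapes mirror MNIST, but the random weights are illustrative, not trained:

```python
import numpy as np

def softmax(z):
    # subtract the row max for numerical stability; each row sums to 1
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(y_hat, y):
    # mean negative log-likelihood of the true class, as in
    # T.mean(T.nnet.categorical_crossentropy(y_hat, y_sym))
    return -np.log(y_hat[np.arange(len(y)), y]).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 784))           # 5 fake MNIST rows
W = rng.normal(size=(784, 10)) * 0.01   # small random weights, 10 classes
b = np.zeros(10)

y_hat = softmax(X @ W + b)
assert np.allclose(y_hat.sum(axis=1), 1.0)   # valid probabilities
y = np.array([0, 1, 2, 3, 4])
ce = cross_entropy(y_hat, y)                 # near log(10) for near-uniform p
```

With near-zero weights the predicted distribution is close to uniform, so the cross-entropy sits near log(10), the loss of random guessing over 10 classes.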
http://passmathsng.com/jambmaths/95 | # Jambmaths
Maths Question
Question 14
A trader realizes $10x-{{x}^{2}}$ naira profit from the sale of x bags of corn. How many bags will give him maximum profit?
Question 34
If $y=2x\cos 2x-\sin 2x$, find $\frac{dy}{dx}$ when $x=\tfrac{\pi }{4}$
Question 36
If the volume of a hemisphere is increasing at a steady rate of $18\pi \,{{m}^{3}}{{s}^{-1}}$, at what rate is its radius changing when it is 6m?
Question 33
If the gradient of the curve $y=2k{{x}^{2}}+x+1$ at x = 1 is 9. Find k
Question 35
Differentiate ${{(2x+5)}^{2}}(x-4)$ with respect to x
Question 37
Find the rate of change of the volume V of a sphere with respect to its radius r when r = 1
Question 38
If $y=x\sin x$ find $\frac{dy}{dx}$ when $x=\tfrac{\pi }{2}$
Question 39
Find the dimension of the rectangle of greatest areas which has a fixed perimeter p.
Question 8
Find the derivative of $y={{\sin }^{2}}5x$ with respect to x
Question 9
The slope of the tangent to the curve $y=3{{x}^{2}}-2x+5$at the point (1,6) is
Question 11
A circle with radius 5cm has its radius increasing at the rate of 0.2cms-1. What will be the corresponding increase in the area?
Question 12
If $y={{x}^{2}}-\frac{1}{x}$, find $\frac{dy}{dx}$
Question 18
Find the maximum value of y in the equation $y=1-2x-3{{x}^{2}}$
Question 36
Find the slope of the curve $y=2{{x}^{2}}+5x-3$at (1,4)
Question 38
If $y=3\sin (-4x),\frac{dy}{dx}\text{ is }$
Question 39
Determine the maximum value of $y=3{{x}^{2}}-{{x}^{3}}$
Question 12
If $y=3\cos (\tfrac{x}{3})$, find $\frac{dy}{dx}$ when $x=\tfrac{3\pi }{2}$
Question 13
Find the derivative of $(2+3x)(1-x)$with respect to x
Question 14
What is the rate of change of the volume V of a hemisphere with respect to its radius when r =2
Question 15
Find the derivative of the function $y=2{{x}^{2}}(2x-1)$ at the point x = –1
Question 1
If $y={{(1-2x)}^{3}}$, find the value of $\frac{dy}{dx}$ at x = 1
Question 3
The radius of circular disc is increasing at the rate of 0.5cm/sec. At what rate is the area of the disc increasing when its radius is 6cm
Question 3
The radius of circular disc is increasing at the rate of 0.5cm/sec. At what rate is the area of the disc increasing when its radius is 6cm
Question 5
The maximum value of the function $f(x)=2+x-{{x}^{2}}$is
Question 6
Find the derivative of $y=\sin (2{{x}^{3}}+3x-4)$
Question 3
Differentiate ${{\left( {{x}^{2}}-\tfrac{1}{x} \right)}^{2}}$ with respect to x
Question 4
Find the value of x for which the function $3{{x}^{3}}-9{{x}^{2}}$ is minimum
Question 6
Differentiate ${{(\cos \theta -\sin \theta )}^{2}}$ with respect to θ
Question 36
If $y=x\cos x$, find $\frac{dy}{dx}$
Question 37
If $y={{(1+x)}^{2}}$, find $\frac{dy}{dx}$
Question 38
Find the value of x for which the function $f(x)=2{{x}^{3}}-{{x}^{2}}-4x+4$ has a maximum value
Question 36
Find the derivative of $y=\frac{{{x}^{7}}-{{x}^{5}}}{{{x}^{4}}}$
Question 37
Differentiate sin xx cos x
Question 38
Find the minimum value of the function $y=x(1+x)$
Question 36
If $y=3\cos 4x,\text{ }\frac{dy}{dx}$ equals
Question 37
If $s=(2+3t)(5t-4),\text{ find }\frac{ds}{dt}$ when t = $\tfrac{4}{5}$secs.
Question 38
What is the value of x will make the function $x(4-x)$ a maximum?
Question 39
The distance travelled by a particle from a fixed point is given as $s={{t}^{3}}-{{t}^{2}}-t+5$. Find the minimum distance that the particle can cover from the fixed point.
Question 40
If $y={{(2x+1)}^{3}},\text{ find }\frac{dy}{dx}$
Question 41
If $y=x\sin x,\text{ find }\tfrac{dy}{dx}$ | 2017-08-20 02:05:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.773091733455658, "perplexity": 970.647178701192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105961.34/warc/CC-MAIN-20170820015021-20170820035021-00422.warc.gz"} |
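Several of these derivatives are easy to spot-check numerically. For example, Question 34 above gives $\frac{dy}{dx} = -4x\sin 2x$, which is $-\pi$ at $x = \pi/4$, and Question 38 gives a slope of 1 at $x = \pi/2$. A central-difference sketch in Python (function names are mine):

```python
import math

def y(x):
    # Question 34: y = 2x cos 2x - sin 2x
    return 2 * x * math.cos(2 * x) - math.sin(2 * x)

def dydx(f, x, h=1e-6):
    # central finite-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# analytically dy/dx = -4x sin 2x, so at x = pi/4 the slope is -pi
assert abs(dydx(y, math.pi / 4) - (-math.pi)) < 1e-6

# Question 38: y = x sin x, so dy/dx at pi/2 is sin(pi/2) + (pi/2)cos(pi/2) = 1
assert abs(dydx(lambda t: t * math.sin(t), math.pi / 2) - 1.0) < 1e-6
```

The central difference has error on the order of h squared, so the 1e-6 tolerance comfortably distinguishes right answers from wrong ones here.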
http://mathoverflow.net/revisions/93863/list | 2 added 497 characters in body
The central idempotents of a finite dimensional algebra form a finite Boolean algebra with 1 as the max and the central primitive idempotents as the atoms. The decomposition of 1 into central idempotents is thus unique. So the central idempotents are the face lattice of a simplex. The order is $e\leq f$ if $e\in fA$. Your simplicial complex would be the order complex of this Boolean algebra and so would be the barycentric subdivision of a simplex.
Added details. The boolean algebra operations are given by $e\wedge f=ef$, $e\vee f=e+f-ef$ and $\neg e=1-e$. The finiteness follows, for example, because one can look at the regular representation of the algebra $A$ by matrices and observe that the central idempotents form a commutative semigroup of idempotent matrices. Such a semigroup is simultaneously diagonalizable and there are only $2^n$ diagonal idempotent $n\times n$ matrices.
1
The central idempototents of a finite dimensional algebra form a finite Boolean algebra with 1 as the max and the central primitive idempotents as the atoms. They are unique. So the central idempotents are the face lattice of a simplex. The order is $e\leq f$ if $e\in fA$. Your simplicial complex would be the order complex of this Boolean algebra and so would be the barycentric subdivision of a simplex. | 2013-06-20 05:20:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.738761305809021, "perplexity": 273.7115155735706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710313659/warc/CC-MAIN-20130516131833-00090-ip-10-60-113-184.ec2.internal.warc.gz"} |
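The simultaneous-diagonalization argument in the answer reduces central idempotents to 0/1 diagonal matrices, where the stated Boolean operations can be checked directly. A small Python sketch (tuples of 0s and 1s stand for the diagonals; names are mine):

```python
from itertools import product

# after simultaneous diagonalization, a central idempotent is a 0/1 diagonal,
# i.e. a tuple of 0s and 1s; there are 2^n of them for n x n matrices
n = 3
idems = list(product((0, 1), repeat=n))

meet = lambda e, f: tuple(a * b for a, b in zip(e, f))            # e /\ f = ef
join = lambda e, f: tuple(a + b - a * b for a, b in zip(e, f))    # e \/ f = e + f - ef
neg  = lambda e: tuple(1 - a for a in e)                          # not e = 1 - e

for e, f in product(idems, repeat=2):
    # the operations stay inside the set of idempotents (closure)
    assert meet(e, f) in idems and join(e, f) in idems and neg(e) in idems
    assert meet(e, neg(e)) == (0,) * n      # e /\ not-e = 0
    assert join(e, neg(e)) == (1,) * n      # e \/ not-e = 1
```

This exhaustively verifies the Boolean-algebra laws for all 2^n idempotents, matching the answer's counting argument.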
https://www.homebuiltairplanes.com/forums/threads/have-we-reached-the-end-of-the-steam-gauge-era.26860/page-4

# Have we reached the end of the Steam Gauge era?
#### Wanttaja
##### Well-Known Member
Well, again not even e-ink, tho it's closer I guess. I'm talking make a traditional gauge but use some kind of servo, or microstepper motor to move your dial or indicators physically, in the real world. Like a CNC machine ya know? It's more work but the result is something that again should be indistinguishable from a real gauge. You should be able to outfit a P51 with these and someone would think "this looks original" and then you reveal the secret plot twist that they've been flying evil digital the whole time! Muwahahahahahaha *moustache twirl*
Digital to mechanical interfaces. Brrr. Not that easy in a small, cheap package. Also needs decent feedback to the electronics to ensure the mechanicals are indicating what the electronics tell them to do.
Certainly realize this sort of thing is done with stuff like CNC, but now we're talking lightweight, long-term vibration, life-critical stuff. Steam gauges are probably the better implementation.
As far as E-Ink goes, I love my Kindle, but the stuff isn't fast.
Ron Wanttaja
#### Mark Z
##### Well-Known Member
I may be misinterpreting your comment, but what do you mean basically one off? There are hundreds flying.
Maybe "one off" isn't a proper adjective. My question to you is how many hours have you flown one of those hundreds flying? You may or may not like this airframe.
#### bmcj
##### Well-Known Member
HBA Supporter
Well, again not even e-ink, tho it's closer I guess. I'm talking make a traditional gauge but use some kind of servo, or microstepper motor to move your dial or indicators physically, in the real world. Like a CNC machine ya know? It's more work but the result is something that again should be indistinguishable from a real gauge. You should be able to outfit a P51 with these and someone would think "this looks original" and then you reveal the secret plot twist that they've been flying evil digital the whole time! Muwahahahahahaha *moustache twirl*
Digital failure rates might give the advantage to mechanical, and mechanical failure rates might give the advantage to digital. Now we can have a system that can fail two ways.
My buddy (since passed) made a very lucrative business retrofitting hot rods and muscle cars with custom round gauge panels that were fed by remote digital sensors. He could fabricate the dual face to replicate any vintage gauge.
#### Toobuilder
##### Well-Known Member
Log Member
If you're responding to me, let me say that if there is such a thing as I am looking for out there, I'd love to hear of it. But so far I've found that while there are discrete digital gauges, there are two issues with them as far as I understand: they look digital in that they use LED displays or LCD screens or something else, which isn't as appealing (especially for my specific warbird aesthetic) and is also not as intuitive as a dial for some. I've seen many that will espouse the benefits of a physical dial vs any sort of flat screen display. And the second issue is they are actual instruments with their own internal chipsets interpreting data. This makes them expensive and vital.
I'd want gauges that look and act exactly like analog steam gauges on the front side of the panel, such that you would not notice any difference between an aircraft with these and one that had actual steam gauges wired in the old way. But on the backside, you would only have short housings and a USB or SATA cable or something-else port, with the corresponding type of common data cables all tied to a central hub on an EFIS computer, so that each instrument is simply a dumb 'display'. It just outputs the dial positions it is told to by the computer that does all the actual work. The hope would be that by having zero logic in the instrument it could theoretically be fairly affordable and not as prone to obsolescence when better sensors or computing methods or whatever is available. Also if one craps out, you still have that data, you could use an iPad or other monitoring screen to reference the data the gauge is unable to render.
As far as I've found, there is no such system out there. Either the sexy EFIS which comes with a big glass display, digital gauges that individually replace individual steam gauges, or classic steam. If someone does make this, I haven't come across it yet!
There are a few points in this thread which bear a closer look - some are more esoteric than others.
First off is the need to maintain a certain aesthetic so that the instruments fit the theme of a particular aircraft. I get that just a little, but I'd say that market is very, very small. After all, new Waco biplanes come with glass these days. A more practical issue is the very real difficulty people have when transitioning from steam to glass. For a bunch of people the information presented is just too overwhelming to be useful. My -8 has both glass and steam and it's funny how some pilots focus on the steam airspeed and altitude tucked down in the corner of the panel instead of the far more obvious (to me) versions in living color right in front of their face on the PFD. This can be easily overcome by most, so one should not fear change in that respect.
But the one thing I do agree with here wholeheartedly is the need for a remote EFIS/EMS "black box" which feeds a standardized display. Display screens are cheap and getting better all the time; why not allow THAT to be the thing that gets replaced every 2 years instead of the whole system? This way you can fit the size screen you want, connected with a simple cable, and upgrade whenever you want at low cost. The person who brings this concept to market is going to change the homebuilt avionics industry.
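Purely to illustrate the concept, here is a sketch of what the wire format between such a remote black box and a dumb gauge head might look like. The packet layout, field sizes, and function names are all invented for illustration; no existing product is implied:

```python
import struct

def encode_needle(instrument_id: int, angle_deg: float) -> bytes:
    """Pack one gauge update: 1-byte instrument id + needle angle in
    centidegrees as a big-endian unsigned short (hypothetical format)."""
    return struct.pack(">BH", instrument_id, int(round(angle_deg * 100)))

def decode_needle(packet: bytes):
    """Unpack an update on the gauge-head side; the head holds no logic
    beyond moving its needle to the commanded angle."""
    instrument_id, centideg = struct.unpack(">BH", packet)
    return instrument_id, centideg / 100.0

# The central box streams these updates; the heads just obey.
pkt = encode_needle(3, 127.5)          # e.g. airspeed needle at 127.5°
assert decode_needle(pkt) == (3, 127.5)
```

The point of the sketch is the division of labor: all sensing and computation stay in the replaceable central unit, while each display only renders the angle it is told.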
#### rbrochey
##### Well-Known Member
Glass and computer glitches grounded the United fleet yesterday... sat technology is not all it's cracked up to be... notice how much longer it takes to buy a pound of 6 penny nails where a computer is involved?... I think the sat technology is going to be even less reliable in the future... this morning the cloud cover made it impossible to use my flip phone but my land line worked... it always works. Give me steam gauges...
#### TXFlyGuy
##### Well-Known Member
The answer in a word is yes. In fact, it happened years ago.
BJC
#### TXFlyGuy
##### Well-Known Member
Glass and computer glitches grounded the United fleet yesterday... sat technology is not all it's cracked up to be..
Not yesterday. About 2 months ago, maybe. But then it was actually an FAA issued ground stop. Another knee jerk reaction by the feds.
#### TFF
##### Well-Known Member
Glass is great, but nothing cooler than getting up in a B-17 or DC3 with all the gauges. My boss and I saw a friend ferrying a Queen Air somewhere, and we asked if we could climb in for a minute. My boss sat down and he just sat there; I could tell the angels were singing a big choral wall of sound. Except for dual 430s, all round gauges. My boss has 30,000 hrs split even between helicopters and airplanes. He loves flying glass panels, but they will never get the response of love and respect those round gauges had that day. Even he would say he would rather have the glass, but in reality he misses the skill needed to fly VOR to VOR. Thomas Wolfe: "You can't go home again," but boy it would be great if you could.
#### TXFlyGuy
##### Well-Known Member
Glass is great, but nothing cooler than getting up in a B-17 or DC3 with all the gauges. My boss and I saw a friend ferrying a Queen Air somewhere, and we asked if we could climb in for a minute. My boss sat down and he just sat there; I could tell the angels were singing a big choral wall of sound. Except for dual 430s, all round gauges. My boss has 30,000 hrs split even between helicopters and airplanes. He loves flying glass panels, but they will never get the response of love and respect those round gauges had that day. Even he would say he would rather have the glass, but in reality he misses the skill needed to fly VOR to VOR. Thomas Wolfe: "You can't go home again," but boy it would be great if you could.
Sure, there is a certain nostalgia about antique things. We were even going to put period correct instruments in our Mustang. Until, that is, we looked at the cost, the weight, the reliability, and the real world of FAA IFR flying going forward. The modern glass panel gives you so much more information, and capability, and reliability.
If round dials were really cool, Garmin would make them.
#### 103
##### Well-Known Member
For the last 50 or 75 years at least, the fundamental mind-set in aviation has been maximum reliability AND the presence of one or more back-up systems everywhere it is possible to have them. That shouldn't go away IMHO. Anything pneumatic can have one kind of problem (vacuum pump, bugs in the pitot tube) and anything electronic can have another kind of problem (battery, bad circuit breaker).
Modern electronics have gotten a lot better in terms of reliability, but the truth is that your phone and iPad can still be hacked by some 15 year old kid 5000 miles away from wherever you are. Nowadays, there is a strong trend towards having aircraft electronics and data systems linked and networked and sharing data with ATC, etc. So there is a network connection and there is a data pathway that can eventually be cracked. So the very LAST thing that I want is for a network outage, innocent glitch, or intentional meddling to affect my navigation. "Honest, General, the little highway-in-the-sky icon told me I was on course a hundred miles away from here, and then all of a sudden it said ATTENTION Lottery Winner!!! and it wanted my credit card number!"
As much as I will always want to carry a paper chart, I absolutely love the little $75 computer tablet and the free AvAre moving map software that I fly with. They both have a place in my airplane.
For a homebuilt VFR minimalist airplane (our beloved VP-21 or Flying Motorcycle) I would say that a small tablet or large phone Velcro'ed to the panel, and perhaps one or two 2 inch pneumatic instruments, would be a good compromise.
Somebody should be thinking about making a (mechanical) combination airspeed and altimeter (concentric dials or split quadrants) in a 2 1/4 inch size instrument. Outer ring is non-sensitive altitude, inner ring is airspeed - or top half separate from bottom half. Sort of like a 2 in 1 combo EGT/CHT, but air operated for ALT/ASI.
Ramble/Rant switch off
#### cluttonfred
##### Well-Known Member
HBA Supporter
I believe that there is specific FAA regulation prohibiting glass panels in anything designed before 1970. ;-) Seriously, I'll be interested to see if someone comes up with a standardized analog gauge face driven by a central electronic unit to replicate the look of analog gauges but with the convenience and lower cost of electronics.
#### mcrae0104
##### Well-Known Member
HBA Supporter
Log Member
I'll be interested to see if someone comes up with a standardized analog gauge face driven by a central electronic unit to replicate the look of analog gauges but with the convenience and lower cost of electronics.
It's sort of been done. This is an old MGL unit (10 or 15 years old maybe). I think they allow you to customize the layout of the screen on their newer units so I suppose it may be possible to replicate a standard six pack.
#### BJC
##### Well-Known Member
HBA Supporter
I believe that there is specific FAA regulation prohibiting glass panels in anything designed before 1970. ;-) Seriously, I'll be interested to see if someone comes up with a standardized analog gauge face driven by a central electronic unit to replicate the look of analog gauges but with the convenience and lower cost of electronics.
There were some of those a dozen or more years ago. Haven’t seen any in recent years.
Today, one can select an EFIS display format that replicates the standard six. Since I had only a few minutes of time flying with glass, I selected the standard six for the first flight of my Sportsman. After takeoff on the second flight, I selected the standard digital display and have never gone back.
I still prefer an analog airspeed display for aerobatics, because I normally am glancing at it to see where the needle is pointing rather than to read a number.
BJC
#### Hot Wings
##### Grumpy Cynic
HBA Supporter
Log Member
I still prefer an analog airspeed display for aerobatics, because I normally am glancing at it to see where the needle is pointing rather than to read a number
BJC
And this is why I don't like glass. If I want data I want it NOW. I want to be able to spot a trend without having to do mental math.
I don't generally need data to 1% accuracy. If I do then I'll look at the digital.
#### Mark Z
##### Well-Known Member
If it all comes down to a single item in the VFR world, I’d opt for an oil pressure idiot light.
#### Mark Schoening
##### Well-Known Member
HBA Supporter
My new project, a cruzer, will have airspeed, altimeter, compass, and engine gauges. Pure VFR - lookin' out and sight-seenin' is more enjoyable than info overload....Cheaper, too.
Mission driven, going to breakfast and lookin' down...It's all I need.
#### Toobuilder
##### Well-Known Member
Log Member
Cars switched to stepper motor driven instruments decades ago, and even that technology has been surpassed by pure digital displays. The individual capillary tube or resistance bridge instruments are becoming an increasingly niche product. It would be pretty easy to build a "universal" stepper motor display that looks and acts like an instrument panel from the 1960's, but nobody could really agree on "universal" anyway.
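As a rough sketch of the mapping such a stepper-driven instrument needs (the motor resolution and 270° sweep below are made-up example numbers, not any particular product's):

```python
STEPS_PER_REV = 200 * 16            # e.g. a 1.8° motor with 16x microstepping
STEPS_PER_DEG = STEPS_PER_REV / 360

def value_to_steps(value, v_min, v_max, sweep_deg=270):
    """Linearly map a sensed value onto a dial sweep and return the
    absolute step position for the needle."""
    value = max(v_min, min(v_max, value))        # clamp off-scale readings
    frac = (value - v_min) / (v_max - v_min)
    return round(frac * sweep_deg * STEPS_PER_DEG)

# A 0-200 kt airspeed dial: mid-scale lands mid-sweep.
assert value_to_steps(100, 0, 200) == round(135 * STEPS_PER_DEG)
assert value_to_steps(250, 0, 200) == value_to_steps(200, 0, 200)
```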
#### TFF
##### Well-Known Member
Acclimation and aesthetics. Just like the first digital clocks and watches. A lot has to do with where you enter and where you have to go with the technology. I like gauges. Clockwork guts. Mechanical movement. That is an aesthetic love of machines for me. I entered the A&P world with glass. You can put a lot of useful information in a small, easy-to-read space. Once you know where to look, you don't have to look so far, and idiot lights can be put anywhere the programmer wants. If you fly an Airbus/Eurocopter helicopter, the engine gauge just shows a "good" meter unless a parameter goes to yellow or red. Only then can you read what is wrong. How many can see the oil pressure in a Cessna or Mooney? They are not in obvious places and it takes practice to know where to look. Most pilots don't ever look. Maybe at startup. They see altimeter and they see airspeed, and maybe once an hour they think about checking the oil or CHT, then they go back to their daydream. So what is better: mechanical gauges no one looks at, or a computer that decides for you? Seems to be inaction on the pilot's part either way.
#### gtae07
##### Well-Known Member
I trained on steam without GPS, flying cross-countries with a watch, sectional, and E6-B. I've flown basic VFR steam and Skyview in the same airplane; most of the hours on steam. And I've found that other than a slight edge to a round airspeed or altitude gauge, glass wins hands-down everywhere else once you depart from day VFR around the patch. The biggest difficulty I had in adjusting to the Skyview was relearning my speeds (the steam was marked in MPH and the Skyview was in knots).
With glass I spend less time heads-down dicking with calculations and trying to read a map and juggling paperwork and looking up to make big corrections. I spend more time looking out the canopy, with a better sense of where I am (and where everyone else is), what's ahead of me, and what the airplane is doing. I can see information (e.g. weather) instead of having to hear it from FSS and try to picture it in my head and cross-correlate it with a small map. I don't have to spend as much brainpower on low-level computational and administrative tasks, freeing me to focus on higher-level tasks. Especially with even a basic autopilot, I'm not nearly as fatigued at the end of a long flight.
To be fair, a large chunk of this capability can be had with an iPad, Foreflight, and ADS-B In, and most of it I don't really use when I'm making that day VFR local fun flight--I just keep the map pulled up so I can keep clear of the Class B and glance at it for traffic. But it really earns its keep on the longer flights.
Relevant story: So one day (2012 ish) my parents were flying up the east coast to go visit some friends from my dad's Navy days. They ran into some weather about the time they got to Virginia. At the time, Dad's airplane was in the basic VFR steam configuration mentioned earlier, plus a Garmin 195 (monochrome handheld GPS). Mom (not a pilot) wound up doing the flying while Dad tried to navigate them out of the weather and get them to somewhere they could land. Once they did, Mom turned to him and said "I don't care what it costs, you upgrade that panel or I'm not flying anywhere with you ever again". She'd seen what was available in other guys' airplanes, and even though she wasn't a pilot she could immediately appreciate the usefulness it could bring even to her.
Acclimation and aesthetics. Just like the first digital clocks and watches. A lot has to do with where you enter and where you have to go with the technology.
As I said, I trained in the all-steam no-GPS world. But my first professional job had me testing (what was at the time) the most advanced cockpit in the civilian world, in the engineering sim. I'd seen representations of early glass cockpits (1980s era, think MD-88 and 757 level) in my computer flight sims and my visit to the sim at Dad's employer, but I was utterly blown away the first time I sat in our engineering sim as a new co-op. Full-color moving map with a cursor and weather overlay?! And then they showed me the early alpha version of what would come to be called "synthetic vision". What was this magic?!
Now Dad's little homebuilt has that (and more!) on a strikingly-similar-looking display.
We're very near the point now where a basic EFIS and engine monitor package is cost-competitive with good VFR steam (I think we've passed it for IFR already). I think we're at (or even past) the point where it's lighter and easier to install. Now that STCs are out there to put experimental EFIS into certified airplanes, I expect that the market for round gauges, especially gyros and vacuum equipment, will dry up.
https://www.physicsforums.com/threads/the-collision-clock-experiment.418803/

# The Collision Clock Experiment
1. Jul 27, 2010
### calebhoilday
This is a thought experiment that best depicts my issue in understanding special relativity, particularly the consistency of the velocity addition and velocity subtraction formulas with other aspects of the theory.
This experiment consists of two clocks: clock A and clock B.
Clock A consists of a cannon that fires a projectile every second at a wall. The distance between the wall and the clock is (0.75)^0.5 light-seconds. The velocity of a projectile is (0.75)^0.5 C. These parameters result in a collision between a projectile and the wall every second in the observing frame of reference.
Clock B consists of two cannons that fire a projectile every second at each other. The distance between the cannons is 2x(0.75)^0.5 light-seconds. The velocity of each projectile is (0.75)^0.5 C. These parameters result in a collision between projectiles every second.
The purpose of this experiment is to convert from the frame of an observer to that of a projectile and determine the duration between event 1 (firing) and event 2 (collision).
I am led to believe that the duration in a moving frame, divided by the duration in a stationary frame, is equal to (1- (V^2/C^2))^0.5. If this is the case, a projectile from either clock can be expected to consider that 0.5 seconds pass between event 1 and event 2, as time is dilated for the projectile so that for every one second on an observer’s clock, 0.5 seconds pass on a projectile’s clock based on the velocity the projectiles have.
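As a quick numeric aside (not part of the original post), the stated parameters do give that factor exactly:

```python
import math

v = 0.75 ** 0.5                    # projectile speed in units of c
factor = math.sqrt(1 - v ** 2)     # the (1 - V^2/C^2)^0.5 from the post
# one observer second should correspond to 0.5 projectile seconds
assert abs(factor - 0.5) < 1e-12
```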
Determining the duration between event 1 and event 2 in the projectile's frame of reference consists of finding:
(1) The distance between the projectile and the object that the projectile will collide with, in the frame of the projectile.
(2) The velocity of the object that the projectile will collide with in the frame of the projectile.
(3) The duration between event 1 and event 2 in the projectile's frame of reference, by dividing the distance determined in (1) by the velocity determined in (2).
Clock A
(1)
D = length in observer's frame / (length of projectile with velocity / length of projectile when stationary) = 2(0.75)^0.5 light-seconds
(2)
U = (S - V)/(1 - SV/C^2)
= (0.75)^0.5
(3)
T=D/U
= 2(0.75)^0.5/(0.75)^0.5
= 2 seconds
Clock B
(1)
D = length in observer's frame / (length of projectile with velocity / length of projectile when stationary) = 4(0.75)^0.5 light-seconds
(2)
U = (S - V)/(1 - SV/C^2)
= 2(0.75)^0.5/(1+0.75)
=(8/7)*(0.75)^0.5
(3)
T=D/U
= 4(0.75)^0.5 / ((8/7)(0.75)^0.5)
= 3.5 seconds
As can be seen, clock A and clock B produce different results for the duration between event 1 and event 2 in the projectile's frame of reference, when in both cases the duration for the observer is 1 second. Neither of them produced a result consistent with the formula-derived duration of 0.5 seconds.
????
2. Jul 27, 2010
### starthaus
This is wrong; the correct ratio is the inverse of what you "are led to believe".
3. Jul 27, 2010
### calebhoilday
If what you are saying, starthaus, is correct, then everything is good for clock A. My only problem is that I always considered that it was the moving observer that dilated, or considered less time to pass; what you're saying is the reverse.
4. Jul 28, 2010
### Austin0
Hi calebhoilday
I am somewhat confused regarding your parameters here, comparing A to B.
If I am understanding just B, then with the two opposite cannons it seems that the relative velocity would be .989c. But this provides no information regarding the distance between the two frames. The only available information is relative to the rest frame, i.e. .866 ls between the two cannons as calculated in the cannonball frame.
So if you use the v relative to the rest frame you get t' = .5 s.
If you use the relative v between cannonballs you get t' = .4378 s.
If you do use the .989c gamma as applied to the distance you get t' = .1295 s, but I don't think this has a reasonable basis, because self-evidently the v wrt the rest frame is not .989c.
So I am unsure about your projectile length and don't get the 3.5 s you arrived at????
Or maybe I am just not understanding
5. Jul 28, 2010
### calebhoilday
I’ve seen it depicted before that if one is travelling with velocity, outside objects appear expanded. I've gone about it this way to be consistent with that literature, even though it is probably wrong. If you invert my length manipulations then you get a result that is consistent with the formula for clock A, but not for clock B. The problem still remains.
6. Jul 28, 2010
### Staff: Mentor
You must take great care in analyzing these "clocks" since event 1 and event 2 take place at different locations. You can't simply apply the time dilation formula; these are not simple clocks. (A simple clock is one where in its own frame the 'ticks' take place at a single location.)
This is a misleading way of putting it. A clearer statement might be: An observer in a 'stationary' frame will observe a moving clock to run slow by a factor of gamma = 1/(1- (V^2/C^2))^0.5. This time dilation formula applies to measurements of moving clocks--normal 'clocks' that have a single location in the moving frame--not to generic 'durations'. To analyze how 'durations' between spatially separated events will transform between frames, you must also consider length contraction and the relativity of simultaneity. (All of which is contained in the Lorentz transformations.)
I suggest you start over.
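A minimal numeric sketch (in units where c = 1; not part of the original reply) of the Lorentz-transformation approach suggested here, applied to clock A's two spatially separated events:

```python
import math

def lorentz(t, x, v):
    """Transform an event (t, x) into a frame moving at speed v (c = 1)."""
    g = 1 / math.sqrt(1 - v ** 2)
    return g * (t - v * x), g * (x - v * t)

v = 0.75 ** 0.5                       # 0.866c, so gamma = 2
# Clock A in the cannon frame: firing at (0, 0), impact at (1 s, 0.866 ls).
t1, x1 = lorentz(0.0, 0.0, v)
t2, x2 = lorentz(1.0, v, v)
assert abs((t2 - t1) - 0.5) < 1e-9    # 0.5 s elapse in the projectile frame
assert abs(x2 - x1) < 1e-9            # both events at the projectile's position
```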
7. Jul 28, 2010
### calebhoilday
Im guessing it is a relativity of simultaneity issue.
Event 1 and event 2 may not be happening in the same location, but event 1 for clock A and clock B happens in the same position, and event 2 happens in the same position for both clocks also.
This idea of a simple clock doesn't make any sense to me. Time is defined by motion and requires consistent changes in location in order to be measured.
Should I change "D = length in observer's frame / (length of projectile with velocity / length of projectile when stationary)" to D = length in observer's frame * (length of projectile with velocity / length of projectile when stationary)???
8. Jul 28, 2010
### JesseM
True in practice, but in relativity an ideal clock is basically what you get if you take a light clock and consider the limit as the distance between mirrors approaches zero.
9. Jul 28, 2010
### Staff: Mentor
Yes it is.
Stop calling them 'clocks'; you'll just continue to confuse matters.
It might be helpful to imagine actual clocks co-located with the two events. In the rest frame of the cannons, those clocks are synchronized. In the frame of the moving cannonball, they are not.
You don't have a wristwatch or a wall clock? That's the sort of clock I'm talking about.
It takes one second for the cannonball to travel from cannon to target (in the frame of the cannons). Whether there's a collision every second depends on the rate at which you fire the cannon. Fire it every 10 seconds and the collisions will be 10 seconds apart. (Again, in the frame of the cannons.)
10. Jul 28, 2010
### calebhoilday
Sorry, I left out that the cannons fire every second, but I think it really doesn’t matter a whole lot.
A collision clock is the same as any other clock that can exist. A wristwatch isn't a simple clock; it relies on something moving, even if it is just electrons over very small distances.
If considering relativity of simultaneity, the difference between events from one frame to another is dependent on the relative velocity of these frames and nothing else according to the formula. All the projectiles have the same velocity, compared to the cannons or the observer.
The velocity addition formula is meant to be the outcome of combining length contraction, time dilation and relativity of simultaneity. I just don't get how it is and this is the best way I can depict why.
11. Jul 28, 2010
### Staff: Mentor
If all you want to do is figure out the time it takes for the cannonball to go from cannon to target in the frame of the cannonball, you can just use the time dilation formula. From the view of the cannon frame, the time as measured on a clock moving with the cannonball is dilated by a factor of 2. So if the cannon frame measures the travel time as 1 second, the cannonball will measure the trip to take 1/2 second.
You can also get this by taking the distance the cannonball frame measures between cannon and target (use length contraction) and dividing it by the speed (which is given). You'll get the same answer, of course.
12. Jul 28, 2010
### JesseM
No, there is only contraction, in every frame. If my ship is moving at relativistic speed relative to your ship, then in my rest frame your ship is shrunk relative to mine, and in your rest frame my ship is shrunk relative to yours. All inertial frames are equally valid, and the laws of physics work the same way in all of them.
In your example of clock A, in the frame of the cannon the projectile moves at 0.866c and the distance from the cannon to the wall is 0.866 light-seconds, so the time is one second. In the frame of the projectile, the distance between the cannon and the wall is shrunk by a factor of $$\sqrt{1 - 0.866^2}$$ = 0.5, so the distance is only 0.5*0.866 = 0.433 light-seconds. The projectile is at rest while the wall is moving towards the projectile at 0.866c, so the time for the projectile to reach the wall is distance/speed = 0.433/0.866 = 0.5 seconds.
For clock B, we have to consider the relativity of simultaneity. If two events are simultaneous and a distance x apart in one frame, then in a frame moving at v relative to that first frame, the time between the events is gamma*(vx/c^2), with gamma=1/sqrt(1 - v^2/c^2). In your example the two cannons fire simultaneously and a distance of 2*0.866 = 1.732 light-seconds apart in their rest frame, and the left projectile's frame is moving at 0.866c relative to the cannon frame (giving us gamma=2), so the right cannon fired 2*(0.866c*1.732) = 3 seconds earlier than the left cannon. The distance between the cannons at any moment is 0.5*1.732 = 0.866 light-seconds in the frame of the left projectile--so if we define x=0 as the position of the left projectile, then at the moment the left cannon fires the right cannon is at x=0.866 light seconds, and since the right cannon is moving towards the left projectile at 0.866c in this frame, that must mean that 3 seconds earlier (when the right cannon fired) the right cannon was at position x=0.866 + 3*0.866 = 3.464 light-seconds.
According to the velocity addition formula, the right projectile was traveling at (0.866c + 0.866c)/(1 + 0.866*0.866) = 1.732c/1.75 = 0.9897c. So, if the right projectile was fired at a distance of 3.464 light-seconds from x=0, it reaches x=0 after an interval of 3.464/0.9897 = 3.5 seconds. Since the right cannon fired 3 seconds before the left cannon in this frame, it must hit the left projectile 0.5 seconds after the left cannon was fired in this frame.
So, when you correctly factor into account length contraction, the relativity of simultaneity, and relativistic velocity addition, you do find that the time to collision in the projectile frame is 0.5 seconds for both clocks A and B.
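The clock-B bookkeeping in this post can be collected into one numerical sketch (my own transcription, in units where c = 1, with 0.866 taken as exactly √0.75):

```python
import math

v = math.sqrt(0.75)                  # 0.866c: speed of each projectile in the cannon frame
gamma = 1 / math.sqrt(1 - v**2)      # = 2

L = 2 * v                            # cannon separation in the cannon frame: 1.732 light-seconds

# Relativity of simultaneity: in the left projectile's frame the right cannon
# fires earlier by gamma * v * L / c^2.
dt_sim = gamma * v * L               # = 3 seconds

# Length contraction: cannon separation in the projectile frame.
L_contracted = L / gamma             # = 0.866 light-seconds

# Position of the right cannon at its firing event, taking x = 0 at the left projectile.
x_fire = L_contracted + dt_sim * v   # = 3.464 light-seconds

# Relativistic velocity addition: speed of the right projectile in this frame.
u = (v + v) / (1 + v * v)            # ~0.9897c

t_travel = x_fire / u                # 3.5 seconds from right-cannon firing to impact
t_impact = t_travel - dt_sim         # 0.5 seconds after the left cannon fires
```

The upshot: the right projectile flies for 3.5 seconds in this frame, but because its cannon fired 3 seconds early, the impact comes 0.5 seconds after the left cannon fires, matching clock A.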
13. Jul 28, 2010
### calebhoilday
Here are the true results without relativity of simultaneity factored in.
Clock A
(1)
D = length in observer's frame * (length of projectile with velocity / length of projectile when stationary) = 0.5(0.75)^0.5 light-seconds
(2)
U = (S-V)/(1-(SV/C^2))
= (0.75)^0.5
(3)
T=D/U
= 0.5(0.75)^0.5/(0.75)^0.5
= 0.5 seconds
Clock B
(1)
D = length in observer's frame * (length of projectile with velocity / length of projectile when stationary) = (0.75)^0.5 light-seconds
(2)
U = (S-V)/(1-(SV/C^2))
= 2(0.75)^0.5/(1+0.75)
=(8/7)*(0.75)^0.5
(3)
T=D/U
= (0.75)^0.5 / ((8/7)*(0.75)^0.5)
= 0.875 seconds
How do you get 3.5 seconds as clock B time before factoring in relativity of simultaneity?
14. Jul 29, 2010
### JesseM
I didn't, that figure took into account the relativity of simultaneity which says that in the frame of the left projectile, the right cannon fired 3 seconds earlier than the left one.
15. Jul 29, 2010
### calebhoilday
In my calculations the length is 0.866 light-seconds, not 3.464 light-seconds. Could you please explain, JesseM? Sorry, I'm having difficulty following your formatting.
16. Jul 29, 2010
### Austin0
Hi JesseM. Working off your figures above leads to some interesting questions.
If we posit the projectiles being clocks that set to 0 on leaving the cannon:
The above simultaneity result, of the right cannon firing 3 s before the left, presents some pregnant implications. The situation is symmetrical, so the same applies to the right cannonclock, i.e. the left cannon fired 3 s earlier, yes??
We can assume the cannon frame observers would see the clocks emerging with proper time = 0, so this can be taken as hypothetical empirical reality, one that all frames would agree upon.
Likewise a cannon frame observer proximate to collision would see .5 s on both clocks.
So the idea that from the frame of the left cannonclock the right clock passed 3 s of proper time before impact would appear to not be consistent with observed reality.
It could only be that it passed 3 s of the left frames coordinate time, yes?
This would seem to imply that although both cannonclock frames would agree that proximate observers in their respective frames would be 3 s out of synch with the opposite cannonclock, that would only be possible if it was actually their clocks that were out of synch.
I.e. if it is established that both cannonclocks actually, by observation, read 0 at firing, then it makes no sense to say an opposite observer sees it reading -3 at that point.
There is no point where it reads -3 .
This leads to some interesting speculation wrt the reality and meaning of simultaneity.
I.e. Is it something that occurs basically within a frame perhaps as a result of the synch convention or does it have actual temporal meaning as some seem to think.
There are other questions here but I'd like your take on this. Something I am missing maybe?
Thanks
BTW did your "how to" book help with Heidegger, hermeneutically speaking?
17. Jul 29, 2010
### JesseM
Remember that in the frame where the left projectile is at rest, both the left and right cannons are moving at 0.866c. At the moment the left projectile is fired from the left cannon, the right cannon is indeed 0.866 light-seconds away from the left projectile. But 3 seconds earlier, the motion of the right cannon means that it was an additional distance of 3*0.866c from the position where the left projectile would later come to rest in this frame, which I defined as being position x=0. So, the total distance of the right cannon from position x=0 at the moment the right cannon fired must be 0.866 light seconds + 3*0.866c = 3.464 light-seconds.
18. Jul 29, 2010
### JesseM
In the frame where the right projectile is at rest after being fired, yes.
If you want to assume that each projectile has an onboard clock which is set to T=0 at the moment it is fired from the cannon, that's fine.
All frames would agree on the time the two onboard clocks show at the moment they collide, since each clock's reading will be at the same local point in spacetime when they collide, and different frames never disagree about local events.
Proper time is frame-invariant, all frames agree that the two projectiles experienced 0.5 seconds of proper time between being fired and colliding. And if you're talking about the time between the right cannon firing and the impact, in the frame where the left projectile is at rest (after being fired, it's moving along with the cannon beforehand) the coordinate time is 3.5 seconds, not 3 seconds. 3 seconds was the coordinate time between the right cannon firing and the left cannon firing.
3 seconds is coordinate time, yes.
Each frame uses a system of clocks and rulers at rest in that frame (with the clocks synchronized in that frame according to the Einstein synchronization convention) to define the position and time coordinates of each event. So, imagine a ruler moving inertially in such a way that the left projectile is at rest relative to the ruler between being fired and colliding with the right projectile, and also imagine that we have a series of clocks at rest relative to the ruler, attached to each ruler-marking. Then when I say the left cannon is fired at x=0 light-seconds and t=0 seconds in this frame, I mean that the event of the left cannon firing occurs right next to the x=0 light-second mark on the ruler, and the clock at that mark reads t=0 seconds when it fires. And when I say that the right cannon fires at x=3.464 light-seconds, t=-3 seconds, I mean that the event of the right cannon firing occurs right next to the x=3.464 light-second mark on this ruler, and the clock sitting at that mark reads t=-3 seconds when this happens.
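To make the ruler-and-clock picture concrete, here is a small sketch (my own, not from the post; it assumes units where c = 1 and 0.866 = √0.75) that Lorentz-transforms the two firing events from the cannon frame into the left projectile's rest frame:

```python
import math

v = math.sqrt(0.75)                  # the left projectile moves at 0.866c in the cannon frame
gamma = 1 / math.sqrt(1 - v**2)      # = 2

def to_projectile_frame(x_c, t_c):
    """Lorentz-transform an event from the cannon frame into the
    left projectile's rest frame (c = 1)."""
    x = gamma * (x_c - v * t_c)
    t = gamma * (t_c - v * x_c)
    return x, t

# Cannon-frame coordinates: left cannon fires at (x, t) = (0, 0); the right
# cannon fires simultaneously (in that frame) at x = 1.732 light-seconds.
left_fire = to_projectile_frame(0.0, 0.0)     # stays at (0, 0)
right_fire = to_projectile_frame(2 * v, 0.0)  # lands at (3.464, -3)
```

This reproduces the coordinates quoted above: on the projectile frame's ruler and clocks, the right cannon fires at x = 3.464 light-seconds, t = -3 seconds.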
For an illustration of this sort of thing, you could check out the pictures and description I gave of two ruler/clock systems moving alongside each other in the OP of this thread...
I don't remember mentioning Heidegger on these forums, are you thinking of another poster? Or maybe you saw a Heidegger book on my librarything page linked on my website?
19. Jul 29, 2010
### JesseM
Incidentally, note that it actually is possible to get this result without referring to the relativity of simultaneity, as long as we know that the two projectiles hit each other at the exact midpoint of the two cannons. Instead of having x=0, t=0 be the coordinates of the left cannon firing as I assumed earlier, let's redefine x=0, t=0 to be the coordinates of the right projectile hitting the left projectile, in the frame where the left projectile was at rest. If the two cannons are 0.866 light-seconds apart in this frame, and the two projectiles crash at the midpoint, then in this frame the left cannon must be at x=-0.433 light-seconds at t=0, while the right cannon must be at x=0.433 light-seconds at t=0. And we know in this frame the right cannon is moving towards the left projectile at 0.866c, so the right cannon's position as a function of time must be:
x(t) = 0.433 - 0.866*t
Meanwhile, because of relativistic velocity addition, the right projectile must be moving towards the left projectile at 0.9897c. And if the right projectile reaches x=0 at t=0, its position as a function of time must be:
x(t) = -0.9897*t
So, to figure out when the right projectile emerged from the right cannon, just set those two equal:
0.433 - 0.866*t = -0.9897*t
0.433 = -0.1237*t
Dividing both sides by -0.1237:
t = -3.5
So, the right projectile must have been shot from the right cannon 3.5 seconds before the two projectiles collided in this frame.
Meanwhile, the left projectile is at rest at x=0 in this frame, so for the left projectile x(t)=0. And the left cannon was at x=-0.433 at t=0, and it's moving in the -x direction at 0.866c, so its position as a function of time is:
x(t) = -0.433 - 0.866*t
So setting x(t) for the left projectile equal to x(t) for the left cannon to find out when the left projectile was fired gives:
0 = -0.433 - 0.866*t
0.433 = -0.866*t
Dividing both sides by -0.866:
t = -0.5
So, the left projectile must have been shot from the left cannon 0.5 seconds before the two projectiles collided.
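The two little linear equations above can be checked mechanically (a sketch of my own, in units where c = 1, writing 0.866 as √0.75 and 0.433 as half of that):

```python
import math

v = math.sqrt(0.75)            # 0.866c: speed of the cannons in the left projectile's frame
u = 2 * v / (1 + v * v)        # 0.9897c: right projectile's speed, by velocity addition
half = v / 2                   # 0.433 light-seconds: each cannon's distance from the midpoint

# Right cannon x(t) = half - v*t meets right projectile x(t) = -u*t:
t_right_fired = half / (v - u)     # -3.5 seconds (before the collision at t = 0)

# Left cannon x(t) = -half - v*t meets left projectile x(t) = 0:
t_left_fired = -half / v           # -0.5 seconds
```

So the right projectile is fired 3.5 seconds before the collision and the left one 0.5 seconds before it, exactly as derived above.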
20. Jul 31, 2010
### calebhoilday
It kinda makes sense at first glance; I'll have to run through the math. You have to remember that if the cannon on the right was fired before, the velocity of the projectile from the left is 0, and you will have to consider the firing of both cannons as simultaneous. As the cannon on the right couldn't be fired until the left one was, and once the left fired the right had already fired, you deal with the projectile on the left never witnessing the right cannon firing. Not sure if this is an issue or not.
https://en.wikipedia.org/wiki/Category:Surgery_theory | # Category:Surgery theory
Not to be confused with surgery, a medical operation.
In mathematics, specifically in topology, surgery theory is a cutting and gluing technique used to modify manifolds, and is the main tool for classifying high-dimensional manifolds, where "high" means dimension greater than 4.
The main article for this category is Surgery theory.
## Pages in category "Surgery theory"
The following 28 pages are in this category, out of 28 total. This list may not reflect recent changes.
https://johncarlosbaez.wordpress.com/2020/02/18/topos-theory-part-7/ | ## Topos Theory (Part 7)
I’m almost done explaining why any presheaf category is an elementary topos, meaning that
• it has finite colimits;
• it has finite limits;
• it's cartesian closed;
• it has a subobject classifier.
Last time I explained why such categories are cartesian closed; now let’s talk about the subobject classifier!
### Subobject classifiers
In the category of sets, 0-element and 1-element sets play special roles. The 0-element set is the initial object, and any 1-element set is a terminal object. But 2-element sets are also special! They let us talk about ‘truth values’.
We can take any such set, call it 2, and call its elements ‘true’ and ‘false’. Then subsets of any set $X$ correspond in a one-to-one way with functions
$\chi \colon X \to 2$
by associating to any subset $S \subseteq X$ the function taking the value ‘true’ on $S$ and ‘false’ elsewhere: its characteristic function.
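In $\mathsf{Set}$ this bijection is easy to see concretely. Here is a minimal sketch (function names mine) of the two directions:

```python
def characteristic(S, X):
    """Characteristic function of a subset S of X, valued in {True, False}."""
    return {x: x in S for x in X}

def subset_of(chi):
    """Recover the subset from a characteristic function chi : X -> 2."""
    return {x for x, truth in chi.items() if truth}

X = {1, 2, 3, 4}
S = {2, 4}
```

Round-tripping either way gives back what you started with, which is exactly the claimed one-to-one correspondence between subsets of $X$ and functions $X \to 2$.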
The idea of a subobject classifier is simply to copy this idea in other categories. A subobject classifier, which we’ll call $\Omega,$ will serve as an ‘object of truth values’ in the category at hand. To make this precise, we’ll demand that subobjects of any object $X$ correspond bijectively to morphisms $\chi \colon X \to \Omega.$
That’s the idea. But to make the idea useful we need to say how subobjects of $X$ correspond to morphisms $\chi \colon X \to \Omega.$ And of course before that we’ll need to say exactly what a subobject is!
In $\mathsf{Set}$ we can pick out a subset of $X$ using a monomorphism $i \colon S \rightarrowtail X,$ where I use the funny arrow to denote a monomorphism (affectionately known as a ‘mono’). The idea is that the image of $i$ is the subset of $X$ we care about. But two different monos can have the same image, so we have to be careful!
Suppose we have two different monos
$i \colon S \rightarrowtail X$
and
$i' \colon S' \rightarrowtail X$
in $\mathsf{Set}.$ When do they have the same image? The answer is: precisely when there’s an isomorphism
$f \colon S \to S'$
such that
$i = i' \circ f.$
Puzzle. Prove this.
So, more generally, in any category, we shall say that two monos $i \colon S \rightarrowtail X$ and $i' \colon S' \rightarrowtail X$ are isomorphic iff there’s an isomorphism $f \colon S \to S'$ such that $i = i' \circ f.$
Definition. A subobject of an object $X$ in a category $C$ is an isomorphism class of monos into $X.$
We will eventually study subobjects to see how much they behave like subsets—there’s a lot to say about that! But first let’s see what a subobject classifier is.
Again we return to the category $\mathsf{Set}$ to see how this should work. We need to know how subobjects of $X \in \mathsf{Set}$ arise from functions $\chi \colon X \to 2.$ For this we need to know which elements of $X$ get mapped by $\chi$ to ‘true’, so we need to specify which element of the set $2$ counts as ‘true’. We do this using a function
$\mathrm{true} \colon 1 \rightarrowtail 2$
which of course is a mono. Then, given any function
$\chi \colon X \to 2$
we can take the pullback of $f$ along $\mathrm{true}$ and get a mono
$i \colon S \rightarrowtail X$
Please see diagram (2) in Section I.3 of Mac Lane and Moerdijk to see this pullback in all its glory.
The usual description of pullbacks in $\mathsf{Set}$ assures us that
$S \cong \{x \in X : \; \chi(x) = \mathrm{true} \}$
and the map $i \colon S \rightarrowtail X$ is the obvious inclusion.
We can copy this procedure in any category with pullbacks, leading to the definition we seek:
Definition. Let $\mathsf{C}$ be a category with pullbacks. A subobject classifier is an object $\Omega \in \mathsf{C}$ equipped with a mono
$\mathrm{true} \colon 1 \rightarrowtail \Omega$
such that for any object $X \in \mathsf{C}$ there is a bijection between morphisms
$\chi \colon X \to \Omega$
and subobjects of $X,$ given as follows: for any such morphism $\chi$, we pull back $\mathrm{true}$ along $\chi$ obtaining a mono
$i \colon S \rightarrowtail X$
and then take the subobject of $X$ corresponding to this.
Here we are using a fact:
Puzzle. Show that the pullback of any mono is a mono.
Thus, $i$ is automatically a mono, because we are assuming $\mathrm{true}$ is.
### Subobjects in presheaf categories
Let’s see what subobjects and subobject classifiers look like in presheaf categories. So now let $\mathsf{C}$ be any category and let
$\widehat{\mathsf{C}} = \mathsf{Set}^{\mathsf{C}^{\mathrm{op}}}$
be the category of presheaves on $\mathsf{C}.$
To get started: what are monos in $\widehat{\mathsf{C}}$ like? Remember from Part 5 pullbacks are computed pointwise in presheaf categories. Furthermore, monos can be defined in terms of pullbacks:
Puzzle. Show that a morphism $f \colon X \to Y$ is a mono iff its pullback along itself is the identity $1 \colon X \to X.$
This implies that determining whether a morphism in $\widehat{\mathsf{C}}$ is a mono must also be a ‘pointwise’ matter: that is, one that you can check by looking at one object $c \in \mathsf{C}$ at a time. But you can prove this directly:
Puzzle. Let $i \colon F \Rightarrow G$ be a morphism between presheaves $F , G \in \widehat{\mathsf{C}},$ that is, a natural transformation between functors $F, G \colon \mathsf{C}^{\mathrm{op}} \to \mathsf{Set}.$ Show that $i$ is a mono iff for each $c \in \mathsf{C},$
$i_c \colon F(c) \to G(c)$
is a mono in $\mathsf{Set},$ that is, a one-to-one function.
Thus, people say a morphism of presheaves $i \colon F \to G$ makes $F$ into a subpresheaf of $G$ when each function $i_c \colon F(c) \to G(c)$ is one-to-one, so $F(c)$ corresponds to a subset of $G(c).$ It’s good to look at this in examples, like the case of graphs, where we get the concept of ‘subgraph’: one graph included in another.
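For the graph example, here is a sketch (my own encoding, not from the post): a graph is a presheaf on the two-object category with objects $E, V$, its data being a set of vertices, a set of edges, and source/target maps. A subgraph inclusion is then exactly a pointwise-injective presheaf morphism:

```python
class Graph:
    """A graph as a presheaf: F(V) = vertices, F(E) = edges,
    with source/target recorded as the edge's endpoint pair."""
    def __init__(self, vertices, edges):
        self.vertices = set(vertices)    # F(V)
        self.edges = dict(edges)         # F(E): edge name -> (source, target)

def is_subgraph_inclusion(F, G):
    """Check that F sits inside G with matching endpoints, i.e. that the
    inclusion is injective on both components and commutes with source/target."""
    return (F.vertices <= G.vertices and
            all(e in G.edges and G.edges[e] == st for e, st in F.edges.items()))

G = Graph({'u', 'v', 'w'}, {'e1': ('u', 'v'), 'e2': ('v', 'w')})
F = Graph({'u', 'v'}, {'e1': ('u', 'v')})
```

Here `F` is a subgraph of `G`, while a graph with a stray vertex not in `G` would fail the check.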
Of course, we should remember that a subobject of $G$ is really an equivalence class of monos. Two different monos $i \colon F \rightarrowtail G$ and $i' \colon F' \rightarrowtail G$ give the same subobject of $G$ if there's an isomorphism between $F$ and $F'$ that makes the obvious triangle commute. But sometimes people slack off and say $F$ is a subobject of $G$ if it's equipped with a mono $i \colon F \rightarrowtail G.$ I was coming close to doing that in the last paragraph.
### Subobject classifiers in presheaf categories
Now we’re ready to understand the subobject classifier in a presheaf category. I’ll just tell you what it is. The subobject classifier in the presheaf category $\widehat{\mathsf{C}}$ assigns to each object $c \in \mathsf{C}$ the set of ‘sieves’ on $c.$ So what’s a sieve?
Definition. Given a category $\mathsf{C},$ a sieve on an object $c \in \mathsf{C}$ is a collection of morphisms to $c$ such that if $f \colon d \to c$ is in the sieve and $g \colon d' \to d$ is any morphism, then $f \circ g \colon d'\to c$ is in the sieve.
In other words, a sieve is a collection of morphisms with a fixed target that’s closed under precomposition. The name ‘sieve’ should remind you that if a piece of grain can get through a sieve, any smaller piece of grain can also get through. You can think of $f \circ g$ as ‘smaller’ than $f$ in some sense.
Here’s a slick way to think about sieves. Remember that the Yoneda embedding
$y \colon \mathsf{C} \to \widehat{\mathsf{C}}$
sends any object $c \in \mathsf{C}$ to a presheaf
$y(c) = \mathrm{hom}(-, c)$
called a representable presheaf.
Here’s the cool fact: a sieve on $c$ is just the same as a subobject of $y(c)!$ For each object $d$ it gives a subset of $\mathsf{hom}(d,c),$ and for each morphism $g \colon d' \to d$ it gives a map from $\mathsf{hom}(d,c)$ to $\mathsf{hom}(d',c),$ namely the map sending each $f$ to $f \circ g.$
The subobject classifier $\Omega$ in $\widehat{\mathsf{C}}$ is a beautiful thing: it assigns to each object the set of all sieves on that object!
That is, for each object $c \in \mathsf{C},$ the set $\Omega(c)$ is the set of all sieves on $c.$ But we also need to say what $\Omega$ does to morphisms. Given a morphism $f \colon c' \to c,$ the map
$\Omega(f) \colon \Omega(c) \to \Omega(c')$
sends sieves on $c$ to sieves on $c'$ as follows. For any sieve $S$ on $c,$ we say a morphism is in $\Omega(f)(S)$ if its composite with $f$ is in $S.$
We also need to describe
$\mathrm{true} \colon 1 \to \Omega$
A terminal presheaf $1$ sends each object of $\mathsf{C}$ to a one-element set, so $\mathrm{true}$ must pick out an element of $\Omega(c)$ for each object $c \in \mathsf{C}$ in a natural way (it’s a natural transformation.)
In other words, $\mathrm{true}$ must choose a sieve on $c$ for each $c \in \mathsf{C}$ in a natural way. The naturality condition here says that if $g \colon c' \to c,$ a morphism is in the chosen sieve on $c'$ iff its composite with $g$ is in the chosen sieve on $c.$
How does $\mathrm{true}$ do this wonderful thing? Simple: for each object $c$ it chooses the sieve containing all morphisms to $c.$ Then the naturality condition holds trivially.
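As a toy check (my own example, not from the post), here is a sketch computing sieves in a three-object poset category. In a poset there is at most one morphism $d \to c$ (existing iff $d \le c$), so a sieve on $c$ is just a down-closed set of objects below $c$, and $\Omega(f)$ is restriction:

```python
from itertools import chain, combinations

# A toy poset category with a <= b <= c (order is reflexive and transitive).
objects = ['a', 'b', 'c']
leq = {('a','a'), ('b','b'), ('c','c'), ('a','b'), ('a','c'), ('b','c')}

def below(c):
    return frozenset(d for d in objects if (d, c) in leq)

def sieves_on(c):
    """All sieves on c, i.e. down-closed subsets of the objects below c."""
    elems = sorted(below(c))
    candidates = chain.from_iterable(combinations(elems, r) for r in range(len(elems) + 1))
    return {frozenset(s) for s in candidates
            if all((d2, d) not in leq or d2 in s for d in s for d2 in objects)}

def omega(cp, S):
    """Omega(f) for the unique morphism cp -> c: restrict a sieve S on c to cp."""
    return frozenset(d for d in S if (d, cp) in leq)

# The 'true' sieve on c is the maximal one: every morphism into c.
true_c = below('c')
```

Here `c` carries four sieves ({}, {a}, {a,b}, {a,b,c}), and restricting the maximal sieve on `c` along `b -> c` gives the maximal sieve on `b`, illustrating the naturality of $\mathrm{true}.$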
Of course, we have to check that
$\mathrm{true} \colon 1 \to \Omega$
really is a subobject classifier for $\widehat{\mathsf{C}}.$ Only this will let us really understand what we’ve done here.
I’ll talk about this more next time, perhaps focusing on examples to build up intuition. For now I recommend that you read Section I.4 of Mac Lane and Moerdijk’s book for a general proof—and also a look at some examples!
The series so far:
Part 1: sheaves, elementary topoi, Grothendieck topoi and geometric morphisms.
Part 2: turning presheaves into bundles and vice versa; turning sheaves into etale spaces and vice versa.
Part 3: sheafification; the adjunction between presheaves and bundles.
Part 4: direct and inverse images of sheaves.
Part 5: why presheaf categories are elementary topoi: colimits and limits in presheaf categories.
Part 6: why presheaf categories are elementary topoi: cartesian closed categories and why presheaf categories are cartesian closed.
Part 7: why presheaf categories are elementary topoi: subobjects and subobject classifiers, and why presheaf categories have a subobject classifier.
Part 8: an example: the topos of time-dependent sets, and its subobject classifier.
### 3 Responses to Topos Theory (Part 7)
1. Toby Bartels says:
Typos: $d' to d$ should be $d' \to d$, and $\Omega(f)(S)$ should be $\Omega(f)(S)$.
2. Let’s look at an example of a presheaf topos, to see what various things I’ve been talking about actually look like—especially the subobject classifier.
https://k12.libretexts.org/Bookshelves/Mathematics/Geometry/07%3A_Similarity/7.11%3A_Inscribed_Similar_Triangles | 7.11: Inscribed Similar Triangles
Division of a right triangle into similar triangles using an altitude.
Inscribed Similar Triangles Theorem
Remember that if two objects are similar, their corresponding angles are congruent and their sides are proportional in length. The altitude of a right triangle creates similar triangles.
Inscribed Similar Triangles Theorem: If an altitude is drawn from the right angle of any right triangle, then the two triangles formed are similar to the original triangle and all three triangles are similar to each other.
In $$\Delta ADB$$, $$m\angle A=90^{\circ}$$ and $$\overline{AC}\perp \overline{DB}$$:
So, $$\Delta ADB\sim \Delta CDA\sim \Delta CAB$$:
This means that all of the corresponding sides are proportional. You can use this fact to find missing lengths in right triangles.
What if you drew a line from the right angle of a right triangle perpendicular to the side that is opposite that angle? How could you determine the length of that line?
Example $$\PageIndex{1}$$
Find the value of $$x$$.
Solution
Set up a proportion.
\begin{aligned} \dfrac{\text{ shorter leg in }\Delta SVT}{\text{ shorter leg in }\Delta RST}&=\dfrac{\text{ hypotenuse in }\Delta SVT}{ \text{ hypotenuse in }\Delta RST} \\ \dfrac{4}{x}&=\dfrac{x}{20} \\ x^2&=80 \\ x&=\sqrt{80}=4\sqrt{5} \end{aligned}
Example $$\PageIndex{2}$$
Now find the value of $$y$$ in $$\Delta RST$$ above.
Solution
Use the Pythagorean Theorem.
\begin{aligned} y^2+(4\sqrt{5})^2&=20^2 \\y^2+80&=400 \\ y^2&=320 \\ y&=\sqrt{320}=8\sqrt{5}\end{aligned}
Example $$\PageIndex{3}$$
Find the value of $$x$$.
Solution
Separate the triangles to find the corresponding sides.
Set up a proportion.
\begin{aligned} \dfrac{\text{ shorter leg in } \Delta EDG}{\text{ shorter leg in } \Delta DFG}&=\dfrac{\text{ hypotenuse in }\Delta EDG}{\text{ hypotenuse in }\Delta DFG} \\ \dfrac{6}{x}&=\dfrac{10}{8} \\ 48&=10x \\ 4.8&=x \end{aligned}
Example $$\PageIndex{4}$$
Find the value of $$x$$.
Solution
Set up a proportion.
\begin{aligned}\dfrac{\text{ shorter leg of smallest }\Delta}{\text{ shorter leg of middle } \Delta}=\dfrac{ \text{ longer leg of smallest }\Delta }{\text{ longer leg of middle }\Delta} \\ \dfrac{9}{x}&=\dfrac{x}{27} \\ x^2&=243 \\ x&=\sqrt{243}=9\sqrt{3}\end{aligned}
Example $$\PageIndex{5}$$
Find the values of $$x$$ and $$y$$.
Separate the triangles. Write a proportion for $$x$$.
Solution
\begin{aligned} \dfrac{20}{x}&=\dfrac{x}{35} \\ x^2&=20\cdot 35 \\ x&=\sqrt{20\cdot 35} \\ x&=10\sqrt{7}\end{aligned}
Set up a proportion for y. Or, now that you know the value of $$x$$, you can use the Pythagorean Theorem to solve for $$y$$. Use the method you feel most comfortable with.
$$\begin{array}{rlrl} \frac{15}{y} & =\frac{y}{35} & (10 \sqrt{7})^{2}+y^{2} & =35^{2} \\ y^{2} & =15 \cdot 35 & 700+y^{2}&=1225 \\ y & =\sqrt{15 \cdot 35} & y&=\sqrt{525}=5 \sqrt{21} \\& & y &=5 \sqrt{21} \end{array}$$
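The geometric-mean relations used in this example can be checked numerically; a quick sketch (the segment lengths 20 and 15 are taken from the figure):

```python
import math

# The altitude to the hypotenuse splits a 35-unit hypotenuse into
# segments of 20 and 15. Each leg is the geometric mean of its adjacent
# segment and the whole hypotenuse; the altitude is the geometric mean
# of the two segments.
p, q = 20.0, 15.0
hyp = p + q

leg1 = math.sqrt(p * hyp)      # 10*sqrt(7), the x found above
leg2 = math.sqrt(q * hyp)      # 5*sqrt(21), the y found above
alt = math.sqrt(p * q)         # the altitude itself

# Consistency: the two legs satisfy the Pythagorean Theorem.
check = leg1**2 + leg2**2      # 700 + 525 = 1225 = 35^2
```

This confirms that the proportion method and the Pythagorean Theorem agree.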
Review
Fill in the blanks.
1. $$\Delta BAD\sim \Delta ______ \sim \Delta ______$$
2. $$\dfrac{BC}{?}=\dfrac{?}{CD}$$
3. $$\dfrac{BC}{AB}=\dfrac{AB}{?}$$
4. $$\dfrac{?}{AD}=\dfrac{AD}{BD}$$
Write the similarity statement for the right triangles in each diagram.
Use the diagram to answer questions 7-10.
7. Write the similarity statement for the three triangles in the diagram.
8. If $$JM=12$$ and $$ML=9$$, find $$KM$$.
9. Find $$JK$$.
10. Find $$KL$$.
Find the length of the missing variable(s). Simplify all radicals.
1. Fill in the blanks of the proof for the Inscribed Similar Triangles Theorem.
Given: $$\Delta ABD$$ with $$\overline{AC}\perp \overline{DB}$$ and $$\angle DAB$$ is a right angle.
Prove: $$\Delta ABD\sim \Delta CBA\sim \Delta CAD$$
Statement Reason
1. 1. Given
2. $$\angle DCA$$ and $$\angle ACB$$ are right angles 2.
3. $$\angle DAB\cong \angle DCA\cong \angle ACB$$ 3.
4. 4. Reflexive PoC
5. 5. AA Similarity Postulate
6. $$\angle B\cong \angle B$$ 6.
7. $$\Delta CBA\sim \Delta ABD$$ 7.
8. $$\Delta CAD\sim \Delta CBA$$ 8.
Vocabulary
Term Definition
Inscribed Similar Triangles Theorem The Inscribed Similar Triangles Theorem states that if an altitude is drawn from the right angle of any right triangle, then the two triangles formed are similar to the original triangle and all three triangles are similar to each other.
Perpendicular Perpendicular lines are lines that intersect at a $$90^{\circ}$$ angle. The product of the slopes of two perpendicular lines is -1.
Proportion A proportion is an equation that shows two equivalent ratios.
Pythagorean Theorem The Pythagorean Theorem is a mathematical relationship between the sides of a right triangle, given by $$a^2+b^2=c^2$$, where a and b are legs of the triangle and c is the hypotenuse of the triangle.
Video: Inscribed Similar Triangles Principles - Basic
Activities: Inscribed Similar Triangles Discussion Questions
Study Aids: Right Triangle Similarity Study Guide
Practice: Inscribed Similar Triangles
Real World: Inscribed Similar Triangles
7.11: Inscribed Similar Triangles is shared under a CK-12 license and was authored, remixed, and/or curated by CK-12 Foundation via source content that was edited to conform to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
https://www.askiitians.com/forums/9-grade-maths/pl-give-the-answer-with-steps-see-the-following-a_239479.htm | # pl give the answer with steps, see the following attachment. thanks
Arun
25758 Points
3 years ago
Dear student
Take the LCM
hence you will get
(5 + 1 - 2 sqrt 5 + 5 + 1 + 2 sqrt 5) / (5 - 1)
= 12/4 = 3
Hence a = 3 and b = 0
Piyush Kumar Behera
436 Points
3 years ago
As the previous solution was not illustrated properly, here is a better solution:
$\frac{\sqrt{5}-1}{\sqrt{5}+1}+\frac{\sqrt{5}+1}{\sqrt{5}-1}$
Taking LCM of the above expression we get
$=\frac{(\sqrt{5}-1)^{2}+(\sqrt{5}+1)^{2}}{(\sqrt{5}-1)(\sqrt{5}+1)}$
$= \frac{5+1-2\sqrt{5}+5+1+2\sqrt{5}}{(\sqrt{5})^{2}-1^{2}}$
$=\frac{12}{5-1}$
a = 12/4 = 3
b = 0
So the answer is a = 3 and b = 0. | 2022-10-05 16:16:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5559103488922119, "perplexity": 2574.1351988635683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00180.warc.gz"} |
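The simplification can also be confirmed numerically. A quick Python check (not part of the original answer):

```python
import math

# Numerical check of the simplification above:
# (sqrt5 - 1)/(sqrt5 + 1) + (sqrt5 + 1)/(sqrt5 - 1) = 3,
# i.e. a = 3 and b = 0 when the result is written as a + b*sqrt(5).
s5 = math.sqrt(5)
value = (s5 - 1) / (s5 + 1) + (s5 + 1) / (s5 - 1)
print(round(value, 10))  # 3.0
```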
https://labs.tib.eu/arxiv/?author=A.%20Sternberg | • ### SHINING, A Survey of Far Infrared Lines in Nearby Galaxies. II: Line-Deficit Models, AGN impact, [CII]-SFR Scaling Relations, and Mass-Metallicity Relation in (U)LIRGS(1803.04422)
March 12, 2018 astro-ph.GA
The SHINING survey (Paper I; Herrera-Camus et al. 2018) offers a great opportunity to study the properties of the ionized and neutral media of galaxies from prototypical starbursts and active galactic nuclei (AGN) to heavily obscured objects. Based on Herschel/PACS observations of the main far-infrared (FIR) fine-structure lines, in this paper we analyze the physical mechanisms behind the observed line deficits in galaxies, the apparent offset of luminous infrared galaxies (LIRGs) from the mass-metallicity relation, and the scaling relations between [CII] 158 $\mu$m line emission and star formation rate (SFR). Based on a toy model and the Cloudy code, we conclude that the increase in the ionization parameter with FIR surface brightness can explain the observed decrease in the line-to-FIR continuum ratio of galaxies. In the case of the [CII] line, the increase in the ionization parameter is accompanied by a reduction in the photoelectric heating efficiency and the inability of the line to track the increase in the FUV radiation field as galaxies become more compact and luminous. In the central $\sim$kiloparsec regions of AGN galaxies we observe a significant increase in the [OI] 63 $\mu$m/[CII] line ratio; the AGN impact on the line-to-FIR ratios fades on global scales. Based on extinction-insensitive metallicity measurements of LIRGs we confirm that they lie below the mass-metallicity relation, but the offset is smaller than those reported in studies that use optical-based metal abundances. Finally, we present scaling relations between [CII] emission and SFR in the context of the main-sequence of star-forming galaxies.
### SHINING, A Survey of Far Infrared Lines in Nearby Galaxies. I: Survey Description, Observational Trends, and Line Diagnostics (arXiv:1803.04419)
March 12, 2018 astro-ph.GA
We use the Herschel/PACS spectrometer to study the global and spatially resolved far-infrared (FIR) fine-structure line emission in a sample of 52 galaxies that constitute the SHINING survey. These galaxies include star-forming, active-galactic nuclei (AGN), and luminous infrared galaxies (LIRGs). We find an increasing number of galaxies (and kiloparsec size regions within galaxies) with low line-to-FIR continuum ratios as a function of increasing FIR luminosity ($L_{\mathrm{FIR}}$), dust infrared color, $L_{\mathrm{FIR}}$ to molecular gas mass ratio ($L_{\mathrm{FIR}}/M_{\mathrm{mol}}$), and FIR surface brightness ($\Sigma_{\mathrm{FIR}}$). The correlations between the [CII]/FIR or [OI]/FIR ratios with $\Sigma_{\mathrm{FIR}}$ are remarkably tight ($\sim0.3$ dex scatter over almost four orders of magnitude in $\Sigma_{\mathrm{FIR}}$). We observe that galaxies with $L_{\mathrm{FIR}}/M_{\mathrm{mol}} \gtrsim 80\,L_{\odot}\,M_{\odot}^{-1}$ and $\Sigma_{\mathrm{FIR}}\gtrsim10^{11}$ $L_{\odot}$ kpc$^{-2}$ tend to have weak fine-structure line-to-FIR continuum ratios, and that LIRGs with infrared sizes $\gtrsim1$ kpc have line-to-FIR ratios comparable to those observed in typical star-forming galaxies. We analyze the physical mechanisms driving these trends in Paper II (Herrera-Camus et al. 2018). The combined analysis of the [CII], [NII], and [OIII] lines reveals that the fraction of the [CII] line emission that arises from neutral gas increases from 60% to 90% in the most active star-forming regions and that the emission originating in the ionized gas is associated with low-ionization, diffuse gas rather than with dense gas in HII regions. Finally, we report the global and spatially resolved line fluxes of the SHINING galaxies to enable the comparison and planning of future local and high-$z$ studies.
### The SINS/zC-SINF survey of z~2 galaxy kinematics: SINFONI adaptive optics-assisted data and kiloparsec-scale emission line properties (arXiv:1802.07276)
Feb. 20, 2018 astro-ph.GA
We present the "SINS/zC-SINF AO survey" of 35 star-forming galaxies, the largest sample with deep adaptive optics-assisted (AO) near-infrared integral field spectroscopy at z~2. The observations, taken with SINFONI at the Very Large Telescope, resolve the Ha and [NII] line emission and kinematics on scales of ~1.5 kpc. In stellar mass, star formation rate, rest-optical colors and size, the AO sample is representative of its parent seeing-limited sample and probes the massive (M* ~ 2x10^9 - 3x10^11 Msun), actively star-forming (SFR ~ 10-600 Msun/yr) part of the z~2 galaxy population over a wide range in colors ((U-V)_rest ~ 0.15-1.5 mag) and half-light radii (R_e,H ~ 1-8.5 kpc). The sample overlaps largely with the "main sequence" of star-forming galaxies in the same redshift range to a similar K_AB = 23 magnitude limit; it has ~0.3 dex higher median specific SFR, ~0.1 mag bluer median (U-V)_rest color, and ~10% larger median rest-optical size. We describe the observations, data reduction, and extraction of basic flux and kinematic properties. With typically 3-4 times higher resolution and 4-5 times longer integrations (up to 23hr) than the seeing-limited datasets of the same objects, the AO data reveal much more detail in morphology and kinematics. The now complete AO observations confirm the majority of kinematically-classified disks and the typically elevated disk velocity dispersions previously reported based on subsets of the data. We derive typically flat or slightly negative radial [NII]/Ha gradients, with no significant trend with global galaxy properties, kinematic nature, or the presence of an AGN. Azimuthal variations in [NII]/Ha are seen in several sources and are associated with ionized gas outflows, and possible more metal-poor star-forming clumps or small companions. [Abridged]
### PHIBSS: Unified Scaling Relations of Gas Depletion Time and Molecular Gas Fractions (arXiv:1702.01140)
Jan. 11, 2018 astro-ph.GA
This paper provides an update of our previous scaling relations (Genzel et al. 2015) between galaxy integrated molecular gas masses, stellar masses and star formation rates, in the framework of the star formation main-sequence (MS), with the main goal to test for possible systematic effects. For this purpose our new study combines three independent methods of determining molecular gas masses from CO line fluxes, far-infrared dust spectral energy distributions, and ~1mm dust photometry, in a large sample of 1444 star forming galaxies (SFGs) between z=0 and 4. The sample covers the stellar mass range log(M*/M_solar)=9.0-11.8, and star formation rates relative to that on the MS, delta_MS=SFR/SFR(MS), from 10^{-1.3} to 10^{2.2}. Our most important finding is that all data sets, despite the different techniques and analysis methods used, follow the same scaling trends, once method-to-method zero point offsets are minimized and uncertainties are properly taken into account. The molecular gas depletion time t_depl, defined as the ratio of molecular gas mass to star formation rate, scales as (1+z)^{-0.6} x (delta_MS)^{-0.44}, and is only weakly dependent on stellar mass. The ratio of molecular-to-stellar mass mu_gas depends on (1+z)^{2.5} x (delta_MS)^{0.52} x (M*)^{-0.36}, which tracks the evolution of the specific star formation rate. The redshift dependence of mu_gas requires a curvature term, as may the mass-dependences of t_depl and mu_gas. We find no or only weak correlations of t_depl and mu_gas with optical size R or surface density once one removes the above scalings, but we caution that optical sizes may not be appropriate for the high gas and dust columns at high-z.
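As a hedged illustration, the depletion-time scaling quoted in this abstract can be written as a one-line function. Only the exponents come from the abstract; the normalization `t0` is a placeholder assumption, and the weak stellar-mass dependence is ignored:

```python
# Sketch of t_depl ∝ (1+z)^(-0.6) * (delta_MS)^(-0.44) from the abstract above.
# t0 is an assumed normalization (the depletion time at z = 0 on the main
# sequence), NOT a value taken from the paper.
def t_depl_relative(z: float, delta_ms: float, t0: float = 1.0) -> float:
    """Molecular gas depletion time in units of t0."""
    return t0 * (1.0 + z) ** -0.6 * delta_ms ** -0.44

# A main-sequence galaxy (delta_MS = 1) at z = 2 has a depletion time
# (1+2)^-0.6 ≈ 0.52 times the z = 0 value.
print(t_depl_relative(2.0, 1.0))
```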
### LLAMA: Nuclear stellar properties of Swift BAT AGN and matched inactive galaxies (arXiv:1710.04098)
Oct. 11, 2017 astro-ph.GA
In a complete sample of local 14-195 keV selected AGNs and inactive galaxies, matched by their host galaxy properties, we study the spatially resolved stellar kinematics and luminosity distributions at near-infrared wavelengths on scales of 10-150 pc, using SINFONI on the VLT. In this paper, we present the first half of the sample, which comprises 13 galaxies, 8 AGNs and 5 inactive galaxies. The stellar velocity fields show a disk-like rotating pattern, for which the kinematic position angle is in agreement with the photometric position angle obtained from large scale images. For this set of galaxies, the stellar surface brightness of the inactive galaxy sample is generally comparable to the matched sample of AGN but extends to lower surface brightness. After removal of the bulge contribution, we find a nuclear stellar light excess with an extended nuclear disk structure, and which exhibits a size-luminosity relation. While we expect the excess luminosity to be associated with a dynamically cooler young stellar population, we do not typically see a matching drop in dispersion. This may be because these galaxies have pseudo-bulges in which the intrinsic dispersion increases towards the centre. And although the young stars may have an impact in the observed kinematics, their fraction is too small to dominate over the bulge and compensate the increase in dispersion at small radii, so no dispersion drop is seen. Finally, we find no evidence for a difference in the stellar kinematics and nuclear stellar luminosity excess between these active and inactive galaxies.
### LLAMA: Normal star formation efficiencies of molecular gas in the centres of luminous Seyfert galaxies (arXiv:1710.04224)
Oct. 11, 2017 astro-ph.GA
Using new APEX and JCMT spectroscopy of the CO 2-1 line, we undertake a controlled study of cold molecular gas in moderately luminous Active Galactic Nuclei (AGN) and inactive galaxies from the Luminous Local AGN with Matched Analogs (LLAMA) survey. We use spatially resolved infrared photometry of the LLAMA galaxies from 2MASS, WISE, IRAS & Herschel, corrected for nuclear emission using multi-component spectral energy distribution (SED) fits, to examine the dust-reprocessed star-formation rates (SFRs), molecular gas fractions and star formation efficiencies (SFEs) over their central 1 - 3 kpc. We find that the gas fractions and central SFEs of both active and inactive galaxies are similar when controlling for host stellar mass and morphology (Hubble type). The equivalent central molecular gas depletion times are consistent with the discs of normal spiral galaxies in the local Universe. Despite energetic arguments that the AGN in LLAMA should be capable of disrupting the observable cold molecular gas in their central environments, our results indicate that nuclear radiation only couples weakly with this phase. We find a mild preference for obscured AGN to contain higher amounts of central molecular gas, which suggests a connection between AGN obscuration and the gaseous environment of the nucleus. Systems with depressed SFEs are not found among the LLAMA AGN. We speculate that the processes that sustain the collapse of molecular gas into dense pre-stellar cores may also be a prerequisite for the inflow of material on to AGN accretion disks.
### ISM conditions in z~0.2 Lyman-Break Analogs (arXiv:1706.04107)
June 13, 2017 astro-ph.GA
We present an analysis of far--infrared (FIR) [CII] and [OI] fine structure line and continuum observations obtained with $Herschel$/PACS, and CO(1-0) observations obtained with the IRAM Plateau de Bure Interferometer, of Lyman Break Analogs (LBAs) at $z\sim 0.2$. The principal aim of this work is to determine the typical ISM properties of $z\sim 1-2$ Main Sequence (MS) galaxies, with stellar masses between $10^{9.5}$ and $10^{11}$ $M_{\odot}$, which are currently not easily detectable in all these lines even with ALMA and NOEMA. We perform PDR modeling and apply different IR diagnostics to derive the main physical parameters of the FIR emitting gas and dust and we compare the derived ISM properties to those of galaxies on and above the MS at different redshifts. We find that the ISM properties of LBAs are quite extreme (low gas temperature, high density and thermal pressure) with respect to those found in local normal spirals and more active local galaxies. LBAs have no [CII] deficit despite having the high specific star formation rates (sSFRs) typical of starbursts. Although LBAs lie above the local MS, we show that their ISM properties are more similar to those of high-redshift MS galaxies than of local galaxies above the main sequence. This data set represents an important reference for planning future ALMA [CII] observations of relatively low-mass MS galaxies at the epoch of the peak of the cosmic star formation.
### Strongly baryon-dominated disk galaxies at the peak of galaxy formation ten billion years ago (arXiv:1703.04310)
March 13, 2017 astro-ph.GA
In cold dark matter cosmology, the baryonic components of galaxies are thought to be mixed with and embedded in non-baryonic and non-relativistic dark matter, which dominates the total mass of the galaxy and its dark matter halo. In the local Universe, the mass of dark matter within a galactic disk increases with disk radius, becoming appreciable and then dominant in the outer, baryonic regions of the disks of star-forming galaxies. This results in rotation velocities of the visible matter within the disk that are constant or increasing with disk radius. Comparison between the dynamical mass and the sum of stellar and cold gas mass at the peak epoch of galaxy formation, inferred from ancillary data, suggests high baryon fractions in the inner, star-forming regions of the disks. Although this implied baryon fraction may be larger than in the local Universe, the systematic uncertainties (stellar initial mass function, calibration of gas masses) render such comparisons inconclusive in terms of the mass of dark matter. Here we report rotation curves for the outer disks of six massive star-forming galaxies, and find that the rotation velocities are not constant, but decrease with radius. We propose that this trend arises because of two main factors: first, a large fraction of the massive, high-redshift galaxy population was strongly baryon dominated, with dark matter playing a smaller part than in the local Universe; and second, the large velocity dispersion in high-redshift disks introduces a substantial pressure term that leads to a decrease in rotation velocity with increasing radius. The effect of both factors appears to increase with redshift. Qualitatively, the observations suggest that baryons in the early Universe efficiently condensed at the centres of dark matter halos when gas fractions were high, and dark matter was less concentrated. [Abridged]
### Deriving a multivariate CO-to-H$_2$ conversion function using the [CII]/CO(1-0) ratio and its application to molecular gas scaling relations (arXiv:1702.03888)
Feb. 10, 2017 astro-ph.GA
We present Herschel PACS observations of the [CII] 158 micron emission line in a sample of 24 intermediate mass (9<logM$_\ast$/M$_\odot$<10) and low metallicity (0.4< Z/Z$_\odot$<1.0) galaxies from the xCOLD GASS survey. Combining them with IRAM CO(1-0) measurements, we establish scaling relations between integrated and molecular region [CII]/CO(1-0) luminosity ratios as a function of integrated galaxy properties. A Bayesian analysis reveals that only two parameters, metallicity and offset from the star formation main sequence, $\Delta$MS, are needed to quantify variations in the luminosity ratio; metallicity describes the total dust content available to shield CO from UV radiation, while $\Delta$MS describes the strength of this radiation field. We connect the [CII]/CO luminosity ratio to the CO-to-H$_2$ conversion factor and find a multivariate conversion function $\alpha_{CO}$, which can be used up to z~2.5. This function depends primarily on metallicity, with a second order dependence on $\Delta$MS. We apply this to the full xCOLD GASS and PHIBSS1 surveys and investigate molecular gas scaling relations. We find a flattening of the relation between gas mass fraction and stellar mass at logM$_\ast$/M$_\odot$<10. While the molecular gas depletion time varies with sSFR, it is mostly independent of mass, indicating that the low L$_{CO}$/SFR ratios long observed in low mass galaxies are entirely due to photodissociation of CO, and not to an enhanced star formation efficiency.
### The Role of Host Galaxy for the Environmental Dependence of Active Nuclei in Local Galaxies (arXiv:1610.09890)
Jan. 6, 2017 astro-ph.GA
We discuss the environment of local hard X-ray selected active galaxies, with reference to two independent group catalogues. We find that the fraction of these AGN in S0 host galaxies decreases strongly as a function of galaxy group size (halo mass) - which contrasts with the increasing fraction of galaxies of S0 type in denser environments. However, there is no evidence for an environmental dependence of AGN in spiral galaxies. Because most AGN are found in spiral galaxies, this dilutes the signature of environmental dependence for the population as a whole. We argue that the differing results for AGN in disk-dominated and bulge-dominated galaxies is related to the source of the gas fuelling the AGN, and so may also impact the luminosity function, duty cycle, and obscuration. We find that there is a significant difference in the luminosity function for AGN in spiral and S0 galaxies, and tentative evidence for some difference in the fraction of obscured AGN.
### Constraints on the Broad Line Region Properties and Extinction in Local Seyferts (arXiv:1607.07308)
July 25, 2016 astro-ph.GA
We use high spectral resolution (R > 8000) data covering 3800-13000 Å to study the physical conditions of the broad line region (BLR) of nine nearby Seyfert 1 galaxies. Up to six broad HI lines are present in each spectrum. A comparison - for the first time using simultaneous optical to near-infrared observations - to photoionization calculations with our devised simple scheme yields the extinction to the BLR at the same time as determining the density and photon flux, and hence distance from the nucleus, of the emitting gas. This points to a typical density for the HI emitting gas of 10$^{11}$cm$^{-3}$ and shows that a significant amount of this gas lies at regions near the dust sublimation radius, consistent with theoretical predictions. We also confirm that in many objects the line ratios are far from case B, the best-fit intrinsic broad-line H$\alpha$/H$\beta$ ratios being in the range 2.5-6.6 as derived with our photoionization modeling scheme. The extinction to the BLR, based on independent estimates from HI and HeII lines, is A$_V$ $\le$ 3 for Seyfert 1-1.5s, while Seyfert 1.8-1.9s have A$_V$ in the range 4-8. A comparison of the extinction towards the BLR and narrow line region (NLR) indicates that the structure obscuring the BLR exists on scales smaller than the NLR. This could be the dusty torus, but dusty nuclear spirals or filaments could also be responsible. The ratios between the X-ray absorbing column N$_H$ and the extinction to the BLR are consistent with the Galactic gas-to-dust ratio if N$_H$ variations are considered.
### The angular momentum distribution and baryon content of star forming galaxies at z~1-3 (arXiv:1510.03262)
May 19, 2016 astro-ph.GA
We analyze the angular momenta of massive star forming galaxies (SFGs) at the peak of the cosmic star formation epoch (z~0.8-2.6). Our sample of ~360 log(M*/Msun) ~ 9.3-11.8 SFGs is mainly based on the KMOS3D and SINS/zC-SINF surveys of H$\alpha$ kinematics, and collectively provides a representative subset of the massive star forming population. The inferred halo scale angular momentum distribution is broadly consistent with that theoretically predicted for their dark matter halos, in terms of mean spin parameter <$\lambda$> ~ 0.037 and its dispersion ($\sigma_{log(\lambda)}$~0.2). Spin parameters correlate with the disk radial scale, and with their stellar surface density, but do not depend significantly on halo mass, stellar mass, or redshift. Our data thus support the long-standing assumption that on average, even at high redshifts, the specific angular momentum of disk galaxies reflects that of their dark matter halos (j_d = j_DM). The lack of correlation between $\lambda$ x (j_d/j_DM) and the nuclear stellar density $\Sigma_{*}$(1kpc) favors a scenario where disk-internal angular momentum redistribution leads to "compaction" inside massive high-redshift disks. For our sample, the inferred average stellar-to-dark matter mass ratio is ~2%, consistent with abundance matching results. Including the molecular gas, the total baryonic disk-to-dark matter mass ratio is ~5% for halos near $10^{12}$ Msun, which corresponds to 31% of the cosmologically available baryons, implying that high-redshift disks are strongly baryon dominated.
### Broad [CII] line wings as tracer of molecular and multi-phase outflows in infrared bright galaxies (arXiv:1604.00185)
April 1, 2016 astro-ph.GA
We report a tentative correlation between the outflow characteristics derived from OH absorption at $119\,\mu\text{m}$ and [CII] emission at $158\,\mu\text{m}$ in a sample of 22 local and bright ultraluminous infrared galaxies (ULIRGs). For this sample we investigate whether [CII] broad wings are a good tracer of molecular outflows, and how the two tracers are connected. Fourteen objects in our sample have a broad wing component as traced by [CII], and all of these also show OH119 absorption indicative of an outflow (in 1 case an inflow). The other eight cases, where no broad [CII] component was found, are predominantly objects with no OH outflow or a low-velocity ($\leq 100\,\text{km s}^{-1}$) OH outflow. The full width at half maximum (FWHM) of the broad [CII] component shows a trend with the OH119 blue-shifted velocity, although with significant scatter. Moreover, and despite large uncertainties, the outflow masses derived from OH and broad [CII] show a 1:1 relation. The main conclusion is therefore that broad [CII] wings can be used to trace molecular outflows. This may be particularly relevant at high redshift, where the usual tracers of molecular gas (like low-J CO lines) become hard to observe. Additionally, observations of blue-shifted Na I D $\lambda\lambda 5890,5896$ absorption are available for ten of our sources. Outflow velocities of Na I D show a trend with OH velocity and broad [CII] FWHM. These observations suggest that the atomic and molecular gas phases of the outflow are connected.
### Thick Disks, and an Outflow, of Dense Gas in the Nuclei of Nearby Seyfert Galaxies (arXiv:1602.06452)
Feb. 20, 2016 astro-ph.GA
We discuss the dense molecular gas in central regions of nearby Seyfert galaxies, and report new arcsec resolution observations of HCN(1-0) and HCO$^+$(1-0) for 3 objects. In NGC 3079 the lines show complex profiles as a result of self-absorption and saturated continuum absorption. H$^{13}$CN reveals the continuum absorption profile, with a peak close to the galaxy's systemic velocity that traces disk rotation, and a second feature with a blue wing extending to $-350$km s$^{-1}$ that most likely traces a nuclear outflow. The morphological and spectral properties of the emission lines allow us to constrain the dense gas dynamics. We combine our kinematic analysis for these 3 objects, as well as another with archival data, with a previous comparable analysis of 4 other objects, to create a sample of 8 Seyferts. In 7 of these, the emission line kinematics imply thick disk structures on radial scales of $\sim$100pc, suggesting such structures are a common occurrence. We find a relation between the circumnuclear $L_{\rm HCN}$ and $M_{\rm dyn}$ that can be explained by a gas fraction of 10% and a conversion factor $\alpha_{\rm HCN} \sim 10$ between gas mass and HCN luminosity. Finally, adopting a different perspective to probe the physical properties of the gas around AGN, we report on an analysis of molecular line ratios which indicates that the clouds in this region are not self-gravitating.
### On the relation of optical obscuration and X-ray absorption in Seyfert galaxies (arXiv:1511.05566)
Jan. 12, 2016 astro-ph.GA
The optical classification of a Seyfert galaxy and whether it is considered X-ray absorbed are often used interchangeably. But there are many borderline cases and also numerous examples where the optical and X-ray classifications appear to be in conflict. In this article we re-visit the relation between optical obscuration and X-ray absorption in AGNs. We make use of our "dust color" method (Burtscher et al. 2015) to derive the optical obscuration A_V and consistently estimated X-ray absorbing columns using 0.3--150 keV spectral energy distributions. We also take into account the variable nature of the neutral gas column N_H and derive the Seyfert sub-classes of all our objects in a consistent way. We show in a sample of 25 local, hard-X-ray detected Seyfert galaxies (log L_X / (erg/s) ~ 41.5 - 43.5) that there can actually be a good agreement between optical and X-ray classification. If Seyfert types 1.8 and 1.9 are considered unobscured, the threshold between X-ray unabsorbed and absorbed should be chosen at a column N_H = 10^22.3 / cm^2 to be consistent with the optical classification. We find that N_H is related to A_V and that the N_H/A_V ratio is approximately Galactic or higher in all sources, as indicated previously. But in several objects we also see that deviations from the Galactic ratio are only due to a variable X-ray column, showing that (1) deviations from the Galactic N_H/A_V can simply be explained by dust-free neutral gas within the broad line region in some sources, that (2) the dust properties in AGNs can be similar to Galactic dust and that (3) the dust color method is a robust way to estimate the optical extinction towards the sublimation radius in all but the most obscured AGNs.
### A deep Herschel/PACS observation of CO(40-39) in NGC 1068: a search for the molecular torus (arXiv:1508.07165)
Aug. 28, 2015 astro-ph.GA
Emission from high-J CO lines in galaxies has long been proposed as a tracer of X-ray dominated regions (XDRs) produced by AGN. Of particular interest is the question of whether the obscuring torus, which is required by AGN unification models, can be observed via high-J CO cooling lines. Here we report on the analysis of a deep Herschel-PACS observation of an extremely high J CO transition (40-39) in the Seyfert 2 galaxy NGC 1068. The line was not detected, with a derived 3$\sigma$ upper limit of $2 \times 10^{-17}\,\text{W}\,\text{m}^{-2}$. We apply an XDR model in order to investigate whether the upper limit constrains the properties of a molecular torus in NGC 1068. The XDR model predicts the CO Spectral Line Energy Distributions for various gas densities and illuminating X-ray fluxes. In our model, the CO(40-39) upper limit is matched by gas with densities $\sim 10^{6}-10^{7}\,\text{cm}^{-3}$, located at $1.6-5\,\text{pc}$ from the AGN, with column densities of at least $10^{25}\,\text{cm}^{-2}$. At such high column densities, however, dust absorbs most of the CO(40-39) line emission at $\lambda = 65.69\, \mu$m. Therefore, even if NGC 1068 has a molecular torus which radiates in the CO(40-39) line, the dust can attenuate the line emission to below the PACS detection limit. The upper limit is thus consistent with the existence of a molecular torus in NGC 1068. In general, we expect that the CO(40-39) is observable in only a few AGN nuclei (if at all), because of the required high gas column density, and absorption by dust.
### 500 Days of SN 2013dy: spectra and photometry from the ultraviolet to the infrared (arXiv:1504.02396)
July 30, 2015 astro-ph.SR, astro-ph.HE
SN 2013dy is a Type Ia supernova for which we have compiled an extraordinary dataset spanning from 0.1 to ~ 500 days after explosion. We present 10 epochs of ultraviolet (UV) through near-infrared (NIR) spectra with HST/STIS, 47 epochs of optical spectra (15 of them having high resolution), and more than 500 photometric observations in the BVrRiIZYJH bands. SN 2013dy has a broad and slowly declining light curve (delta m(B) = 0.92 mag), shallow Si II 6355 absorption, and a low velocity gradient. We detect strong C II in our earliest spectra, probing unburned progenitor material in the outermost layers of the SN ejecta, but this feature fades within a few days. The UV continuum of SN 2013dy, which is strongly affected by the metal abundance of the progenitor star, suggests that SN 2013dy had a relatively high-metallicity progenitor. Examining one of the largest single set of high-resolution spectra for a SN Ia, we find no evidence of variable absorption from circumstellar material. Combining our UV spectra, NIR photometry, and high-cadence optical photometry, we construct a bolometric light curve, showing that SN 2013dy had a maximum luminosity of 10.0^{+4.8}_{-3.8} * 10^{42} erg/s. We compare the synthetic light curves and spectra of several models to SN 2013dy, finding that SN 2013dy is in good agreement with a solar-metallicity W7 model.
### High Resolution Imaging of PHIBSS z~2 Main Sequence Galaxies in CO J=1-0 (arXiv:1507.05652)
July 22, 2015 astro-ph.GA
We present Karl G. Jansky Very Large Array observations of the CO J=1-0 transition in a sample of four $z\sim2$ main sequence galaxies. These galaxies are in the blue sequence of star-forming galaxies at their redshift, and are part of the IRAM Plateau de Bure HIgh-$z$ Blue Sequence Survey (PHIBSS) which imaged them in CO J=3-2. Two galaxies are imaged here at high signal-to-noise, allowing determinations of their disk sizes, line profiles, molecular surface densities, and excitation. Using these and published measurements, we show that the CO and optical disks have similar sizes in main-sequence galaxies, and in the galaxy where we can compare CO J=1-0 and J=3-2 sizes we find these are also very similar. Assuming a Galactic CO-to-H$_2$ conversion, we measure surface densities of $\Sigma_{mol}\sim1200$ M$_\odot$pc$^{-2}$ in projection and estimate $\Sigma_{mol}\sim500-900$ M$_\odot$pc$^{-2}$ deprojected. Finally, our data yields velocity-integrated Rayleigh-Jeans brightness temperature line ratios $r_{31}$ that are approximately unity. In addition to the similar disk sizes, the very similar line profiles in J=1-0 and J=3-2 indicate that both transitions sample the same kinematics, implying that their emission is coextensive. We conclude that in these two main sequence galaxies there is no evidence for significant excitation gradients or a large molecular reservoir that is diffuse or cold and not involved in active star-formation. We suggest that $r_{31}$ in very actively star-forming galaxies is likely an indicator of how well mixed the star formation activity and the molecular reservoir are.
• ### Insights on the Dusty Torus and Neutral Torus from Optical and X-ray Obscuration in a Complete Volume Limited Hard X-ray AGN Sample(1505.00536)
May 4, 2015 astro-ph.GA
We describe a complete volume limited sample of nearby active galaxies selected by their 14-195keV luminosity, and outline its rationale for studying the mechanisms regulating gas inflow and outflow. We describe also a complementary sample of inactive galaxies, selected to match the AGN host galaxy properties. The active sample appears to have no bias in terms of AGN type, the only difference being the neutral absorbing column which is two orders of magnitude greater for the Seyfert 2s. In the luminosity range spanned by the sample, log L_{14-195keV} [erg/s] = 42.4-43.7, the optically obscured and X-ray absorbed fractions are 50-65%. The similarity of these fractions to more distant spectroscopic AGN samples, although over a limited luminosity range, suggests that the torus does not strongly evolve with redshift. Our sample confirms that X-ray unabsorbed Seyfert 2s are rare, comprising not more than a few percent of the Seyfert 2 population. At higher luminosities, the optically obscured fraction decreases (as expected for the increasing dust sublimation radius), but the X-ray absorbed fraction changes little. We argue that the cold X-ray absorption in these Seyfert 1s can be accounted for by neutral gas in clouds that also contribute to the broad line region (BLR) emission; and suggest that a geometrically thick neutral gas torus co-exists with the BLR and bridges the gap to the dusty torus.
• ### Obscuration in AGNs: near-infrared luminosity relations and dust colors(1504.01104)
April 5, 2015 astro-ph.GA
We combine two approaches to isolate the AGN luminosity at near-infrared wavelengths and relate the near-IR pure AGN luminosity to other tracers of the AGN. Using integral-field spectroscopic data of an archival sample of 51 local AGNs, we estimate the fraction of non-stellar light by comparing the nuclear equivalent width of the stellar 2.3 micron CO absorption feature with the intrinsic value for each galaxy. We compare this fraction to that derived from a spectral decomposition of the integrated light in the central arc second and find them to be consistent with each other. Using our estimates of the near-IR AGN light, we find a strong correlation with presumably isotropic AGN tracers. We show that a significant offset exists between type 1 and type 2 sources in the sense that type 1 sources are 7 (10) times brighter in the near-IR at log L_MIR = 42.5 (log L_X = 42.5). These offsets only become clear when treating infrared type 1 sources as type 1 AGNs. All AGNs have very red near-to-mid-IR dust colors. This, as well as the range of observed near-IR temperatures, can be explained with a simple model with only two free parameters: the obscuration to the hot dust and the ratio between the warm and hot dust areas. We find obscurations of A_V (hot) = 5 - 15 mag for infrared type 1 sources and A_V (hot) = 15 - 35 mag for type 2 sources. The ratio of hot dust to warm dust areas of about 1000 is nicely consistent with the ratio of radii of the respective regions as found by infrared interferometry.
• ### High-J CO SLEDs in nearby infrared bright galaxies observed by Herschel-PACS(1410.0006)
Jan. 20, 2015 astro-ph.GA
We report the detection of far-infrared (FIR) CO rotational emission from nearby active galactic nuclei (AGN) and starburst galaxies, as well as several merging systems and Ultra-Luminous Infrared Galaxies (ULIRGs). Using Herschel-PACS, we have detected transitions in the J$_{upp}$ = 14 - 20 range ($\lambda \sim$ 130 - 185 $\mu$m, $\nu \sim$ 1612 - 2300 GHz) with upper limits on (and in two cases, detections of) CO line fluxes up to J$_{upp}$ = 30. The PACS CO data obtained here provide the first well-sampled FIR extragalactic CO SLEDs for this range, and will be an essential reference for future high redshift studies. We find a large range in the overall SLED shape, even amongst galaxies of similar type, demonstrating the uncertainties in relying solely on high-J CO diagnostics to characterize the excitation source of a galaxy. Combining our data with low-J line intensities taken from the literature, we present a CO ratio-ratio diagram and discuss its potential diagnostic value in distinguishing excitation sources and physical properties of the molecular gas. The position of a galaxy on such a diagram is less a signature of its excitation mechanism, than an indicator of the presence (or absence) of warm, dense molecular gas. We then quantitatively analyze the CO emission from a subset of the detected sources with Large Velocity Gradient (LVG) radiative transfer models to fit the CO SLEDs. Using both single-component and two-component LVG models to fit the kinetic temperature, velocity gradient, number density and column density of the gas, we derive the molecular gas mass and the corresponding CO-to-H$_2$ conversion factor, $\alpha_{CO}$, for each respective source. For the ULIRGs we find $\alpha$ values in the canonical range 0.4 - 5 M$_\odot$/(K kms$^{-1}$pc$^2$), while for the other objects, $\alpha$ varies between 0.2 and 14. Finally, we compare our best-fit LVG model ...
• ### Combined CO & Dust Scaling Relations of Depletion Time and Molecular Gas Fractions with Cosmic Time, Specific Star Formation Rate and Stellar Mass(1409.1171)
Dec. 5, 2014 astro-ph.GA
We combine molecular gas masses inferred from CO emission in 500 star forming galaxies (SFGs) between z=0 and 3, from the IRAM-COLDGASS, PHIBSS1/2 and other surveys, with gas masses derived from Herschel far-IR dust measurements in 512 galaxy stacks over the same stellar mass/redshift range. We constrain the scaling relations of molecular gas depletion time scale (tdepl) and gas to stellar mass ratio (Mmolgas/M*) of SFGs near the star formation main-sequence with redshift, specific star formation rate (sSFR) and stellar mass (M*). The CO- and dust-based scaling relations agree remarkably well. This suggests that the CO-H2 mass conversion factor varies little within 0.6dex of the main sequence (sSFR(ms,z,M*)), and less than 0.3dex throughout this redshift range. This study builds on and strengthens the results of earlier work. We find that tdepl scales as (1+z)^-0.3 *(sSFR/sSFR(ms,z,M*))^-0.5, with little dependence on M*. The resulting steep redshift dependence of Mmolgas/M* ~(1+z)^3 mirrors that of the sSFR and probably reflects the gas supply rate. The decreasing gas fractions at high M* are driven by the flattening of the SFR-M* relation. Throughout the redshift range probed a larger sSFR at constant M* is due to a combination of an increasing gas fraction and a decreasing depletion time scale. As a result galaxy integrated samples of the Mmolgas-SFR rate relation exhibit a super-linear slope, which increases with the range of sSFR. With these new relations it is now possible to determine Mmolgas with an accuracy of 0.1dex in relative terms, and 0.2dex including systematic uncertainties.
• ### AGC198606: A gas-bearing dark matter minihalo?(1412.0286)
Nov. 30, 2014 astro-ph.CO, astro-ph.GA
We present neutral hydrogen (HI) imaging observations with the Westerbork Synthesis Radio Telescope of AGC198606, an HI cloud discovered in the ALFALFA 21cm survey. This object is of particular note as it is located 16 km/s and 1.2 degrees from the gas-bearing ultra-faint dwarf galaxy Leo T while having a similar HI linewidth and approximately twice the flux density. The HI imaging observations reveal a smooth, undisturbed HI morphology with a full extent of 23'x16' at the 5x10^18 atoms cm^-2 level. The velocity field of AGC198606 shows ordered motion with a gradient of ~25 km/s across ~20'. The global velocity dispersion is 9.3 km/s with no evidence for a narrow spectral component. No optical counterpart to AGC198606 is detected. The distance to AGC198606 is unknown, and we consider several different scenarios: physical association with Leo T, a minihalo at a distance of ~150 kpc based on the models of Faerman et al. (2013), and a cloud in the Galactic halo. At a distance of 420 kpc, AGC198606 would have an HI mass of 6.2x10^5 Msun, an HI radius of 1.4 kpc, and a dynamical mass within the HI extent of 1.5x10^8 Msun.
• ### Evidence for Wide-Spread AGN Driven Outflows in the Most Massive z~1-2 Star Forming Galaxies(1406.0183)
Sept. 5, 2014 astro-ph.CO, astro-ph.GA
In this paper we follow up on our previous detection of nuclear ionized outflows in the most massive (log(M*/Msun) >= 10.9) z~1-3 star-forming galaxies (Forster Schreiber et al.), by increasing the sample size by a factor of six (to 44 galaxies above log(M*/Msun) >= 10.9) from a combination of the SINS/zC-SINF, LUCI, GNIRS, and KMOS^3D spectroscopic surveys. We find a fairly sharp onset of the incidence of broad nuclear emission (FWHM in the Ha, [NII], and [SII] lines ~ 450-5300 km/s), with large [NII]/Ha ratios, above log(M*/Msun) ~ 10.9, with about two thirds of the galaxies in this mass range exhibiting this component. Broad nuclear components near and above the Schechter mass are similarly prevalent above and below the main sequence of star-forming galaxies, and at z~1 and ~2. The line ratios of the nuclear component are fit by excitation from active galactic nuclei (AGN), or by a combination of shocks and photoionization. The incidence of the most massive galaxies with broad nuclear components is at least as large as that of AGNs identified by X-ray, optical, infrared or radio indicators. The mass loading of the nuclear outflows is near unity. Our findings provide compelling evidence for powerful, high-duty cycle, AGN-driven outflows near the Schechter mass, and acting across the peak of cosmic galaxy formation.
• ### The SINS/zC-SINF survey of z~2 galaxy kinematics: Evidence for powerful AGN-driven nuclear outflows in massive star-forming galaxies(1311.2596)
March 27, 2014 astro-ph.CO
We report the detection of ubiquitous powerful nuclear outflows in massive (> 10^11 Msun) z~2 star-forming galaxies (SFGs), which are plausibly driven by an Active Galactic Nucleus (AGN). The sample consists of the eight most massive SFGs from our SINS/zC-SINF survey of galaxy kinematics with the imaging spectrometer SINFONI, six of which have sensitive high-resolution adaptive optics (AO) assisted observations. All of the objects are disks hosting a significant stellar bulge. The spectra in their central regions exhibit a broad component in Halpha and forbidden [NII] and [SII] line emission, with typical velocity FWHM ~ 1500 km/s, [NII]/Halpha ratio ~ 0.6, and intrinsic extent of 2 - 3 kpc. These properties are consistent with warm ionized gas outflows associated with Type 2 AGN, the presence of which is confirmed via independent diagnostics in half the galaxies. The data imply a median ionized gas mass outflow rate of ~ 60 Msun/yr and mass loading of ~ 3. At larger radii, a weaker broad component is detected but with lower FWHM ~ 485 km/s and [NII]/Halpha ~ 0.35, characteristic for star formation-driven outflows as found in the lower-mass SINS/zC-SINF galaxies. The high inferred mass outflow rates and frequent occurrence suggest the nuclear outflows efficiently expel gas out of the centers of the galaxies with high duty cycles, and may thus contribute to the process of star formation quenching in massive galaxies. Larger samples at high masses will be crucial to confirm the importance and energetics of the nuclear outflow phenomenon, and its connection to AGN activity and bulge growth. 
https://math.stackexchange.com/questions/3154219/polynomial-p-equals-to-0

# Polynomial P equals to 0
I have simple questions about polynomials (when I say "polynomials", I mean "formal polynomials", not polynomial mappings).
It might be a little bit strange, but I don't really understand why if we have a polynomial $$P = a_{0} + a_{1}X + a_{2}X^{2} + \cdots + a_{n}X^{n} \in \mathbb{R}[X]$$ (where $$n \in \mathbb{N}^{\ast}$$), then: $$P = 0_{\mathbb{R}[X]} \Rightarrow \forall i \in \{0, 1, \ldots, n\}, a_{i} = 0$$
Also, if I write $$P = a_{0} + a_{1}X + a_{2}'X^{2} + a_{2}X^{2} + \cdots + a_{n}X^{n}$$, can I conclude that: $$P = 0_{\mathbb{R}[X]} \Rightarrow (\forall i \in \{0, 1, \ldots, n\}, a_{i} = 0) \wedge a_{2}' = 0$$ or just that: $$P = 0_{\mathbb{R}[X]} \Rightarrow (\forall i \in \{0, 1, \ldots, n\} \setminus \{2\}, a_{i} = 0) \wedge a_{2} + a_{2}' = 0?$$
Thank you for your answers.
• A "formal polynomial" still has only one coefficient for each $x^i.$ So the second case is true if $a_2+a_2'=0$ and $a_i=0$ for $i\neq 2.$ – Thomas Andrews Mar 19 at 15:45
• If you write your $P$ as $a_{0} + a_{1}X +( a_{2}' + a_{2})X^{2} + ... + a_{n}X^{n}$, the question is settled. – Yves Daoust Mar 19 at 15:48
• $0_{\mathbb{R}[X]}$ is the polynomial where each coefficient is equal to $0$ here. Thank you for your answers, I just wanted to be sure ! – deeppinkwater Mar 19 at 15:51
## 2 Answers
The polynomial $$P$$ has at most $$n$$ roots unless it is equal to $$0_{\Bbb{R[X]}}$$, which by definition means every $$a_i=0$$. Since $$P$$ has an infinite number of roots, we conclude that $$a_i=0$$ for all $$i$$.
As for the second question, we may conclude only that $$a_2+a'_2=0$$.
Do you know when two polynomials are the same? Say we have $$p(x)=a_nx^n+\cdots+a_2x^2+a_1x+a_0$$ and
$$q(x)=b_mx^m+...+b_2x^2+b_1x+b_0$$
then $$p(x)= q(x)$$ iff $$m=n$$ and $$a_0=b_0,\; a_1=b_1,\; a_2=b_2,\;\ldots,\; a_n=b_n.$$
So if $$q(x)=0$$ then $$a_0=a_1=a_2=\cdots=0$$, and you cannot say $$a_2=a_2'=0$$. You can and must say $$a_2+a_2'=0$$.
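To make the answers concrete: a formal polynomial stores a single coefficient per power of $$X$$, so the two $$X^2$$ terms collapse before any comparison with $$0_{\mathbb{R}[X]}$$ takes place. A minimal sketch in plain Python (the numeric values are invented for illustration):

```python
def poly(*terms):
    """Build a formal polynomial as a dict mapping exponent -> coefficient.
    Terms with the same exponent are summed on the way in."""
    coeffs = {}
    for coeff, power in terms:
        coeffs[power] = coeffs.get(power, 0) + coeff
    return coeffs

# a_2 and a_2' are distinct names in the question, but they feed one X^2
# coefficient; here a_2 + a_2' = 0 without either being 0 individually.
a0, a1, a2_prime, a2 = 0, 0, 3, -3
P = poly((a0, 0), (a1, 1), (a2_prime, 2), (a2, 2))

# P equals the zero polynomial iff every stored coefficient is 0.
is_zero = all(c == 0 for c in P.values())
```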
https://dsp.stackexchange.com/questions/64269/orthogonality-of-filter-impulse-response-to-its-even-shift

# Orthogonality of filter impulse response to its even shift
I came across this problem but still do not know how to solve it. Could you guys give me some guidance?
Upsampling by 2 ($$U_2$$) followed by filtering by $$g$$, with operator $$G$$
And given: $$\langle g_n, g_{n-2k} \rangle_n = \delta_k$$ (Filters with impulse responses orthogonal to their even shifts)
Prove that: $$I = U_2^*G^*GU_2$$
The set of tricks for such problems is typically pretty limited; try (and that's really what solves the issue):
• Multiplying the equation with elements from your right-hand term from left and/or right, so that identity matrices form.
• Applying transposes and Hermitians until things happen.
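As a sanity check, the identity can be verified numerically by writing $$U_2$$ and $$G$$ as explicit matrices (full-convolution form, so there is no boundary truncation). The Haar lowpass filter below is only an example of a filter satisfying the orthogonality condition:

```python
import numpy as np

def upsample2_matrix(n):
    """U_2: insert a zero after every sample (maps length n -> 2n)."""
    U = np.zeros((2 * n, n))
    U[2 * np.arange(n), np.arange(n)] = 1.0
    return U

def conv_matrix(g, m):
    """G: full convolution with g (maps length m -> m + len(g) - 1)."""
    G = np.zeros((m + len(g) - 1, m))
    for j in range(m):
        G[j:j + len(g), j] = g
    return G

g = np.array([1.0, 1.0]) / np.sqrt(2)  # <g_n, g_{n-2k}>_n = delta_k holds for Haar
n = 5
U2 = upsample2_matrix(n)
G = conv_matrix(g, 2 * n)
M = U2.T @ G.T @ G @ U2   # real filter, so the adjoints are plain transposes
# M equals the identity, as claimed
```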
• I just don't know what to do with: $\langle g_n, g_{n-2k} \rangle_n = \delta_k$ – hminle Mar 2 at 0:26
• The notation you're using is unfamiliar. What is $\left < g_n,\,g_{n-2k} \right >_n = \delta_k$ supposed to mean? If you don't know, there's your problem, and it should be in your book or the lecture notes. – TimWescott Mar 2 at 1:09
• @TimWescott I'd have said that these brackets signify the signal vector space inner product between an infinite sequence $g[n]$ and its $2k$-shifted variant $g[n-2k]$. And that's always 0, unless $k=0$. – Marcus Müller Mar 2 at 9:01
https://paperswithcode.com/task/image-retrieval

# Image Retrieval
433 papers with code • 27 benchmarks • 47 datasets
Image retrieval systems aim to find images similar to a query image within an image dataset.

(Image credit: DELF)
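At its core, retrieval ranks dataset images by the similarity of their feature vectors to the query's. A minimal sketch with invented 2-D features (real systems use learned descriptors such as DELF):

```python
import numpy as np

feats = np.array([[1.0, 0.0],    # dataset image features (illustrative only)
                  [0.8, 0.6],
                  [0.0, 1.0]])
query = np.array([1.0, 0.1])     # query image feature (illustrative only)

# cosine similarity between the query and every dataset image
sims = feats @ query / (np.linalg.norm(feats, axis=1) * np.linalg.norm(query))
ranking = np.argsort(-sims)      # indices of the most similar images first
```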
# VGGFace2: A dataset for recognising faces across pose and age
23 Oct 2017
The dataset was collected with three goals in mind: (i) to have both a large number of identities and also a large number of images for each identity; (ii) to cover a large range of pose, age and ethnicity; and (iii) to minimize the label noise.
# NetVLAD: CNN architecture for weakly supervised place recognition
We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph.
# Fine-tuning CNN Image Retrieval with No Human Annotation
3 Nov 2017
We show that both hard-positive and hard-negative examples, selected by exploiting the geometry and the camera positions available from the 3D models, enhance the performance of particular-object retrieval.
# ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks

We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language.
# Learning Deep Representations of Fine-grained Visual Descriptions
State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information.
# Large-Scale Image Retrieval with Attentive Deep Local Features
We propose an attentive local feature descriptor suitable for large-scale image retrieval, referred to as DELF (DEep Local Feature).
# Looking at Outfit to Parse Clothing
4 Mar 2017
This paper extends fully-convolutional neural networks (FCN) for the clothing parsing problem.
# VSE++: Improving Visual-Semantic Embeddings with Hard Negatives
18 Jul 2017
We present a new technique for learning visual-semantic embeddings for cross-modal retrieval.
# Circle Loss: A Unified Perspective of Pair Similarity Optimization
This paper provides a pair similarity optimization viewpoint on deep feature learning, aiming to maximize the within-class similarity $s_p$ and minimize the between-class similarity $s_n$.
# Particular object retrieval with integral max-pooling of CNN activations
18 Nov 2015
Recently, image representation built upon Convolutional Neural Network (CNN) has been shown to provide effective descriptors for image search, outperforming pre-CNN features as short-vector representations.
https://meangreenmath.com/2020/01/03/predicate-logic-and-popular-culture-part-205-bob-marley/

# Predicate Logic and Popular Culture (Part 205): Bob Marley
Let $T$ be the set of all things, let $L(x)$ be the proposition “$x$ is a little thing,” and let $A(x)$ be the proposition “$x$ is going to be all right.” Translate the logical statement
$\forall x \in T(L(x) \Longrightarrow A(x))$.
This matches a line from “Three Little Birds” by Bob Marley.
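The statement can also be machine-checked over a finite toy domain. The sets below are invented for illustration; the translation is just Python's `all` applied to an implication:

```python
T = {"sunrise", "smile", "worry"}   # hypothetical set of all things
L = {"smile", "worry"}              # L(x): x is a little thing (illustrative)
A = {"smile", "worry", "sunrise"}   # A(x): x is going to be all right (illustrative)

# forall x in T: L(x) => A(x), i.e. "every little thing is going to be all right"
statement = all((x not in L) or (x in A) for x in T)
```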
Context: Our discrete mathematics course includes an introduction to predicate and propositional logic for our math majors. As you can probably guess from their names, students tend to think these concepts are dry and uninteresting even though they're very important for their development as math majors.
In an effort to make these topics more appealing, I spent a few days mining the depths of popular culture in a (likely futile) attempt to make these ideas more interesting to my students. In this series, I'd like to share what I found. Naturally, the sources that I found have varying levels of complexity, which is appropriate for students who are first learning propositional and predicate logic.
When I actually presented these in class, I either presented the logical statement and had my class guess the statement in actual English, or I gave my students the famous quote and had them translate it into predicate logic. However, for the purposes of this series, I'll just present the statement in predicate logic first.
https://civilengineering.blog/2020/08/31/classification-of-crops/

# CLASSIFICATION OF CROPS
Based upon the use and nature, the crops can be classified as given in Table 5.1.
https://www.neilkatuna.com/blog/the-core-factor-method

The Core Factor Method
May 4, 2021
I arrived in 2011 knowing next to nothing about the fixed income markets, and was immediately put to work helping another colleague build the firm's attribution and risk models. The risk oversight system that we sought to replace fit inside a single Excel notebook. It took forever to open, and, having run up against Excel's row and column limits, could not be expanded or enhanced. For a firm rapidly growing assets and expanding its management capabilities, we had to move quickly.
We struggled with linear factor selection, the process of choosing drivers of risk and return that satisfied necessary statistical properties. Whatever we did, we had to assure portfolio managers and analysts that we were capturing factor return in a way that did not seem too engineered or mathematical, but reflected some underlying fundamental property of how their markets functioned.
So we put off thinking about constructing correlation and covariance matrices, the mathematical objects that capture interactions between these factors. And when we did finally turn our attention to covariance matrix construction, we assumed we would do the simplest possible thing: take a sample correlation matrix, diagonalize it, increase its near-zero or negative eigenvalues to some small $$\epsilon > 0$$ while reducing large eigenvalues to preserve the trace, and reconstruct a modified correlation matrix.
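In NumPy, that simplest possible thing reads roughly as follows. This is a sketch: the value of $$\epsilon$$ and the dimensions are illustrative, and the trace-preserving rescale is folded into a single step.

```python
import numpy as np

def clip_eigenvalues(C, eps=1e-4):
    """Floor the eigenvalues of a sample correlation matrix at eps, rescale so
    the trace is preserved (large eigenvalues shrink), and rebuild the matrix."""
    w, V = np.linalg.eigh(C)
    w_clip = np.maximum(w, eps)
    w_clip *= w.sum() / w_clip.sum()          # preserve trace(C) = N
    C_clean = V @ np.diag(w_clip) @ V.T
    d = np.sqrt(np.diag(C_clean))             # re-impose the unit diagonal
    return C_clean / np.outer(d, d)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))             # T=20 < N=50: sample matrix is singular
C_clean = clip_eigenvalues(np.corrcoef(X, rowvar=False))
```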
While it was probably the most common technique to prepare correlation matrices for optimization---at least in 2011 when we were doing this work---the eigenvalue clipping approach suffers from some very significant drawbacks. First, it modifies correlation matrices in ways that we might struggle to justify to a portfolio manager or analyst, or really anyone who can load two timeseries in Excel and compute sample correlations and covariances themselves. Second, it creates matrices that continue to perform poorly in portfolio optimization. The mean-variance portfolio is given by the product of the inverse of a covariance matrix and projected asset class returns. When inverting a covariance matrix, small, noisy eigenvalues $$\epsilon$$ overwhelm better-estimated large eigenvalues, and the eigenvectors of such eigenvalues contribute in unexpected ways to the optimal portfolio weights.
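A two-asset toy example shows the mechanism: with one well-estimated eigenvalue and one tiny, noisy one, the unconstrained mean-variance weights $$w \propto \Sigma^{-1}\mu$$ load enormously on the noisy direction (all numbers invented):

```python
import numpy as np

mu = np.array([0.05, 0.04])                # projected returns (illustrative)
s = 1 / np.sqrt(2)
V = np.array([[s, s], [s, -s]])            # eigenvectors of the toy covariance
Sigma = V @ np.diag([0.04, 1e-6]) @ V.T    # one large, one near-zero eigenvalue
w = np.linalg.solve(Sigma, mu)             # unconstrained mean-variance weights
# w comes out as a huge long/short pair along the 1e-6 eigendirection
```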
It's worth remarking that such small eigenvalues are guaranteed insignificant by Marchenko-Pastur. And indeed some set $$\epsilon$$ to be greater than the Marchenko-Pastur upper bound. But this distorts the correlation matrix to an even greater degree and makes sanity checking individual correlations untenable, even between critical factor returns timeseries: 10 year Treasury returns and percentage changes in spreads, say.
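For reference, the Marchenko-Pastur bulk edges are cheap to compute: for $$N$$ variables observed $$T$$ times, pure-noise eigenvalues of the sample correlation matrix fall (asymptotically) inside $$[(1-\sqrt{q})^2, (1+\sqrt{q})^2]$$ with $$q = N/T$$. The dimensions below are illustrative:

```python
import numpy as np

N, T = 50, 200
q = N / T
lam_plus = (1 + np.sqrt(q)) ** 2     # Marchenko-Pastur upper edge
lam_minus = (1 - np.sqrt(q)) ** 2    # lower edge

# sanity check against pure noise: the spectrum should hug the bulk
rng = np.random.default_rng(1)
X = rng.standard_normal((T, N))
evals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
```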
We also considered shrinkage, the textbook approach to eliminating negative eigenvalues of correlation matrices. At least with shrinkage you know how you modify your correlation matrix. That is, choose some $$\alpha$$ so that the constructed correlation matrix has the form $$\alpha E+ (1 - \alpha)I_N$$, where $$E$$ is our sample correlation matrix, and $$I_N$$ is the all-ones matrix of dimension $$N\times N$$.
The principal problem with shrinkage techniques---all shrinkage techniques that I am aware of anyway---is that some correlations ought to be 0 and others strongly negative. The mask $$I_N$$ however is an all-ones matrix, which is to say that in shrinking our sample correlation matrix we treat all correlations as if their true value is 1. That may be fine for equity- and spread-like asset classes, industries, and countries, but it absolutely will not capture interactions between rates and spreads, currencies and commodities, commodities and sectors, and others, precisely the interactions that we count on a computer to model better than we can in our own heads.
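The objection is easy to see with invented numbers: shrinking toward the all-ones target drags a genuinely negative correlation toward $$+1$$:

```python
import numpy as np

E = np.array([[1.0, -0.4],
              [-0.4, 1.0]])           # e.g. a rates-vs-spreads style correlation
J = np.ones_like(E)                   # the all-ones "mask" from the text
alpha = 0.7
C_shrunk = alpha * E + (1 - alpha) * J
# the diagonal stays at 1, but the -0.4 correlation becomes 0.7*(-0.4) + 0.3 = 0.02
```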
My colleague and I were doing this work in 2011 and 2012, just as rotationally invariant, optimal shrinkage techniques had been developed. A hybrid between standard shrinkage and eigenvalue clipping, these techniques ensure that the bilinear form induced by the cleaned matrix, when applied to each eigenvector of the sample matrix, approaches the eigenvalues of the true correlation matrix with convergence $$O(T^{-1/2})$$. A nice result, to be sure, and one I would like to play with if I one day find the time (a future post?). But what have we done to specific correlations between key factors?
Enter the core factor method. I thought for all the years I worked with him that my colleague had invented the technique. Later, when cleaning out my files, I discovered a Barclays POINT slide from 2011 that sketched it out. I was recently told that MSCI/Barra uses a version of the core factor method now. Certainly when we were mining their work for ideas, they used the then-standard eigenvalue clipping approach.
I took my colleague's fragmentary memory of this slide, formalized the core factor method a bit more carefully, and produced a few interesting results that gave us great confidence in the cleaning technique. Here my notation will depart only slightly from Barclays's.
Assume representative core factors $$X = \{X_1, \cdots, X_M\}$$ have each been observed $$T$$ times so that $$X$$ is an $$M\times T$$ matrix. Regress our factor returns $$F = \{F_1, \cdots, F_N\}$$ on $$X$$ to find a best fit matrix $$B$$ such that $$F = BX + \Phi.$$ $$F$$ and $$\Phi$$ are of course $$N\times T$$. Calculate $$C_F^{\text{Core}} = C_{BX}$$, the sample correlation matrix corresponding to $$BX$$. The $$N \times N$$ matrix $$C_F^{\text{Core}}$$ has rank at most $$M \leq N$$, so we have definitely not improved on $$C_F$$, the sample correlation matrix we are trying to clean. At least not yet.
Now, partition our $$N$$ uncorrelated factors into $$M$$ sectors $$\mathcal{P}_m$$, each with $$N_m = |\mathcal{P}_m|$$ factors such that $$\sum_{m=1}^{M} N_m = N$$. We are going to associate to each core factor $$X_m$$ a sector $$m$$, and with it $$N_m$$ factors $$\{F_i\}_{i\in\mathcal{P}_m}$$.
Take the low-fidelity matrix $$C_F^{\text{Core}}$$ and substitute along its diagonal the correlation matrices corresponding to each sector $$m$$ to create $$C_F^{\text{Estim}}$$. In other words, $$[C_F^{\text{Estim}}]_{i,j} = \left\{ \begin{array}{ll} \text{Corr}(F_i, F_j) & \text{if } i,j \in \mathcal{P}_m, \\ \text{Corr}([BX]_i, [BX]_j) & \text{if } i \in \mathcal{P}_m, j \in \mathcal{P}_{n}, m\neq n. \end{array} \right.$$ The correlation matrix $$C_F^{\text{Estim}}$$ is the final version of our cleaned correlation matrix.
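Putting the pieces together, here is a sketch of the construction. This is my own illustrative implementation, not Barclays's code; the sector assignments and dimensions are assumed:

```python
import numpy as np

def core_factor_correlation(F, X, sectors):
    """F: (N, T) factor returns; X: (M, T) core factor returns;
    sectors: list of index arrays partitioning range(N), one per core factor."""
    # Step 1: regress F on X over the T observations, F = B X + Phi
    coef, *_ = np.linalg.lstsq(X.T, F.T, rcond=None)   # shape (M, N)
    fitted = coef.T @ X                                # B X, shape (N, T)

    # Step 2: low-rank cross-sector correlations from the fitted returns
    C_estim = np.corrcoef(fitted)

    # Step 3: overwrite each within-sector (diagonal) block with the
    # full-fidelity sample correlations
    C_sample = np.corrcoef(F)
    for idx in sectors:
        C_estim[np.ix_(idx, idx)] = C_sample[np.ix_(idx, idx)]
    return C_estim

rng = np.random.default_rng(2)
X = rng.standard_normal((2, 300))                      # M = 2 core factors
F = rng.standard_normal((4, 2)) @ X + 0.1 * rng.standard_normal((4, 300))
C = core_factor_correlation(F, X, [np.array([0, 1]), np.array([2, 3])])
```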
It remains to show that $$C_F^{\text{Estim}}$$ is of full rank, and that it sufficiently approximates the sample correlation matrix $$C$$. As an estimator for the true correlation matrix, however, $$C_F^{\text{Estim}}$$ cannot be expected to perform well.
This core factor method differs from Barclays's in some important ways. First, and most importantly, we do not assume that "a factor is correlated only to very specific factors from $$C$$, imposing a structure on $$C_F^{\text{Core}}$$." Doing so, it can be shown, may cause our matrix $$C_F^{\text{Estim}}$$ to become singular.
Second, the choice of core factors is not arbitrary. In fact, core factors must be chosen to achieve the properties we want from the matrix.
To be continued...
https://www.nature.com/articles/s41467-018-04282-w
# Sculpting and fusing biomimetic vesicle networks using optical tweezers
## Abstract
Constructing higher-order vesicle assemblies has discipline-spanning potential from responsive soft-matter materials to artificial cell networks in synthetic biology. This potential is ultimately derived from the ability to compartmentalise and order chemical species in space. To unlock such applications, spatial organisation of vesicles in relation to one another must be controlled, and techniques to deliver cargo to compartments developed. Herein, we use optical tweezers to assemble, reconfigure and dismantle networks of cell-sized vesicles that, in different experimental scenarios, we engineer to exhibit several interesting properties. Vesicles are connected through double-bilayer junctions formed via electrostatically controlled adhesion. Chemically distinct vesicles are linked across length scales, from several nanometres to hundreds of micrometres, by axon-like tethers. In the former regime, patterning membranes with proteins and nanoparticles facilitates material exchange between compartments and enables laser-triggered vesicle merging. This allows us to mix and dilute content, and to initiate protein expression by delivering biomolecular reaction components.
## Introduction
Vesicles are lipid bilayer-encased aqueous compartments that range from attoliters to picoliters in volume. They are widely used as model architectures in studying the biophysical properties of membranes1, as functional units in biotechnology (e.g., for applications in bio-sensing2, drug delivery3,4 and diagnostics5,6), as miniaturised reaction vessels7,8,9, and as soft-matter microsystems10. These developments have led to increasing interest in using cell-sized giant vesicles as plasma membrane mimics in bottom-up synthetic biology, where they act as a chassis for artificial cells that contain biomolecular components and perform cell-mimetic functions11,12,13,14. The key feature at the heart of these applications is compartmentalisation: isolation of encapsulated cargo from the external environment, allowing ordering of chemical species in space. In principle, there is a second degree of ordering that can take place, which defines the spatial organisation of individual compartments in relation to one another. In biological systems, this manifests itself in the form of tissues, where cells connect with adjacent cells to order them in space. This, together with intercellular communication, enables cells to exhibit higher-order behaviours as a collective. Mimicking this in vesicle-based systems is likewise expected to result in a step change in utility in a suite of applications, particularly if molecular cargo can also be delivered to vesicle interiors on demand.
Several platforms have been previously developed that involve forming higher-order networks of cell-like compartments using related but fundamentally different structures to vesicles. Water-in-oil droplet interface bilayer (DIB) networks have been assembled and engineered to exhibit collective properties, such as self-folding and selective transmission of signals down defined neural-like paths15,16, as well as being functionalized with biomolecules to act as simple electrical devices17,18.
Other systems involve single vesicles that have been divided into sub-compartments linked by open tethers that are pulled using a micropipette19,20,21, or by pre-assembling internal bilayer partitions22. In these structures, however, compartments can still be considered as part of a single vesicular structure: they are encased by a single continuous membrane, and in the former example there is no separation of content between their interiors. These structures are more akin to extended lipid bodies found in organelles such as the endoplasmic reticulum, rather than isolated cells coupled through junctions. In addition, they are not formed by linking pre-existing differentiated vesicles from a population, and the networks are confined to 2-D.
There are also impressive examples employing principles of self-assembly to construct vesicle aggregates by embedding molecular recognition modules in the membrane23,24, including complementary DNA strands25,26. Critically, as they are bulk assemblies, networks of defined architectures cannot be formed, and they are instead colloidal aggregates.
Herein, we develop a different approach that allows us to sculpt and manipulate vesicle networks with fine spatiotemporal control. We engineer an in vitro system composed of adherent cell-sized vesicles coupled to an optical tweezer setup, which enables us to selectively connect isolated vesicles from different sub-populations together to generate networks of user-defined architectures. We term these double-bilayer junctions vesicle interface membranes (VIMs).
Our networks are reconfigurable, can be assembled/disassembled on demand and can have their morphology modulated by external stimuli such as temperature and chemical concentration. Chemically distinct compartments could be physically linked over small (several nm) and large (100s µm) distances by closed tethers. Patterning membranes with proteins and nanoparticles facilitates inter-compartment communication across the intermembrane space between adhered vesicles and allows vesicle merging to be triggered using light. This enables us to perform discrete operations on compartments, such as mixing and dilution, and to initiate protein expression by delivering transcription/translation biomolecular cargos to a vesicle containing a plasmid. We envisage that this platform could be deployed for the development of new biomaterials, synthetic cell networks, miniaturised bioreactors, implanted therapeutic devices and responsive materials.
## Results
### Assembling vesicle networks
Networks were assembled using optical tweezers to deposit vesicles in defined locations, and by exploiting non-specific global intermembrane forces to mediate membrane adhesion. We designed an iso-osmotic system where vesicles made of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and 1 wt.% 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine-N-(lissamine rhodamine B sulfonyl) (ammonium salt) (Rh-PE) could be trapped by optical tweezers (λ = 1070 nm) due to the refractive index (RI) mismatch between each vesicle’s interior (1.5 M sucrose; RI = 1.406) and exterior (0.75 M NaCl; RI = 1.342) (see Supplementary Notes 1 and 2 for details). Vesicles could be trapped with conditions as low as 0.4 M sucrose (interior) and 0.2 M NaCl (exterior). Depending on the size of the vesicles, we could drag and drop individual vesicles simply by turning the laser on and off, at powers ranging from 80 to 470 mW at the back aperture of the objective (23–190 mW at trap). Networks were assembled by depositing vesicles immediately adjacent to one another (see Fig. 1 for schematic of experimental setup). Seconds after contact, the membranes quickly adhered by zipping up from an initial contact point to form a vesicle pair connected by a membrane patch, termed a VIM (see Supplementary Movie 1).
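The trapping condition above reduces to a simple inequality: the gradient force pulls a dielectric object toward the beam focus only when its interior is optically denser than the surrounding medium. A minimal sketch using the RI values quoted in the text (the helper function is illustrative, not from the paper):

```python
def is_trappable(n_interior: float, n_exterior: float) -> bool:
    """A vesicle is pulled toward the laser focus only when the relative
    index m = n_interior / n_exterior exceeds 1, i.e. the encapsulated
    solution is optically denser than the external one."""
    return n_interior > n_exterior

n_sucrose_1p5M = 1.406  # 1.5 M sucrose interior (from the text)
n_nacl_0p75M = 1.342    # 0.75 M NaCl exterior (from the text)

print(is_trappable(n_sucrose_1p5M, n_nacl_0p75M))  # True
```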
The formation of an adhesive patch is governed by a balance between attractive Van der Waals forces and repulsive electrostatic, undulation and hydration forces that exist between the two membranes27. Although POPC is zwitterionic at neutral pH, it has a slight negative charge28, likely due to the orientation of the headgroups29 and hydration layers30. The presence of NaCl served to screen membrane electrostatic repulsion, allowing the attractive forces to dominate, which resulted in adhesion upon contact. We found that below a critical threshold of 0.2 M NaCl, we could no longer reliably assemble VIMs with well-defined interfaces. As expected, larger NaCl concentrations led to larger VIM interfaces, and the presence of charged lipids increased the minimum concentration of NaCl needed for adhesion (see Supplementary Note 3, Supplementary Fig. 1 and Supplementary Table 1).
This approach could be used to assemble larger-scale networks of user-defined geometries, simply by dragging and dropping individual vesicles at set locations (Fig. 2a). For example, we were able to form 2-D networks of trigonal, square and pentagonal geometries, as well as branching networks of three vesicles linked to a central node. Our ability to manipulate vesicles in the z-direction allowed us to assemble 3-D geometries such as tetrahedrons, square pyramids and three-layered pyramids (Fig. 2b, c). The VIM networks were reconfigurable, in that transformation between geometries was possible (Fig. 2d and Supplementary Movie 2). It was possible to reconfigure a vesicle chain of four vesicles to a 2-D square, and then to a 3-D tetrahedron geometry by lifting one of the vesicles above the remaining three vesicles.
We could controllably transport these networks in space by trapping a single vesicle and dragging the entire adhered network. All the adhered vesicles moved in synchrony, confirming the presence of adhesion patches and showing the VIM networks can be considered discrete assemblies able to be manipulated as a whole.
Networks could be disassembled after generation by diluting the NaCl concentration below the critical threshold for adhesion (Fig. 3a). This was achieved by slow perfusion of DI water through an agarose gel situated above the sample over a period of minutes, with complete detachment observed after ca. 45 min. Diluting NaCl from 0.75 to 0.2 M also enabled VIM morphology to be modulated, specifically the intermembrane contact angle and the area of the adhesive patch (Fig. 3b). It is worth noting that salt dilution causes an osmotic pressure imbalance across the vesicle membranes and that the resulting increase in membrane tension contributes to the reduction of the intermembrane contact area.
### Vesicle interface membranes characterisation and modulation
In order to determine whether the VIM was a hemifused structure composed of a single bilayer or a membrane patch consisting of two adhering bilayers, we performed an assay using fluorescently labelled lipids, where one vesicle was labelled with 1 wt.% fluorescent lipid Rh-PE, and the other unlabelled (Fig. 4a). No free diffusion of lipid between vesicles was observed over 80 min, in contrast to hemifused membranes where mixing in the outer bilayer leaflet would be seen within seconds31, thus confirming the existence of two distinct adherent membranes32. This is further substantiated by obtaining a fluorescence intensity profile across two fluorescently tagged adhered vesicles (Fig. 4b), which shows the adhesion patch having double the intensity of a single bilayer, suggesting the presence of a two bilayer-thick VIM (see Supplementary Note 4 and Supplementary Fig. 2). The VIM membrane was found to be 1.9 times higher in intensity on average than the non-VIM membrane for each vesicle pair (s.d. = 0.17; n = 20).
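The intensity-ratio argument above can be phrased as a simple classifier: a VIM/non-VIM ratio near 2 indicates two adhered bilayers, while a ratio near 1 would indicate a single hemifused bilayer. The tolerance below is an illustrative choice, not a value from the paper:

```python
def classify_vim(intensity_ratio: float, tol: float = 0.4) -> str:
    """Classify a membrane patch from the VIM / non-VIM fluorescence
    intensity ratio. Near 2 -> two adhered bilayers; near 1 -> a single
    (hemifused) bilayer. The tolerance is a hypothetical cut-off."""
    if abs(intensity_ratio - 2.0) <= tol:
        return "double bilayer (adhered)"
    if abs(intensity_ratio - 1.0) <= tol:
        return "single bilayer (hemifused)"
    return "ambiguous"

# Mean measured ratio reported in the text: 1.9 (s.d. = 0.17; n = 20).
print(classify_vim(1.9))  # double bilayer (adhered)
```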
One of the key parameters associated with these structures is the adhesion energy between individual vesicles, which is determined by the summation of the inter-vesicle forces27. The net adhesion energy (W) can be deduced without accessing the contributions from the individual components simply using the contact angle between membranes of two spheroidal vesicles (θ; Fig. 4c) and membrane area expansion modulus (K) according to Eq. (1), an approach adopted elsewhere33 (see Supplementary Note 5 for details):
$$\cos\theta \approx 1 - \left( \frac{2W}{K} \right)^{\frac{1}{3}} \qquad (1)$$
VIM contours and contact angles were extracted using a MATLAB image analysis script (Supplementary Note 6 and Supplementary Fig. 3) and a value for K of 213 mJ m−2 obtained from literature34,35. Adhesion energy at room temperature was determined to be 0.9 mJ m−2 (s.d. = 0.3 mJ m−2; n = 20), similar to what was obtained in other systems27,35,36.
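Inverting Eq. (1) gives $W \approx (K/2)(1 - \cos\theta)^3$, which can be evaluated directly from a measured contact angle. The sketch below uses the contact angles and the literature value of K quoted in the text; note that the computed value depends on exactly how the contact angle is defined and measured, so it need not reproduce the quoted 0.9 mJ m−2 figure:

```python
import math

def adhesion_energy(theta_deg: float, K: float = 213.0) -> float:
    """Net adhesion energy W (same units as K, here mJ/m^2) from the
    intermembrane contact angle via Eq. (1):
    cos(theta) ~ 1 - (2W/K)^(1/3)  =>  W ~ (K/2) * (1 - cos(theta))^3."""
    theta = math.radians(theta_deg)
    return 0.5 * K * (1.0 - math.cos(theta)) ** 3

# Contact angles quoted in the text (room temperature vs locally heated).
for theta in (22.8, 37.1):
    print(f"theta = {theta} deg -> W = {adhesion_energy(theta):.3f} mJ/m^2")
```

As expected from the modulation experiments, the inferred adhesion energy grows steeply with contact angle (cubic in 1 − cos θ).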
The morphology of the vesicle pair could be dynamically tuned by varying the applied laser power, which in turn affects local temperature37. As this increases, the membrane area expands, hence alleviating the membrane stress and promoting the expansion of the contact area. However, the temperature change also influences the expansion modulus38,39 and the intermolecular forces, which determine the net adhesion energy25, making it difficult to predict a priori the temperature dependence of the adhesion area. By locally heating the VIM using a laser directed at the centre of one of the vesicles (0.95 W at trap), it was experimentally observed that the interface length increased from 9 to 15 µm and the contact angle increased from 22.8° to 37.1° (Fig. 4d). These values could be modulated by varying the applied laser power (Fig. 4e). We could switch between the morphologies simply by turning the laser source on and off, with conversion between the two arrangements occurring in seconds since the characteristic time for heat dissipation in water is lower than 1 ms at the vesicle length scale (Fig. 4e). The local temperature increase due to the delivered energy from the laser was estimated to be 9 °C (Supplementary Note 8 and Supplementary Table 2). Similar results were seen when the vesicle was heated using a heating stage and not with a laser, further suggesting that this is a temperature-mediated effect (Supplementary Fig. 5A).
It is worth noting that the contact area between the two similarly sized vesicles remained flat during the modulation, despite the fact that the trap was centred on one of the vesicles, and not symmetrically located between the two vesicles. This suggests that no mismatch in the vesicle mechanical properties was induced by the optical tweezer, hence ruling out any optical stretching effect on the equilibrium VIM morphology. Furthermore, a similar modulation of VIM morphology, although within a narrower range, could also be achieved when the trapping beam was located a few microns distant from the vesicles, the latter no longer being optically trapped (Supplementary Note 8 and Supplementary Fig. 5B).
A second key parameter to deduce is the distance between the adherent membranes. This was estimated by obtaining small-angle X-ray scattering (SAXS) of POPC multi-lamellar stacks with 0.75 M NaCl (Supplementary Fig. 6), revealing a d-spacing of 6.9 nm. Together with a value for POPC bilayer thickness of 3.8 nm (2zP, distance between the PC headgroups) obtained from literature40, an intermembrane distance of 3.1 nm was calculated. These values will not be identical in adhered vesicles but will likely be comparable (Supplementary Note 9). In adhered vesicles, undulations are dampened due to membrane stretching, which may also lead to hydrophobic attractions in the bilayers due to the exposed hydrophobic core27. In addition, POPC bilayers are estimated to thicken by up to 2 Å in the presence of salt41, which would further reduce the estimated intermembrane distance. The intermembrane value obtained is therefore likely to be an upper-bound value, as it ignores these considerations.
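The arithmetic behind the intermembrane-distance estimate, and the effect of salt-induced bilayer thickening, can be laid out explicitly (all values are the ones quoted in the text, in nm):

```python
# Intermembrane distance = SAXS d-spacing minus one bilayer thickness.
d_spacing = 6.9          # POPC multilamellar repeat with 0.75 M NaCl (nm)
bilayer_thickness = 3.8  # 2*z_P, headgroup-to-headgroup distance (nm)

gap = d_spacing - bilayer_thickness
print(f"intermembrane distance ~ {gap:.1f} nm (upper bound)")

# Salt-induced thickening of up to 2 A (0.2 nm) would reduce the estimate:
gap_with_thickening = d_spacing - (bilayer_thickness + 0.2)
print(f"with thickening: ~ {gap_with_thickening:.1f} nm")
```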
### Connection over large distances through lipid tethers
In addition to linking compartments immediately adjacent (several nm) to one another, we were also able to form tethers that physically connect two chemically distinct vesicles across large distances in space (hundreds of microns). There have been previous examples of controllable tether formation between beads and vesicles42,43 using traps, but here we form tethers between multiple vesicles.
Once formed, if VIMs were left to sediment on a coverslip substrate which was not passivated with bovine serum albumin (BSA), stochastic adhesion of a vesicle on the surface could be observed. The second non-adhered vesicle could thus be moved using optical tweezers, while the partner remained stationary. Surprisingly, during this process, complete fission of the adhesion patch was not observed. Instead, as the vesicle was pulled the adhesion area was found to decrease until a certain point where an interconnecting tether appeared between the two vesicles (Fig. 5, Supplementary Note 7 and Supplementary Fig. 4).
This tether could be pulled for hundreds of microns without breaking. Tether rupture was never observed and increasing the speed at which the tether was pulled led to the vesicle escaping the optical tweezer, demonstrating that optical forces never exceed the tether-tether adhesion force or the tether rupture force. Upon release of the vesicle by turning off the trap, the tether retracted and the VIM reformed an adhesion patch, a process that could be repeated multiple times. These tethers were closed, as demonstrated by lack of diffusion of encapsulated calcein and of membrane-embedded Rh-PE fluorescent lipids between compartments over 20 min (Supplementary Note 10 and Supplementary Fig. 7). This suggests that there is a minute adhesive patch holding the tether together, similar to those found when tethers are formed with streptavidin-labelled beads and biotinylated vesicles43.
Labelling one vesicle with fluorescent lipid enabled us to locate the anchor point between the membranes. This led to the striking observation that in most cases the tether was not composed of a membrane from only one vesicle, but indeed was composed of both, with membranes meeting near the tether midpoint (Fig. 5a and Supplementary Movie 3). On occasions when the anchor point was asymmetrically positioned along the tether (i.e., lying closer to one vesicle), pulling the tether farther led to movement of the anchor point until it reached the midpoint (Fig. 5b).
Uniquely, the tethers we formed are held together with non-specific global forces as opposed to specific molecular forces such as biotin/streptavidin or DNA base pair interactions. One hypothesis is that the formation of tethers is governed by a trade-off between an energetic penalty in breaking the membrane patch and that of deforming the membrane area. Initially, at a large VIM adhesion patch, there is a large energetic cost to pull a tether as the deformation can be resisted by surface tension. The patch therefore decreases in size until a point where it is small enough so that the energetic cost in pulling the tether is lower than breaking the remaining adhesion patch.
### Vesicle communication
One of the grand promises of compartmentalised vesicle soft-matter assemblies is the prospect of using them as picolitre/femtolitre reaction vessels, and as tissue-like cell-mimetic compartments in synthetic biology. To realise this, communication and material transfer between compartments must be demonstrated. This poses a challenge in the VIM system, as the presence of a double membrane at the interfaces precludes the use of single transmembrane protein pores as conduits between compartments.
To tackle this, we use the protein pore alpha-Hemolysin (α-HL; pore diameter 1.4 nm), and rely on diffusion through two pores: one to allow material to leave the donor vesicle into the intermembrane space, and a second to allow it to diffuse from this space to the acceptor vesicle. However, as α-HL inserts into all membranes (both those at the vesicle interface and those facing the external solution), we had to selectively block channels not lying in the intermembrane region to prevent leakage of cargo into the external environment. This was achieved by adding the cyclodextrin blocker (heptakis(2,3,6-tri-O-methyl)-β-cyclodextrin; TRIMEB) to the external solution, which non-covalently occludes the α-HL pore44, thus reducing leakage of material to the exterior45,46. Crucially, as the blocker is slightly larger than the intermembrane distance (maximum point to point distance ca. 4 nm)47, its ability to penetrate this space is diminished, allowing selective translocation of material between compartments. Although α-HL can diffuse out of the VIM area, when it does so, it encounters blockers present in the external solution, inhibiting leakage of encapsulated material.
To confirm successful operation of this system, we conducted a fluorescence leakage assay (Fig. 6) between two vesicles containing α-HL (50 ng µl−1) internally, formed via the emulsion transfer method. A donor vesicle was loaded with Ca2+ (200 mM), an acceptor vesicle with the Ca2+ sensitive Fluo-4 dye (0.54 mM) and EDTA (1 mM) to minimise background fluorescence, with TRIMEB (10 mM) in the exterior (see Fig. 6a for schematic). After a variable lag phase of up to 4 min (Supplementary Note 11 and Supplementary Fig. 8), the acceptor compartment increased in fluorescence as Ca2+ diffused through the pores, until the signal saturated after ca. 5 min (Fig. 6b, c). No fluorescence increase was seen in the control measurements with no protein present or with no blocker present due to leakage of Ca2+ to the exterior. Likewise, fluorescence did not increase when NaCl was absent in the exterior, but α-HL and TRIMEB were present internally. Vesicles were brought into contact, but no VIM formed. This result indicates that where successful inter-vesicle communication was seen, Ca2+ was not diffusing into the bulk and then back into the adjacent vesicle. We also ran fluorescence release experiments on single vesicles to demonstrate the effectiveness of the blocker at preventing ion leakage into the external environment, and results showed that this indeed slowed the rate by fivefold (Supplementary Note 12 and Supplementary Fig. 9). It should be noted that there is likely a residual level of leakiness in the system due to material escaping through the intermembrane space and, due to imperfect blocking of the pores, through the external membranes.
### Vesicle fusion
By slightly altering our experimental system, it was possible to use the optical tweezer to fuse selected compartments in a VIM network, an operation that can be performed with high temporal and spatial resolution. To do this, we attached 150 nm gold nanoparticles (AuNP) to the outer surface of the vesicles using biotin/streptavidin conjugation (Fig. 7a, b). When these enter the focus of the laser, they absorb energy which is dissipated as heat, leading to an extreme temperature increase (>100 °C using our operating powers)48. This is a localised effect, with heat dissipation occurring on a length scale comparable to the nanoparticles48. The disruption to the membrane fabric (possibly through membrane expansion and opening of a fusion pore)49 was enough to lead to breakdown of the VIM and fusion of the compartments at sufficiently high powers (>150 mW at trap; see Supplementary Movie 4)49,50. Fusion was complete several seconds after focusing the laser on the VIM. Alternatively, 80 nm AuNPs could also be used, although the powers needed to consistently achieve fusion were higher (>300 mW at trap)48. With 1–2 nm AuNP, fusion was not observed, likely due to the short heat dissipation distances and because they do not resonate at the trapping laser wavelength. Adding calcein dye (50 mM) to one vesicle and fluorescently labelled lipid (Rh-PE) to the other showed that both the vesicle lumen and the membrane itself were completely mixed post-fusion (Fig. 7c). The potential to alter the membrane composition of the fused GUV was demonstrated by fusing two GUVs with a different fluorescently tagged lipid each (Rh-PE and NBD-PE; Fig. 8a). The location of the AuNP on the membrane is still unknown. If the particle does indeed sit in the intermembrane space, it has the potential to form a biotin/streptavidin bridge between the vesicles. However, as the addition of AuNPs does not significantly affect the size of the adhesion patch (Supplementary Note 13 and Supplementary Fig. 10), this effect is not thought to be significant.
As the thermally triggered fusion was localised to the laser spot, we were able to construct a VIM network, and specifically select a single discrete VIM to be fused. For example, we formed a four-vesicle network, fusing each one of the VIMs in turn, until only a single large vesicle remained (Fig. 8b). This approach enabled us to perform a sequential dilution both of the lipid membrane material and of the encapsulated material of a cargo-containing vesicle. A vesicle labelled with 1 wt.% fluorescent Rh-PE and encapsulated calcein (50 mM) was fused with two other non-fluorescent vesicles in a VIM network (Fig. 8c). Each fusion event led to a decrease in fluorescence intensity of the encapsulated material and of the membrane of the cargo-containing compartment.
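Under the simplest assumption of ideal, volume-additive mixing, each fusion with a cargo-free vesicle dilutes the encapsulated material by the volume ratio of the merged compartments. The sketch below uses the 50 mM calcein starting concentration from the text; the equal vesicle volumes are illustrative, not measured values:

```python
def fuse(c1: float, v1: float, c2: float, v2: float) -> tuple[float, float]:
    """Concentration and volume of the vesicle produced by fusing two
    vesicles with concentrations c1, c2 and volumes v1, v2, assuming
    ideal volume-additive mixing (an idealisation)."""
    v = v1 + v2
    return (c1 * v1 + c2 * v2) / v, v

# 50 mM calcein in one vesicle; fuse sequentially with two blank,
# equal-volume vesicles, as in the three-vesicle dilution network.
c, v = 50.0, 1.0
for _ in range(2):
    c, v = fuse(c, v, 0.0, 1.0)

print(f"{c:.1f} mM after two fusions")  # 50 -> 25 -> 16.7 mM
```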
Next, we demonstrated the ability to conduct controlled biochemistry in the vesicles, using them as cell-like reactors. We isolated the components needed for coupled cell-free transcription and translation in three distinct compartments on a VIM network, then initiated protein synthesis using the laser (Fig. 9). We used the PURExpress protein expression system reconstituted from purified cellular components51, a GFP plasmid, and a three-vesicle network for these experiments. One vesicle contained tRNAs, amino acids and rNTPs (PURExpress solution A). A second contained ribosomes, T7 RNA polymerase, translation factors, aminoacyl-tRNA synthetases and energy regeneration enzymes (PURExpress solution B). A third contained the plasmid. In order to optically distinguish the vesicle populations, two vesicle types were labelled with Cy5-PE and Rh-PE (Cy5 and TRITC filters) and the third was left unlabelled. A network of vesicles (5 ± 2 µm diameter each) was formed with 0.25 M NaCl externally and 0.5 M sucrose internally. The three compartments were fused, and GFP expression was followed over time (FITC filter; 1 s exposure). GFP production was detected 20 min after fusion, with the reaction proceeding for ca. 100 min before plateauing (Fig. 9b). These results suggest that the high salt conditions needed to form a VIM do not prohibit biochemical reactions from being performed in the vesicle interior due to the shielding effect of the lipid membrane. The variability in the results (Supplementary Note 13 and Supplementary Fig. 11) is likely due to variability in both vesicle volumes and encapsulation efficiencies of the individual components in the cell-free expression mixture52,53.
## Discussion
The flexibility of this platform is derived from the use of optical tweezers, which have previously been used to transport vesicles in space54,55, as well as in the generation of droplet networks56. Optical tweezers are non-invasive thus eliminating the possibility of cross-contamination, and they can be turned on and off on-demand, and used to manipulate objects in 3-D space with submicrometre resolution.
The structures we form have several interesting properties and potential applications. First, the fact that networks can be modulated and indeed disassembled based on external stimuli (salt and temperature) is a feature which could prove useful for sensing applications and in the construction of responsive systems.
Second, the structures we form are biomimetic and can be used as simplified models in the study of cell biology, with tethers and VIMs being synthetic analogues of neural axons and synapses. VIMs have similar length scales to cellular junctions, and the tethers closely resemble lipid membrane nanotubes, also called tunnelling nanotubes57, which bridge distinct living cells for chemical exchange and signalling purposes. As in our experiments, some types of tunnelling nanotubes are not open-ended tethers, instead containing an anchor point where the membranes of the two connected cells meet58. In this configuration, exchange of membrane or cytosolic contents by simple diffusion across the nanotubes is prevented, while the anchor point itself can exhibit high mobility. Tether formation also raises the possibility of selective inter-compartment communication across large distances. We note that the structures we form differ from previously reported open nanotube-linked vesicles19,20,59 formed using micromanipulators, in that in our case there is no free exchange of material between these compartments; the internal volumes of the vesicles remain distinct. The technological basis of our approach is also conceptually different.
Finally, the demonstration of on-demand vesicle fusion to initiate biochemical reactions shows that our system can serve as a platform with which chemistry and biochemistry are performed in miniaturised picolitre vesicle reactors. The use of lasers to induce fusion means defined vesicles carrying reactive cargoes can be brought together with spatiotemporal precision, and the use of near-infrared light limits damage to biological material. In an analogy to droplet microfluidics, this paves the way for vesicle microfluidics, where vesicles are the basic functional unit on which operations are performed. Vesicles have the added advantage of being in an oil-free environment and providing a delineating membrane as a surface which can be decorated with cellular machinery, e.g., for influx/efflux of materials. They are also more physiologically relevant and can be interfaced with cells and tissues as they exist in a bulk aqueous environment.
At present, there are several limitations associated with our system. As the method relies on individually manipulating compartments, it is only suited for the construction of networks composed of a limited number of vesicles. In addition, although the networks are partially reconfigurable, once connections are made, it is not possible to break them without diluting the salt. Therefore, although it is possible to go from a linear to a square to a pyramidal geometry, the reverse is not possible. Our system also depends on the presence of >0.2 M NaCl to mediate adhesion and relies on sucrose being encapsulated internally for effective trapping. This might not be compatible with some physiological systems, and may change the biophysical properties of the membrane, e.g., its fluidity60. To remove the dependence on salt, future systems could employ biotin/streptavidin conjugation, or DNA binding/recognition moieties that are anchored into the vesicle membrane24. A further constraint is that for inter-compartment communication, although leakage to the external environment was slowed by the blocker, release was still observed within 10 min and material escaping through the intermembrane space could also not be discounted. With the design of engineered double-membrane spanning proteins61 and DNA origami constructs in the future62,63, this system could be optimised further. These could also be used to facilitate material transfer through tether-linked vesicles, a functionality that is currently absent.
In conclusion, by engineering a system composed of adherent vesicles that can be manipulated in space using optical tweezers, we were able to connect individual vesicles into networks of user-defined architectures. These networks were reconfigurable, which enabled transformations from one geometry to another, and their morphology could be tuned using both physical and chemical variables. Chemically distinct compartments could also be linked across a range of length scales (through five orders of magnitude) using closed lipid tethers derived from both vesicles. By incorporating protein machinery and gold nanoparticles, both inter-vesicle communication and vesicle fusion were achieved, allowing us to mix vesicle material and to trigger cell-free protein expression. The described platform ushers in the possibility of using vesicle networks as models of cellular networks in synthetic biology, for the manufacture of synthetic synapses and gap junctions, as miniature biochemical reaction vessels, and as soft microsystem devices existing in physiological environments. Such systems could also be valuable as models in the study of cell–cell adhesion [64], particularly in investigating the role of the bilayer itself in this process.
## Methods
### Materials
All lipids were purchased from Avanti Polar Lipids (Alabaster, AL) as powders and used without further purification. Lipids used include POPC (catalogue number 850457); 18:1 Rh-PE (catalogue number 810150); 18:1 NBD-PE (catalogue number 810145); 18:1 Cy5-PE (catalogue number 810335); 16:0 Biotinyl Cap PE (catalogue number 870277). Lipid mixtures were prepared by co-dissolving the required molar ratios of lipids in chloroform. BSA (≥99% assay, catalogue number A4161), α-HL (catalogue number H9395, 10,000 units mg−1 protein), calcein (catalogue number C0875), TRIMEB (98% purity; catalogue number H4645) and all buffer reagents were purchased from Sigma-Aldrich (Gillingham, UK). Fluo-4 (pentapotassium salt; 99% purity; catalogue number F14200) was purchased from Thermo Fisher Scientific (Loughborough, UK). A summary of materials and conditions used in the individual experiments is given in Supplementary Table 3.
### Vesicle preparation
Unless otherwise specified, vesicles were prepared by electroformation. First, lipid in chloroform (20 µl; 1 mg ml−1) was spread evenly on a conductive indium tin oxide coated (ITO) slide, leaving a film which was dried under vacuum for 30 min to remove residual solvent. A 5 mm thick polydimethylsiloxane (PDMS) spacer with a central cut-out was used to separate the slides with the conductive sides facing each other, and the chamber was filled with 1.5 M sucrose solution in DI water. An alternating electric field (1 V, 10 Hz) was applied across the ITO plates using a function generator (Aim-TTi, TG315). After 2 h, the electric field was changed to 1 V, 2 Hz for a further hour, and the resulting vesicles collected.
In the inter-vesicle material transfer experiments, the emulsion transfer method [65] was used to form vesicles. This was because encapsulating large, charged molecules is not feasible in electroformation. Briefly, 25 µl of 1.5 M sucrose in buffer (500 mM KCl, 25 mM Tris-HCl, pH 8.0) was added to 250 µl mineral oil with dissolved lipid (10 mg ml−1). A water-in-oil emulsion was made by vortexing this mixture for 30 s and left standing for 10 min to allow a lipid monolayer to effectively stabilise the emulsion. Then, 250 µl of the emulsion was layered above 250 µl of 1.5 M glucose in buffer solution, forming a water/oil column. This was centrifuged (9000 × g, 30 min) resulting in a vesicle pellet. The upper oil phase was removed, and a second centrifugation step was applied (6000 × g, 10 min), followed by removal of the supernatant and resuspension in 250 µl fresh glucose solution.
### Network assembly
A coverslip surface was coated with a BSA monolayer to prevent interactions between the glass substrate and the vesicle membrane. The coating was applied by depositing 300 µl of 1% BSA in DI water on a coverslip and leaving it to evaporate in a 60 °C oven, leaving behind a protein film. The film was subsequently rinsed with DI water and dried under a nitrogen stream. Vesicle assembly chambers were prepared by placing a 1 mm thick PDMS sheet with a 10 mm diameter hole on the coverslip. An external solution containing 0.83 M NaCl was added to the vesicle emulsion at a ratio of 9:1, to give a final concentration of 0.75 M. For fusion experiments, the final NaCl concentration used was 0.25 M to osmotically match the 0.5 M sucrose in the vesicle interior. The sample was then mixed by pipette aspiration, the chamber sealed with a second coverslip, and placed on the optical trapping setup. Individual vesicles were trapped by switching the laser on, and were moved in 3-D relative to the sample by moving the microscope stage (x,y) and changing the focus of the objective (z). VIMs were subsequently formed by positioning two or more vesicles into contact. Details of the optical setup are available in Supplementary Note 1.
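As a quick check of the mixing arithmetic described above (a sketch, not part of the published protocol): adding 9 parts of 0.83 M NaCl stock to 1 part of salt-free vesicle emulsion dilutes the stock by a factor of 9/10.

```python
# Final concentration after mixing a salt stock with a salt-free sample.
# Illustrative only; the function name and structure are ours, not the paper's.
def final_concentration(c_stock, parts_stock, parts_sample):
    """Concentration of the solute after mixing (assumes additive volumes)."""
    total = parts_stock + parts_sample
    return c_stock * parts_stock / total

c_final = final_concentration(0.83, 9, 1)
print(round(c_final, 2))  # 0.75, matching the stated final NaCl concentration
```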
### Vesicle disassembly via salt dilution
In order to dynamically change the NaCl concentration in the vesicle exterior once vesicle networks were generated, the sample well was sealed with an agarose gel (3 wt.% in DI water; 1 mm thick), which allowed diffusion of substances placed above it without disturbing the networks through the generation of large flows. This setup was also used to assemble extended vesicle networks, as it gave time to assemble networks before vesicles would stochastically come into contact with one another to form aggregates.
### Vesicle fusion and vesicle microreactors
To prepare functionalised AuNP vesicles, we first electroformed POPC vesicles in 0.5 M sucrose solution with 2 wt.% 16:0 Biotinyl Cap PE. When fluorescent lipids were present, these were at 1 wt.%. In the calcein dilution and protein expression experiments, vesicles were formed via emulsion phase transfer (see above), with 0.5 M sucrose/glucose density gradient. Next, 150 nm streptavidin-coated AuNPs (Nanopartz, CO, USA; product C11-150-TS-50; 2.5 mg ml−1) were added to the vesicles 1:9, with the sample vortexed for 30 min to drive conjugation. VIMs were formed as before, but with 0.25 M NaCl in the external solution. Vesicles were fused by focusing and applying a laser of 150 mW (at trap) or greater on the VIM interface. Vesicles were manipulated in space using a <100 mW laser (at trap) to avoid unintentional fusion (Supplementary Note 13). We also used Aurora-DSG Nanoparticles (AuNP-labelled lipids; 1–2 nm; Avanti), which did not result in fusion when embedded in the membrane at 2 wt.%.
In the protein expression experiments, we used an E. coli pJexpress 441 vector (50 ng µl−1) with a T7 promoter expressing the fluorescent protein Dasher GFP (ex = 510 nm, em = 521 nm) supplied by ATUM (CA, USA). The PURExpress kit components (New England Biolabs, MA, USA) were assembled according to the manufacturer’s instructions. The kit components and plasmids were made up to 500 mM sucrose by adding appropriate volumes of 2 M sucrose. PURExpress Solution A and plasmid were prepared in PBS, and Solution B was prepared in 10 mM magnesium acetate PBS on the recommendation of the supplier. Vesicles were formed with emulsion phase transfer, with these three solutions being the inner phase. In all fluorescence experiments, total fluorescence of vesicle interiors was divided by the vesicle area, and the background fluorescence subtracted. Fluorescence values were obtained using ImageJ analysis software as grey values.
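The per-vesicle fluorescence normalisation described above (total interior intensity divided by vesicle area, background subtracted) can be sketched as follows. The function name and all numbers are illustrative assumptions, not values from the paper; ImageJ grey values are treated as arbitrary units.

```python
# Sketch of the normalisation step: mean interior intensity per unit area,
# with the per-area background removed. Hypothetical numbers throughout.
def normalised_fluorescence(total_grey, area, background_per_area):
    """Background-corrected mean fluorescence of a vesicle interior."""
    return total_grey / area - background_per_area

f = normalised_fluorescence(total_grey=1.2e6, area=4000, background_per_area=50.0)
print(f)  # 250.0 (arbitrary grey-value units)
```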
### Data availability
All relevant data are available from the authors on reasonable request.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References
1. Chan, Y.-H. M. & Boxer, S. G. Model membrane systems and their applications. Curr. Opin. Chem. Biol. 11, 581–587 (2007).
2. Bally, M. et al. Liposome and lipid bilayer arrays towards biosensing applications. Small 6, 2481–2497 (2010).
3. Mo, R. et al. Multistage pH-responsive liposomes for mitochondrial-targeted anticancer drug delivery. Adv. Mater. 24, 3659 (2012).
4. Daraee, H., Etemadi, A., Kouhi, M., Alimirzalu, S. & Akbarzadeh, A. Application of liposomes in medicine and drug delivery. Artif. Cells Nanomed. Biotechnol. 44, 381–391 (2016).
5. Yezhelyev, M. V. et al. Emerging use of nanoparticles in diagnosis and treatment of breast cancer. Lancet Oncol. 7, 657–667 (2006).
6. Li, S., Goins, B., Zhang, L. & Bao, A. Novel multifunctional theranostic liposome drug delivery system: construction, characterization, and multimodality MR, near-infrared fluorescent, and nuclear imaging. Bioconjug. Chem. 23, 1322–1332 (2012).
7. Bolinger, P. Y., Stamou, D. & Vogel, H. An integrated self‐assembled nanofluidic system for controlled biological chemistries. Angew. Chem. Int. Ed. 47, 5544–5549 (2008).
8. Elani, Y., Law, R. V. & Ces, O. Vesicle-based artificial cells as chemical microreactors with spatially segregated reaction pathways. Nat. Commun. 5, 5305 (2014).
9. Hindley, J. W. et al. Light-triggered enzymatic reactions in nested vesicle reactors. Nat. Commun. 9, 1093 (2018).
10. Stano, P., Carrara, P., Kuruma, Y., de Souza, T. P. & Luisi, P. L. Compartmentalized reactions as a case of soft-matter biotechnology: synthesis of proteins and nucleic acids inside lipid vesicles. J. Mater. Chem. 21, 18887–18902 (2011).
11. Elani, Y. Construction of membrane-bound artificial cells using microfluidics: a new frontier in bottom-up synthetic biology. Biochem. Soc. Trans. 44, 723–730 (2016).
12. Noireaux, V. & Libchaber, A. A vesicle bioreactor as a step toward an artificial cell assembly. Proc. Natl Acad. Sci. USA 101, 17669–17674 (2004).
13. Noireaux, V., Maeda, Y. T. & Libchaber, A. Development of an artificial cell, from self-organization to computation and self-reproduction. Proc. Natl Acad. Sci. USA 108, 3473–3480 (2011).
14. Elani, Y. et al. Constructing vesicle-based artificial cells with embedded living cells as organelle-like modules. Sci. Rep. 8, 4564 (2018).
15. Villar, G., Graham, A. D. & Bayley, H. A tissue-like printed material. Science 340, 48–52 (2013).
16. Booth, M. J., Schild, V. R., Graham, A. D., Olof, S. N. & Bayley, H. Light-activated communication in synthetic tissues. Sci. Adv. 2, e1600056 (2016).
17. Bayley, H. et al. Droplet interface bilayers. Mol. Biosyst. 4, 1191–1208 (2008).
18. Maglia, G. et al. Droplet networks with incorporated protein diodes show collective properties. Nat. Nano 4, 437–440 (2009).
19. Karlsson, A. et al. Molecular engineering: networks of nanotubes and containers. Nature 409, 150–152 (2001).
20. Jesorka, A. et al. Generation of phospholipid vesicle-nanotube networks and transport of molecules therein. Nat. Protoc. 6, 791–805 (2011).
21. Sott, K. et al. Controlling enzymatic reactions by geometry in a biomimetic nanoscale network. Nano Lett. 6, 209–214 (2006).
22. Elani, Y., Gee, A., Law, R. V. & Ces, O. Engineering multi-compartment vesicle networks. Chem. Sci. 4, 3332–3338 (2013).
23. Kisak, E., Kennedy, M., Trommeshauser, D. & Zasadzinski, J. Self-limiting aggregation by controlled ligand−receptor stoichiometry. Langmuir 16, 2825–2831 (2000).
24. Beales, P. A. & Vanderlick, T. K. Application of nucleic acid–lipid conjugates for the programmable organisation of liposomal modules. Adv. Colloid Interface Sci. 207, 290–305 (2014).
25. Parolini, L. et al. Volume and porosity thermal regulation in lipid mesophases by coupling mobile ligands to soft membranes. Nat. Commun. 6, 5948 (2015).
26. Parolini, L., Kotar, J., Di Michele, L. & Mognetti, B. M. Controlling self-assembly kinetics of DNA-functionalized liposomes using toehold exchange mechanism. ACS Nano 10, 2392–2398 (2016).
27. Bailey, S. M., Chiruvolu, S., Israelachvili, J. N. & Zasadzinski, J. A. Measurements of forces involved in vesicle adhesion using freeze-fracture electron microscopy. Langmuir 6, 1326–1329 (1990).
28. Klasczyk, B., Knecht, V., Lipowsky, R. & Dimova, R. Interactions of alkali metal chlorides with phosphatidylcholine vesicles. Langmuir 26, 18951–18958 (2010).
29. Makino, K. et al. Temperature- and ionic strength-induced conformational changes in the lipid head group region of liposomes as suggested by zeta potential data. Biophys. Chem. 41, 175–183 (1991).
30. Egawa, H. & Furusawa, K. Liposome adhesion on mica surface studied by atomic force microscopy. Langmuir 15, 1660–1666 (1999).
31. Heuvingh, J., Pincet, F. & Cribier, S. Hemifusion and fusion of giant vesicles induced by reduction of inter-membrane distance. Eur. Phys. J. E 14, 269–276 (2004).
32. Nikolaus, J., Stöckl, M., Langosch, D., Volkmer, R. & Herrmann, A. Direct visualization of large and protein-free hemifusion diaphragms. Biophys. J. 98, 1192–1199 (2010).
33. Chiruvolu, S. et al. Higher order self-assembly of vesicles by site-specific binding. Science 264, 1753–1753 (1994).
34. Henriksen, J. et al. Universal behavior of membranes with sterols. Biophys. J. 90, 1639–1649 (2006).
35. Marra, J. Direct measurement of the interaction between phosphatidylglycerol bilayers in aqueous electrolyte solutions. Biophys. J. 50, 815 (1986).
36. Helm, C. A., Israelachvili, J. N. & McGuiggan, P. M. Role of hydrophobic forces in bilayer adhesion and fusion. Biochemistry 31, 1794–1805 (1992).
37. Peterman, E. J., Gittes, F. & Schmidt, C. F. Laser-induced heating in optical traps. Biophys. J. 84, 1308–1316 (2003).
38. Pan, J., Tristram-Nagle, S., Kučerka, N. & Nagle, J. F. Temperature dependence of structure, bending rigidity, and bilayer interactions of dioleoylphosphatidylcholine bilayers. Biophys. J. 94, 117–124 (2008).
39. Stevens, M. J. Coarse-grained simulations of lipid bilayers. J. Chem. Phys. 121, 11942–11948 (2004).
40. Kučerka, N., Tristram-Nagle, S. & Nagle, J. F. Structure of fully hydrated fluid phase lipid bilayers with monounsaturated chains. J. Membr. Biol. 208, 193–202 (2006).
41. Böckmann, R. A., Hac, A., Heimburg, T. & Grubmüller, H. Effect of sodium chloride on a lipid bilayer. Biophys. J. 85, 1647–1655 (2003).
42. Heinrich, V. & Waugh, R. E. A piconewton force transducer and its application to measurement of the bending stiffness of phospholipid membranes. Ann. Biomed. Eng. 24, 595–605 (1996).
43. Cuvelier, D., Derényi, I., Bassereau, P. & Nassoy, P. Coalescence of membrane tethers: experiments, theory, and applications. Biophys. J. 88, 2714–2726 (2005).
44. Gu, L.-Q., Braha, O., Conlan, S., Cheley, S. & Bayley, H. Stochastic sensing of organic analytes by a pore-forming protein containing a molecular adapter. Nature 398, 686–690 (1999).
45. Castell, O. K., Berridge, J. & Wallace, M. I. Quantification of membrane protein inhibition by optical ion flux in a droplet interface bilayer array. Angew. Chem. Int. Ed. 51, 3134–3138 (2012).
46. Thomas, J. M., Friddin, M. S., Ces, O. & Elani, Y. Programming membrane permeability using integrated membrane pores and blockers as molecular regulators. Chem. Commun. 53, 12282–12285 (2017).
47. Groom, C. R., Bruno, I. J., Lightfoot, M. P. & Ward, S. C. The Cambridge structural database. Acta Crystallogr. B 72, 171–179 (2016).
48. Bendix, P. M., Reihani, S. N. S. & Oddershede, L. B. Direct measurements of heating by electromagnetically trapped gold nanoparticles on supported lipid bilayers. ACS Nano 4, 2256–2262 (2010).
49. Rørvig-Lund, A., Bahadori, A., Semsey, S., Bendix, P. M. & Oddershede, L. B. Vesicle fusion triggered by optically heated gold nanoparticles. Nano Lett. 15, 4183–4188 (2015).
50. Bahadori, A., Oddershede, L. B. & Bendix, P. M. Hot-nanoparticle-mediated fusion of selected cells. Nano Res. 10, 2034–2045 (2017).
51. Shimizu, Y. et al. Cell-free translation reconstituted with purified components. Nat. Biotechnol. 19, 751–755 (2001).
52. Nishimura, K. et al. Cell-free protein synthesis inside giant unilamellar vesicles analyzed by flow cytometry. Langmuir 28, 8426–8432 (2012).
53. Saito, H. et al. Time‐resolved tracking of a minimum gene expression system reconstituted in giant liposomes. Chembiochem 10, 1640 (2009).
54. Ichikawa, M. & Yoshikawa, K. Optical transport of a single cell-sized liposome. Appl. Phys. Lett. 79, 4598–4600 (2001).
55. Bendix, P. M. & Oddershede, L. B. Expanding the optical trapping range of lipid vesicles to the nanoscale. Nano Lett. 11, 5431–5437 (2011).
56. Friddin, M. S. et al. Optically assembled droplet interface bilayer (OptiDIB) networks from cell-sized microdroplets. Soft Matter 12, 7731–7734 (2016).
57. Davis, D. M. & Sowinski, S. Membrane nanotubes: dynamic long-distance connections between animal cells. Nat. Rev. Mol. Cell Biol. 9, 431–436 (2008).
58. Sowinski, S. et al. Membrane nanotubes physically connect T cells over long distances presenting a novel route for HIV-1 transmission. Nat. Cell Biol. 10, 211–219 (2008).
59. Karlsson, M. et al. Formation of geometrically complex lipid nanotube-vesicle networks of higher-order topologies. Proc. Natl Acad. Sci. USA 99, 11573–11578 (2002).
60. Los, D. A. & Murata, N. Membrane fluidity and its roles in the perception of environmental signals. Biochim. Biophys. Acta 1666, 142–157 (2004).
61. Mantri, S., Sapra, K. T., Cheley, S., Sharp, T. H. & Bayley, H. An engineered dimeric protein pore that spans adjacent lipid bilayers. Nat. Commun. 4, 1725 (2013).
62. Burns, J. R., Stulz, E. & Howorka, S. Self-assembled DNA nanopores that span lipid bilayers. Nano Lett. 13, 2351–2356 (2013).
63. Langecker, M. et al. Synthetic lipid membrane channels formed by designed DNA nanostructures. Science 338, 932–936 (2012).
64. Sackmann, E. & Smith, A.-S. Physics of cell adhesion: some lessons from cell-mimetic systems. Soft Matter 10, 1644–1659 (2014).
65. Fujii, S. et al. Liposome display for in vitro selection and evolution of membrane proteins. Nat. Protoc. 9, 1578–1591 (2014).
## Acknowledgements
This work was supported by EPSRC fellowship ref. EP/N016998/1 awarded to Y.E., by EPSRC grants EP/J017566/1 and EP/K503733/1, by an Imperial College research fellowship awarded to A.S.-R., and by the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 607466.
## Author information
G.B., M.S.F., A.S.-R. and Y.E. performed the experiments and designed the experimental setup. G.B., N.E.B., N.J.B. and Y.E. analysed the data. G.B. and N.E.B. developed the mathematical models. G.B., M.F., A.S.-R., N.E.B., N.J.B., O.C. and Y.E. contributed to designing the experiments, discussions and writing the manuscript. O.C. and Y.E. jointly supervised the project.
### Competing interests
The authors declare no competing interests.
Correspondence to Oscar Ces or Yuval Elani.
# Exact alignment recovery for correlated Erdős-Rényi graphs
Daniel Cullina, Negar Kiyavash
###### Abstract
We consider the problem of perfectly recovering the vertex correspondence between two correlated Erdős-Rényi (ER) graphs on the same vertex set. The correspondence between the vertices can be obscured by randomly permuting the vertex labels of one of the graphs. We determine the information-theoretic threshold for exact recovery, i.e. the conditions under which the entire vertex correspondence can be correctly recovered given unbounded computational resources.
Graph alignment is the problem of finding a matching between the vertices of two graphs that matches, or aligns, many edges of the first graph with edges of the second graph. Alignment is a generalization of graph isomorphism recovery to non-isomorphic graphs. Graph alignment can be applied in the deanonymization of social networks, the analysis of protein interaction networks, and computer vision. Narayanan and Shmatikov successfully deanonymized an anonymized social network dataset by graph alignment with a publicly available network [1]. In order to make privacy guarantees in this setting, it is necessary to understand the conditions under which graph alignment recovery is possible.
We consider graph alignment for a randomized graph-pair model. This generation procedure creates a “planted” alignment: there is a ground-truth relationship between the vertices of the two graphs. Pedarsani and Grossglauser [2] were the first to approach the problem of finding information-theoretic conditions for alignment recovery. They established conditions under which exact recovery of the planted alignment is possible. The authors improved on these conditions and also established conditions under which exact recovery is impossible [3]. In this paper, we close the gap between these results and establish the precise threshold for exact recovery in sparse graphs. As a special case, we recover a result of Wright [4] about the conditions under which an Erdős-Rényi graph has a trivial automorphism group.
## I Model
### I-A The alignment recovery problem
We consider the following problem. There are two correlated graphs $G_a$ and $G_b$, both on the vertex set $[n]$. By correlation we mean that for each vertex pair $e$, the presence or absence of $e$ in $G_a$, or equivalently the indicator variable $G_a(e)$, provides some information about $G_b(e)$. The true vertex labels of $G_a$ are removed and replaced with meaningless labels. We model this by applying a uniformly random permutation $\Pi$ to map the vertices of $G_a$ to the vertices of its anonymized version. The anonymized graph is $G_c$, where for all $\{i,j\}$, $G_c(\{\Pi(i),\Pi(j)\})=G_a(\{i,j\})$. The original vertex labels of $G_b$ are preserved and $G_c$ and $G_b$ are revealed. We would like to know under what conditions it is possible to discover the true correspondence between the vertices of $G_c$ and the vertices of $G_b$. In other words, when can the random permutation $\Pi$ be exactly recovered with high probability?
In this context, an achievability result demonstrates the existence of an algorithm or estimator that exactly recovers $\Pi$ with high probability. A converse result is an upper bound on the probability of exact recovery that applies to any estimator.
### I-B Correlated Erdős-Rényi graphs
To fully specify this problem, we need to define a joint distribution over $G_a$ and $G_b$. In this paper, we will focus on Erdős-Rényi (ER) graphs. We discuss some of the advantages and drawbacks of this model in Section II-F.
We will generate correlated Erdős-Rényi graphs as follows. Let $G_a$ and $G_b$ be graphs on the vertex set $[n]$. We will think of $(G_a,G_b)$ as a single function with codomain $\{0,1\}^2$: $(G_a,G_b):\binom{[n]}{2}\to\{0,1\}^2$. The random variables $(G_a,G_b)(e)$, $e\in\binom{[n]}{2}$, are i.i.d. and
$$(G_a,G_b)(e)=\begin{cases}(1,1)&\text{w.p. }p_{11}\\(1,0)&\text{w.p. }p_{10}\\(0,1)&\text{w.p. }p_{01}\\(0,0)&\text{w.p. }p_{00}.\end{cases}$$
Call this distribution $ER(n,p)$, where $p=(p_{11},p_{10},p_{01},p_{00})$. Note that the marginal distributions of $G_a$ and $G_b$ are Erdős-Rényi, and so is the distribution of the intersection graph $G_a\wedge G_b$: $P[G_a(e)=1]=p_{11}+p_{10}$, $P[G_b(e)=1]=p_{11}+p_{01}$, and $P[(G_a\wedge G_b)(e)=1]=p_{11}$.
When $p_{11}>(p_{10}+p_{11})(p_{01}+p_{11})$, we say that the graphs $G_a$ and $G_b$ have positive correlation. Observe that
$$p_{11}-(p_{10}+p_{11})(p_{01}+p_{11})=p_{11}p_{00}-p_{01}p_{10},$$
so $p_{11}p_{00}>p_{01}p_{10}$ is an equivalent, more symmetric condition for positive correlation.
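The joint distribution above is easy to simulate: each vertex pair independently receives one of the four label pairs. A minimal sketch (our own naming, not the authors' code):

```python
import random

# Sample a correlated ER pair (G_a, G_b) on n vertices. Each vertex pair e
# independently gets (G_a(e), G_b(e)) from the joint law (p11, p10, p01, p00).
def sample_pair(n, p11, p10, p01, seed=0):
    rng = random.Random(seed)
    p00 = 1.0 - p11 - p10 - p01
    outcomes = [(1, 1), (1, 0), (0, 1), (0, 0)]
    weights = [p11, p10, p01, p00]
    ga, gb = {}, {}
    for i in range(n):
        for j in range(i + 1, n):
            a, b = rng.choices(outcomes, weights)[0]
            ga[(i, j)], gb[(i, j)] = a, b
    return ga, gb

ga, gb = sample_pair(50, p11=0.1, p10=0.05, p01=0.05)
# Marginals: P[G_a(e)=1] = p11 + p10; the intersection graph has edge prob p11.
```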
### I-C Results
All of the results concern the following setting. We have $(G_a,G_b)\sim ER(n,p)$, $\Pi$ is a uniformly random permutation of $[n]$ independent of $(G_a,G_b)$, and $G_c$ is the anonymization of $G_a$ by $\Pi$ as described in Section I-A. Our main result is the following.
###### Theorem 1.
Let $p$ satisfy the conditions
$$p_{01}+p_{10}\le O\!\left(\frac{1}{\log n}\right)\tag{1}$$
$$p_{11}\le O\!\left(\frac{1}{\log n}\right)\tag{2}$$
$$\frac{p_{01}p_{10}}{p_{11}p_{00}}\le O\!\left(\frac{1}{(\log n)^3}\right)\tag{3}$$
$$p_{11}\ge\frac{\log n+\omega(1)}{n}.\tag{4}$$
Then there is an estimator for $\Pi$ given $(G_c,G_b)$ that is correct with probability $1-o(1)$.
Together, conditions (1) and (2) force $G_a$ and $G_b$ to be mildly sparse. Condition (3) requires $G_a$ and $G_b$ to have nonnegligible positive correlation.
There is a matching converse bound.
###### Theorem 2 (Converse bound).
If $p$ satisfies
$$p_{11}\le\frac{\log n-\omega(1)}{n},$$
then any estimator for $\Pi$ given $(G_c,G_b)$ is correct with probability $o(1)$.
Theorem 2 was originally proved by the authors in [3]. The proof is short compared to Theorem 1 and it is included in Section V.
A second achievability theorem applies without conditions (1), (2), and (3). This requires condition (4) to be strengthened.
###### Theorem 3.
If $p$ satisfies
$$\frac{2\log n+\omega(1)}{n}\le\left(\sqrt{p_{11}p_{00}}-\sqrt{p_{01}p_{10}}\right)^2,$$
then there is an estimator for $\Pi$ given $(G_c,G_b)$ that is correct with probability $1-o(1)$.
Theorem 3 was also originally proved in [3]. In this paper, it appears as an intermediate step in the proof of Theorem 1.
## II Preliminaries
### II-A Notation
Throughout, we use capital letters for random objects and lower case letters for fixed objects.
For a graph $g$, let $V(g)$ and $E(g)$ be the node and edge sets respectively. Let $[n]$ denote the set $\{1,2,\ldots,n\}$. All of the $n$-vertex graphs that we consider will have vertex set $[n]$. We will always think of a permutation as a bijective function $\pi:[n]\to[n]$. The set of permutations of $[n]$ under the binary operation of function composition forms the group $S_n$.
We denote the collection of all two-element subsets of $[n]$ by $\binom{[n]}{2}$. The edge set of a graph $g$ is $E(g)\subseteq\binom{[n]}{2}$.
Represent a labeled graph on the vertex set $[n]$ by its edge indicator function $g:\binom{[n]}{2}\to\{0,1\}$. The group $S_n$ has an action on these graphs. We can write the action of the permutation $\pi$ on the graph $g$ as the composition of functions $g\circ l(\pi)$, where $l(\pi)$ is the lifted version of $\pi$:
$$l(\pi):\binom{[n]}{2}\to\binom{[n]}{2},\qquad\{i,j\}\mapsto\{\pi(i),\pi(j)\}.$$
Thus $\pi$ maps the graph $g$ to $g\circ l(\pi)$. Whenever there is only a single permutation $\pi$ under consideration, we will follow the convention $\tau=l(\pi)$.
For a generating function $a(z)$ in the formal variable $z$, $[z^j]$ is the coefficient extraction operator:
$$[z^j]a(z)=[z^j]\sum_i a_i z^i=a_j.$$
When $z$ is a matrix of numbers or formal variables and $k$ is a matrix of numbers, both indexed by $S\times T$, we use the notation
$$z^k=\prod_{i\in S}\prod_{j\in T}z_{i,j}^{k_{i,j}}$$
for compactness.
### Ii-B Graph statistics
Recall that we consider a graph on $[n]$ to be a $\{0,1\}$-labeling of the set of vertex pairs $\binom{[n]}{2}$. The following quantities have clear interpretations for graphs, but we define them more generally for reasons that will become apparent shortly.
###### Definition 1.
For a set $S$ and a pair of binary labelings $f_a,f_b:S\to\{0,1\}$, define the type
$$\mu(f_a,f_b)\in\mathbb{N}^{[2]\times[2]},\qquad\mu(f_a,f_b)_{ij}=\sum_{e\in S}\mathbb{1}\{(f_a,f_b)(e)=(i,j)\}.$$
The Hamming distance between $f_a$ and $f_b$ is
$$\Delta(f_a,f_b)=\sum_{e\in S}\mathbb{1}\{f_a(e)\neq f_b(e)\}=\mu(f_a,f_b)_{01}+\mu(f_a,f_b)_{10}.$$
For a permutation $\tau$ of $S$, define
$$\delta(\tau;f_a,f_b)=\frac{1}{2}\big(\Delta(f_a\circ\tau,f_b)-\Delta(f_a,f_b)\big).$$
In the particular case of graphs (where $S=\binom{[n]}{2}$ and the labelings are edge indicator functions $g_a,g_b$), $\Delta(g_a,g_b)$ is the size of the symmetric difference of the edge sets, $|E(g_a)\,\triangle\,E(g_b)|$. The quantity $\delta$ is central to both our converse and our achievability arguments (as well as the achievability proof of Pedarsani and Grossglauser [2]). When $g_a$ and $g_b$ are graphs on $[n]$ and $\pi$ is a permutation of $[n]$, $\delta(l(\pi);g_a,g_b)$ is the difference in matching quality between the permutation $\pi$ and the identity permutation.
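These statistics are easy to compute for small graphs. The following sketch (our own naming, not from the paper) evaluates $\mu$, $\Delta$, and $\delta$ on a toy example:

```python
from itertools import combinations

# Graphs are dicts mapping frozenset vertex pairs to {0, 1} labels.
def mu(fa, fb, pairs):
    """Type mu(fa, fb): counts of label pairs (i, j) over the vertex pairs."""
    m = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    for e in pairs:
        m[(fa[e], fb[e])] += 1
    return m

def hamming(fa, fb, pairs):
    """Hamming distance Delta(fa, fb)."""
    return sum(1 for e in pairs if fa[e] != fb[e])

def delta(pi, fa, fb, pairs):
    """delta(l(pi); fa, fb) = (Delta(fa o l(pi), fb) - Delta(fa, fb)) / 2."""
    fa_pi = {e: fa[frozenset(pi[v] for v in e)] for e in pairs}
    return (hamming(fa_pi, fb, pairs) - hamming(fa, fb, pairs)) / 2

# Toy example: fa is the path 0-1-2, fb the single edge {0,1}, pi swaps 0 and 2.
pairs = [frozenset(p) for p in combinations(range(3), 2)]
fa = {e: 0 for e in pairs}
fb = {e: 0 for e in pairs}
fa[frozenset((0, 1))] = fa[frozenset((1, 2))] = 1
fb[frozenset((0, 1))] = 1
pi = {0: 2, 1: 1, 2: 0}
```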
###### Lemma II.1.
Let $\tau$ be a permutation of $S$. Then there is some integer $i$ such that

$$\mu(f_a\circ\tau,f_b)-\mu(f_a,f_b)=\begin{pmatrix}-i&i\\i&-i\end{pmatrix}$$

and $i=\delta(\tau;f_a,f_b)$.
###### Proof.
Let $k=\mu(f_a,f_b)$ and $k'=\mu(f_a\circ\tau,f_b)$. Let $\mathbf{1}$ be the vector of all ones. We have $\mathbf{1}^{\top}k=\mathbf{1}^{\top}k'$ because both vectors give the distribution of symbols in $f_b$. Similarly $k\mathbf{1}=k'\mathbf{1}$. The matrix $k'-k$ has integer entries, so it must have the claimed form for some value of $i$. Finally,

$$i=\frac{1}{2}\big((k'_{01}+k'_{10})-(k_{01}+k_{10})\big)=\frac{1}{2}\big(\Delta(f_a\circ\tau,f_b)-\Delta(f_a,f_b)\big)=\delta(\tau;f_a,f_b).\qquad\square$$
### Ii-C MAP estimation
The maximum a posteriori (MAP) estimator for this problem can be derived as follows. In the following lemma we will be careful to distinguish graph-valued random variables from fixed graphs: we name the former with upper-case letters and the latter with lower-case.
###### Lemma II.2.
Let $(G_a,G_b)\sim ER(n,p)$, let $\Pi$ be a uniformly random permutation of $[n]$ independent of $(G_a,G_b)$, and let $G_c$ be the anonymization of $G_a$ by $\Pi$, i.e. $G_a=G_c\circ l(\Pi)$. Then
$$P[\Pi=\pi\mid(G_c,G_b)=(g_c,g_b)]\propto\left(\frac{p_{10}p_{01}}{p_{11}p_{00}}\right)^{i}$$

where $i=\frac{1}{2}\Delta(g_c\circ l(\pi),g_b)$.
###### Proof.
$$\begin{aligned}P[\Pi=\pi\mid(G_c,G_b)=(g_c,g_b)]&\overset{(a)}{\propto}P[\Pi=\pi,(G_c,G_b)=(g_c,g_b)]\\&\overset{(b)}{=}P[\Pi=\pi,(G_a,G_b)=(g_c\circ l(\pi),g_b)]\\&\overset{(c)}{\propto}P[(G_a,G_b)=(g_c\circ l(\pi),g_b)]\\&\overset{(d)}{=}p^{\mu(g_c\circ l(\pi),g_b)}\\&\overset{(e)}{\propto}p^{\mu(g_c\circ l(\pi),g_b)-\mu(g_c,g_b)}\left(\frac{p_{01}p_{10}}{p_{00}p_{11}}\right)^{\frac{1}{2}\Delta(g_c,g_b)}\\&\overset{(f)}{=}\left(\frac{p_{01}p_{10}}{p_{00}p_{11}}\right)^{\frac{1}{2}\Delta(g_c\circ l(\pi),g_b)}\end{aligned}$$

where in (a) we multiply by the constant $P[(G_c,G_b)=(g_c,g_b)]$, in (b) we apply the relationship $G_a=G_c\circ l(\Pi)$, and in (c) we use the independence of $\Pi$ from $(G_a,G_b)$ and the uniformity of $\Pi$. In (d) we apply the definition of the distribution of $(G_a,G_b)$, in (e) we divide by a constant that does not depend on $\pi$, and in (f) we use Lemma II.1. ∎
Thus the MAP estimator is

$$\hat{\Pi}=\operatorname*{argmax}_{\hat{\pi}}P[\Pi=\hat{\pi}\mid(G_c,G_b)=(g_c,g_b)]=\operatorname*{argmin}_{\hat{\pi}}\Delta(G_c\circ l(\hat{\pi}),G_b).$$

The permutation $\Pi$ achieves an alignment score of $\Delta(G_a,G_b)$. Although $\Pi$ is unknown to the estimator, we can analyze its success by considering the distribution of

$$\Delta(G_a\circ l(\Pi^{-1}\circ\hat{\pi}),G_b)-\Delta(G_a,G_b)=\delta(l(\Pi^{-1}\circ\hat{\pi});G_a,G_b).$$
Let

$$Q=\{\pi\in S_n:\Delta(G_a\circ l(\pi),G_b)\le\Delta(G_a,G_b)\}=\{\pi\in S_n:\delta(l(\pi);G_a,G_b)\le 0\},$$

the set of permutations that give alignments of $G_a$ and $G_b$ that are at least as good as the true permutation. The identity permutation achieves $\delta(l(\mathrm{id});G_a,G_b)=0$, so it is always in $Q$ by definition.
Let $P_s$ be the probability of success of the MAP estimator conditioned on the generation of the graph pair $(G_a,G_b)$. When the identity is not a minimizer of $\delta(l(\pi);G_a,G_b)$, i.e. there is some $\pi$ such that $\delta(l(\pi);G_a,G_b)<0$, $P_s=0$. When the identity achieves the minimum, $P_s\le 1/|Q|$.

The converse argument uses the fact that the overall probability of success is at most $\mathbb{E}[1/|Q|]$.

The achievability arguments use the fact that the overall probability of error is at most

$$P[|Q|>1]\le\mathbb{E}[|Q|-1]$$

or equivalently

$$P\Big[\bigvee_{\pi\neq\mathrm{id}}(\pi\in Q)\Big]\le\sum_{\pi\neq\mathrm{id}}P[\pi\in Q].$$

Here we have applied linearity of expectation to the indicators of the events $\{\pi\in Q\}$, or equivalently the union bound on these events.
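For intuition, the MAP rule can be explored by brute force on tiny instances (the search is over all $n!$ permutations, so this is illustrative only; the example graph and names are ours, not the paper's):

```python
from itertools import combinations, permutations

# Brute-force MAP sketch: pick the permutation minimising Delta(g_c o l(pi), g_b).
def map_align(gc, gb, n):
    pairs = [frozenset(p) for p in combinations(range(n), 2)]
    def score(pi):
        return sum(1 for e in pairs if gc[frozenset(pi[v] for v in e)] != gb[e])
    return min(permutations(range(n)), key=score)

# Demo with perfect correlation (g_a = g_b): g_b is a path on 4 vertices and
# g_c is its anonymisation under a known permutation sigma.
n = 4
pairs = [frozenset(p) for p in combinations(range(n), 2)]
gb = {e: 0 for e in pairs}
for e in [(0, 1), (1, 2), (2, 3)]:
    gb[frozenset(e)] = 1
sigma = [2, 0, 3, 1]
gc = {frozenset(sigma[v] for v in e): gb[e] for e in pairs}
pi_hat = map_align(gc, gb, n)
# pi_hat aligns g_c perfectly with g_b; note the path's nontrivial automorphism
# means the zero-error minimiser is not unique.
```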
### Ii-D Cycle decomposition and the nontrivial region
The cycle decompositions of the permutations $\pi$ and $l(\pi)$ play a crucial role in the distribution of $\delta(l(\pi);G_a,G_b)$. For the vertex set $[n]$ and a fixed $\pi$, define $\tilde{S}$, the nontrivial region of the graph, to be the vertex pairs that are not fixed points of $l(\pi)$, i.e. $\tilde{S}=\{e\in\binom{[n]}{2}:l(\pi)(e)\neq e\}$. We will mark quantities and random variables associated with the nontrivial region with tildes.

Recall that $n$ is the number of vertices and let $\tilde{n}$ be the number of vertices that are not fixed points of $\pi$. Let $t=\binom{n}{2}$, let $\tilde{t}=|\tilde{S}|$, and let $c_k$ be the number of vertex pairs in cycles of $l(\pi)$ of length $k$. Then $\tilde{t}=\sum_{k\ge 2}c_k$.

The expected value of $\delta(\tau;G_a,G_b)$ depends only on the size of the nontrivial region.

$\mathbb{E}[\delta(\tau;G_a,G_b)]=\tilde{t}\,(p_{00}p_{11}-p_{01}p_{10})$.
###### Proof.
Let $\tilde{S}$ be the nontrivial region of $\tau$. Using the alternative expression for $\delta(\tau;G_a,G_b)$ from Lemma II.1, we have

$$\begin{aligned}\mathbb{E}[\delta(\tau;G_a,G_b)]&=\mathbb{E}[\mu(G_a,G_b)_{11}-\mu(G_a\circ\tau,G_b)_{11}]\\&=\sum_{e\in\binom{[n]}{2}}P(G_a(e)=G_b(e)=1)-P(G_a(\tau(e))=G_b(e)=1)\\&=\sum_{e\in\tilde{S}}p_{11}-(p_{10}+p_{11})(p_{01}+p_{11})\\&=\tilde{t}\,(p_{00}p_{11}-p_{01}p_{10}).\qquad\square\end{aligned}$$
Let $M=\mu(G_a,G_b)_{11}$, which is the number of edges in the intersection graph $G_a\wedge G_b$. Let $\tilde{M}$ be the number of edges of $G_a\wedge G_b$ in the nontrivial region, i.e. $\tilde{M}=\sum_{e\in\tilde{S}}\mathbb{1}\{G_a(e)=G_b(e)=1\}$. When $(G_a,G_b)\sim ER(n,p)$, the events $\{G_a(e)=G_b(e)=1\}$ for $e\in\binom{[n]}{2}$ are independent and occur with probability $p_{11}$, so both $M$ and $\tilde{M}$ have binomial distributions. Conditioned on $M$, $\tilde{M}$ has a hypergeometric distribution.

We use the following notation for binomial and hypergeometric distributions. Each of these probability distributions models drawing from a pool of $n$ items, of which $b$ are marked. If we take $a$ samples without replacement, the number of marked items drawn has the hypergeometric distribution $\mathrm{Hyp}(a,b,n)$. If we take $a$ samples with replacement, the number of marked items drawn has a binomial distribution $\mathrm{Bin}(a,b,n)$. Thus

$$M\sim\mathrm{Bin}(t,p_{11},1)\tag{5}$$
$$\tilde{M}\sim\mathrm{Bin}(\tilde{t},p_{11},1)\tag{6}$$
$$\tilde{M}\mid M=m\sim\mathrm{Hyp}(\tilde{t},m,t).\tag{7}$$
Hypergeometric and binomial random variables have the following generating functions:
$$\mathrm{Hyp}(a,b,n;z)\triangleq\frac{[x^ay^b](1+x+y+xyz)^n}{[x^ay^b](1+x)^n(1+y)^n}\qquad\mathrm{Bin}(a,b,n;z)\triangleq\left(1-\frac{b}{n}+\frac{b}{n}z\right)^a$$

Observe that $\mathrm{Hyp}(a,b,n;z)$ is symmetric in $a$ and $b$. Additionally $\mathrm{Hyp}(a,b,n;z)=z^a\,\mathrm{Hyp}(a,n-b,n;z^{-1})$ because the number of marked balls that are drawn is equal to the number of draws minus the number of unmarked balls drawn. For the same reason, $\mathrm{Bin}(a,b,n;z)=z^a\,\mathrm{Bin}(a,n-b,n;z^{-1})$.
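The symmetry of the hypergeometric distribution in $a$ and $b$ (drawing $a$ samples with $b$ marked gives the same law as drawing $b$ samples with $a$ marked) can be checked numerically from the standard pmf. A sketch under our own naming:

```python
from math import comb

# P[K = k] for K ~ Hyp(a, b, n): a draws without replacement from a pool of n
# items, b of which are marked.
def hyp_pmf(k, a, b, n):
    if k < 0 or k > min(a, b) or a - k > n - b:
        return 0.0
    return comb(b, k) * comb(n - b, a - k) / comb(n, a)

n, a, b = 20, 6, 9
# Symmetry check: swapping the roles of a and b leaves the pmf unchanged.
assert all(abs(hyp_pmf(k, a, b, n) - hyp_pmf(k, b, a, n)) < 1e-12
           for k in range(min(a, b) + 1))
```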
### II-E Proof outline
Both of our achievability proofs have the following broad structure.
• Use a union bound over the non-identity permutations and estimate , where is fixed and are random.
• For a fixed , examine the cycle decomposition and relate to , where has the same number of fixed points as but all nontrivial cycles have length two. This is summarized in Theorem 4.
• Use large deviations methods to bound the lower tail of . This is done in Theorem 5.
Our first achievability result, Theorem 3, comes from applying Theorem 5 in a direct way. This requires no additional assumptions on but does not match the converse bound when is sparse. If has no edges, every permutation is in and the union bound is extremely loose. When , the probability that has no edges is
$$(1-p_{11})^t \approx \exp(-t\,p_{11}) = \exp\!\left(-\tfrac{c}{2}(n-1)\log n\right).$$
When , this probability is larger than , so the union bound on the error probability becomes larger than one.
To overcome this, in the proof of Theorem 6 we condition on before applying Theorem 5. It is more difficult to apply Theorem 5 to . In particular, , the number of edges of the intersection graph in nontrivial cycles of , now has a hypergeometric distribution rather than a binomial distribution . One way to analyze the tail of a hypergeometric random variable is to look at the binomial random variable with the same mean and number of samples. This idea is formalized in Lemma III.3. Moving from to would effectively undo our conditioning on . For the most important values of and the typical values of , we have . Thus we exploit the symmetry of the hypergeometric distribution () and replace with , which is more concentrated than .
### II-F Related Work
In the perfect correlation limit, i.e. , we have . In this case, the size of the automorphism group of determines whether it is possible to recover the permutation applied to . This is because the composition of an automorphism with the true matching gives another matching with no errors. Whenever the automorphism group of is nontrivial, it is impossible to exactly recover the permutation with high probability. We will return to this idea in Section V in the proof of the converse part of Theorem 1. Wright established that for , the automorphism group of is trivial with probability and that for , it is nontrivial with probability [4]. In fact, he proved a somewhat stronger statement about the growth rate of the number of unlabeled graphs that implies this fact about automorphism groups. Thus our Theorem 1 and Theorem 2 extend Wright’s results. Bollobás later provided a more combinatorial proof of this automorphism group threshold function [5]. The methods we use are closer to those of Bollobás.
Some practical recovery algorithms start by attempting to locate a few seeds. From these seeds, the graph matching is iteratively extended. Algorithms for the latter step can scale efficiently. Narayanan and Shmatikov were the first to apply this method [1]. They evaluated their performance empirically on graphs derived from social networks.
More recently, there has been some work evaluating the performance of this type of algorithm on graph inputs from random models. Yartseva and Grossglauser examined a simple percolation algorithm for growing a graph matching [6]. They find a sharp threshold for the number of initial seeds required to ensure that the final graph matching includes every vertex. The intersection of the graphs and plays an important role in the analysis of this algorithm. Kazemi et al. extended this work and investigated the performance of a more sophisticated percolation algorithm [7].
If the networks being aligned correspond to two distinct online services, it is unlikely that the user populations of the services are identical. Kazemi et al. investigate alignment recovery of correlated graphs on overlapping but not identical vertex sets [8]. They determine that the information-theoretic penalty for imperfect overlap between the vertex sets of and is relatively mild. This regime is an important test of the robustness of alignment procedures.
### II-G Subsampling model
Pedarsani and Grossglauser [2] introduced the following generative model for correlated Erdős-Rényi (ER) graphs. Essentially the same model was used in [9, 10]. Let be an ER graph on with edge probability . Let and be independent random subgraphs of such that each edge of appears in and in with probabilities and respectively. We will refer to this as the subsampling model. The and parameters control the level of correlation between the graphs. This model is equivalent to with
$$p_{11} = r s_a s_b \qquad p_{10} = r s_a(1-s_b) \qquad p_{01} = r(1-s_a)s_b \qquad p_{00} = 1-r(s_a+s_b-s_a s_b).$$
Solving for from the above definitions, we obtain
$$r = \frac{(p_{10}+p_{11})(p_{01}+p_{11})}{p_{11}} = p_{11}+p_{10}+p_{01}+\frac{p_{10}\,p_{01}}{p_{11}}. \qquad (8)$$
Observe that when and are independent, we have . This reveals that the subsampling model is capable of representing any correlated Erdős-Rényi distribution with nonnegative correlation. From (8), we see that is equivalent to and .
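The equivalence can be sanity-checked by simulation. The sketch below (illustrative only; names are my own) samples one potential edge of the parent Erdős-Rényi graph together with its two subsampled copies, and estimates $p_{11}$, which should come out close to $r s_a s_b$:

```python
import random

def sample_edge_pair(r, sa, sb, rng):
    # one potential edge: the parent ER graph G includes it with prob. r;
    # G_a and G_b each keep a parent edge independently with probs s_a, s_b
    parent = rng.random() < r
    ga = parent and rng.random() < sa
    gb = parent and rng.random() < sb
    return ga, gb

rng = random.Random(1)
r, sa, sb = 0.5, 0.8, 0.6
trials = 200000
p11_hat = sum(sample_edge_pair(r, sa, sb, rng) == (True, True)
              for _ in range(trials)) / trials
# p11_hat should be close to r * sa * sb = 0.24
```

The other three probabilities ($p_{10}$, $p_{01}$, $p_{00}$) can be estimated the same way by counting the other outcome pairs.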
## III Graphs and cyclic sequences
Let be a matrix of formal variables indexed by :
$$w = \begin{pmatrix} w_{00} & w_{01} \\ w_{10} & w_{11} \end{pmatrix}$$
and let be a single formal variable. For a set and a permutation , define the generating function
$$A_{S,\tau}(w,z) = \sum_{g\in[2]^S}\sum_{h\in[2]^S} z^{\delta(\tau;g,h)}\, w^{\mu(g,h)}$$
When and , captures the joint distribution of and :
$$P[\mu(G_a,G_b)=k,\ \delta(\tau;G_a,G_b)=i] = [w^k z^i]\, A_{S,\tau}(p\odot w,\, z)$$
where $p\odot w$ is the element-wise product of the matrices $p$ and $w$. This follows immediately from the definition of the distribution.
### III-A Generating functions
###### Definition 2.
Let be a finite index set and let be a permutation consisting of a single cycle of length . A cyclic -ary sequence is a pair where .
Let be a permutation of with a single cycle. For any such choices of , the sets of cyclic sequences obtained are isomorphic, so we can define the generating function
$$a_\ell(w,z) = A_{[\ell],\sigma}(w,z).$$
###### Lemma III.1.
Let $\tau$ be a permutation. Let $t_\ell$ be the number of cycles of length $\ell$ in $\tau$. Then $\sum_\ell \ell\, t_\ell = |S|$ and
$$A_{S,\tau}(w,z) = \prod_{\ell\in\mathbb{N}} a_\ell(w,z)^{t_\ell}.$$
###### Proof.
Let , so
$$\delta(\tau;g,h) = \sum_{e\in S}\gamma\big(g(e),\, g(\tau(e)),\, h(e)\big).$$
Let be the partition of from the cycle decomposition of . Then we have an alternate expression for :
$$\begin{aligned} A_{S,\tau}(w,z) &= \sum_{g\in[2]^S}\sum_{h\in[2]^S}\prod_{e\in S} z^{\gamma(g(e),g(\tau(e)),h(e))}\, w_{g(e),h(e)} \\ &= \sum_{g\in[2]^S}\sum_{h\in[2]^S}\prod_{S_i\in T}\prod_{e\in S_i} z^{\gamma(g(e),g(\tau(e)),h(e))}\, w_{g(e),h(e)} \\ &\stackrel{(a)}{=} \prod_{S_i\in T}\sum_{g\in[2]^{S_i}}\sum_{h\in[2]^{S_i}}\prod_{e\in S_i} z^{\gamma(g(e),g(\tau(e)),h(e))}\, w_{g(e),h(e)} \\ &= \prod_{S_i\in T} a_{|S_i|}(w,z). \end{aligned}$$
In (a), we use the fact that $e \in S_i$ implies $\tau(e) \in S_i$. ∎
For $\ell = 1$, the generating function is very simple. There are 4 possible pairs of cyclic -ary sequences of length one. A cycle of length one in a permutation is a fixed point, so these cyclic sequences are unchanged by the application of $\tau$, and $\delta$ is zero for each of them. Thus $a_1(w,z) = w_{00}+w_{01}+w_{10}+w_{11}$.
We define
$$\tilde A_{S,\tau}(w,z) = \prod_{\ell\ge 2} a_\ell(w,z)^{t_\ell}.$$
Just as $A_{S,\tau}$ captures the joint distribution of $\mu$ and $\delta$, $\tilde A_{S,\tau}$ captures the joint distribution over the nontrivial region. Because $z$ does not appear in $a_1(w,z)$, we have
$$[z^i]A_{S,\tau}(w,z) = a_1(w,z)^{t_1}\,[z^i]\tilde A_{S,\tau}(w,z).$$
This implies that and are conditionally independent given .
### III-B Nontrivial cycles
For $\ell = 2$, there are 16 possible pairs of sequences. There are only 4 pairs for which $\delta \neq 0$: the cases where $g$ and $h$ are each either $(0,1)$ or $(1,0)$. In the two cases where $g = h$, $\delta = 1$ and the monomial is $w_{00}w_{11}$. In the two cases where $g \neq h$, $\delta = -1$ and the monomial is $w_{01}w_{10}$. Thus
$$a_2(w,z) = (w_{00}+w_{01}+w_{10}+w_{11})^2 + 2w_{00}w_{11}(z-1) + 2w_{01}w_{10}(z^{-1}-1). \qquad (9)$$
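As a quick consistency check (my own, not from the paper), formula (9) can be verified by brute force: enumerate all 16 pairs of length-two binary sequences and compare against the closed form, assuming $\delta(\tau;g,h)=\mu(g,h)_{11}-\mu(g\circ\tau,h)_{11}$ as in Lemma II.1. Exact rational arithmetic avoids any floating-point doubt:

```python
from fractions import Fraction

def a2_bruteforce(w, z):
    # Enumerate all 16 pairs (g, h) of binary sequences of length two.
    # tau swaps the two positions, and (per Lemma II.1)
    # delta(tau; g, h) = mu(g, h)_11 - mu(g o tau, h)_11.
    total = 0
    for g1 in (0, 1):
        for g2 in (0, 1):
            for h1 in (0, 1):
                for h2 in (0, 1):
                    delta = (g1 & h1) + (g2 & h2) - (g2 & h1) - (g1 & h2)
                    total += w[g1][h1] * w[g2][h2] * z ** delta
    return total

def a2_closed_form(w, z):
    # formula (9)
    s = w[0][0] + w[0][1] + w[1][0] + w[1][1]
    return (s ** 2
            + 2 * w[0][0] * w[1][1] * (z - 1)
            + 2 * w[0][1] * w[1][0] * (z ** -1 - 1))

# arbitrary exact rational test values (hypothetical)
w = [[Fraction(1, 2), Fraction(1, 3)],
     [Fraction(1, 5), Fraction(1, 7)]]
z = Fraction(3, 2)
```

With these inputs the two expressions agree exactly, term for term.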
The following theorem relates longer cycles to cycles of length two.
###### Theorem 4.
Let $w \ge 0$ entrywise and $z \in (0,1]$. Then for all $\ell \ge 2$, $a_\ell(w,z) \le a_2(w,z)^{\ell/2}$.
The proofs of Theorem 4 and several intermediate lemmas are in Appendix A.
### III-C Tail bounds from generating functions
The following lemma is a well known inequality that we will apply in the proof of Theorem 5 and in several other places.
###### Lemma III.2.
For a generating function $g(z) = \sum_i g_i z^i$ where $g_i \ge 0$ for all $i$ and a real number $z_* \in (0,1]$,
$$\sum_{i\le j}[z^i]\,g(z) \le z_*^{-j}\, g(z_*).$$
###### Proof.
$$\sum_{i\le j}[z^i]g(z) = \sum_{i\le j} g_i \le \sum_i g_i z_*^{i-j} = z_*^{-j}\, g(z_*). \qquad\qed$$
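As a sanity check (my own addition, not part of the paper), the inequality of Lemma III.2 can be verified numerically for a sample polynomial with nonnegative coefficients over several choices of $j$ and $z_*$:

```python
# g(z) = 2 + 3z + z^2 + 4z^3 : nonnegative coefficients
coeffs = [2.0, 3.0, 1.0, 4.0]

def g(z):
    return sum(c * z ** i for i, c in enumerate(coeffs))

# the lower-tail sum of coefficients up to degree j is bounded by
# zstar^(-j) * g(zstar) for any zstar in (0, 1]
ok = all(
    sum(coeffs[:j + 1]) <= zstar ** (-j) * g(zstar) + 1e-12
    for j in range(len(coeffs))
    for zstar in (0.2, 0.5, 0.9, 1.0)
)
```

The bound holds because each retained coefficient is multiplied by $z_*^{i-j} \ge 1$ when $i \le j$, and the discarded higher-degree terms are nonnegative.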
###### Theorem 5.
For ,
$$\sum_{i\le 0}[z^i]\tilde A_{S,\tau}(w,z) \le \Big((w_{00}+w_{01}+w_{10}+w_{11})^2 - 2\big(\sqrt{w_{00}w_{11}}-\sqrt{w_{01}w_{10}}\big)^2\Big)^{\tilde t/2}.$$
###### Proof.
For all $z_* \in (0,1]$, we have
$$\begin{aligned} \sum_{i\le 0}[z^i]\tilde A_{S,\tau}(w,z) &= \sum_{i\le 0}[z^i]\prod_{\ell\ge 2} a_\ell(w,z)^{t_\ell} \\ &\stackrel{(a)}{\le} \prod_{\ell\ge 2} a_\ell(w,z_*)^{t_\ell} \\ &\stackrel{(b)}{\le} a_2(w,z_*)^{\tilde t/2} \qquad (10) \end{aligned}$$
where (a) follows from Lemma III.2 and (b) follows from Theorem 4 and $\sum_{\ell\ge 2}\ell\, t_\ell = \tilde t$.
From (10), the left-hand side is at most $(u^2+2v)^{\tilde t/2}$, where
$$u = w_{00}+w_{01}+w_{10}+w_{11} \qquad\qquad v = w_{00}w_{11}(z_*-1)+w_{01}w_{10}(z_*^{-1}-1).$$
We would like to choose $z_*$ to minimize $v$. Substituting the optimal choice, $z_* = \sqrt{w_{01}w_{10}/(w_{00}w_{11})}$, into the expression for $v$, we obtain
$$\begin{aligned} \min_{z_*} v &= \min_{z_*}\; w_{00}w_{11}z_* - w_{00}w_{11} - w_{01}w_{10} + w_{01}w_{10}z_*^{-1} \\ &= 2\sqrt{w_{00}w_{11}w_{01}w_{10}} - w_{00}w_{11} - w_{01}w_{10} \\ &= -\big(\sqrt{w_{00}w_{11}}-\sqrt{w_{01}w_{10}}\big)^2. \qquad (11) \end{aligned}$$
Combining this with (10) gives the claimed bound. ∎
### III-D Hypergeometric and binomial generating functions
Chvátal provided an upper bound on the tail probabilities of a hypergeometric random variable [11]. The following lemma is essentially a translation of that bound into the language of generating functions.
###### Lemma III.3.
For all , and , and all ,
$$\mathrm{Hyp}(a,b,n;z) \le \mathrm{Bin}(a,b,n;z).$$
###### Proof.
First, we have
$$\begin{aligned} \binom{n}{a}\binom{n}{b}\mathrm{Hyp}(a,b,n;z) &= [x^a y^b](1+x+y+xyz)^n \\ &= [x^a y^b]\big((1+x)(1+y)+xy(z-1)\big)^n \\ &= \sum_\ell \binom{n}{\ell}[x^a y^b]\big((1+x)(1+y)\big)^{n-\ell}\big(xy(z-1)\big)^\ell \\ &= \sum_\ell \binom{n}{\ell}[x^{a-\ell}](1+x)^{n-\ell}\,[y^{b-\ell}](1+y)^{n-\ell}\,(z-1)^\ell \\ &= \sum_\ell \binom{n}{\ell}\binom{n-\ell}{a-\ell}\binom{n-\ell}{b-\ell}(z-1)^\ell \\ &= \binom{n}{a}\binom{n}{b}\sum_\ell \frac{\binom{a}{\ell}\binom{b}{\ell}}{\binom{n}{\ell}}(z-1)^\ell. \end{aligned}$$
Observe that
$$\frac{\binom{b}{\ell}}{\binom{n}{\ell}}\cdot\frac{n^\ell}{b^\ell} = \prod_{i=0}^{\ell-1}\frac{(b-i)\,n}{(n-i)\,b}$$
https://pypi.org/project/pao/ | Project description
# PAO Overview
PAO is a Python-based package for Adversarial Optimization. PAO extends the modeling concepts in [Pyomo](https://github.com/Pyomo/pyomo) to enable the expression and solution of multi-level optimization problems. The goal of this package is to provide a general modeling and analysis capability, and application exemplars serve to illustrate PAO’s general capabilities.
This package was derived from the capabilities in pyomo.bilevel and pyomo.dualize, which are now deprecated.
http://math.stackexchange.com/questions/252576/simplification-of-set-formulas-a-bc-b-ac-cap-a-varnothing | Simplification of set formulas: $[ (A - B)^c - (B - A)^c ]\cap A = \varnothing$
Using the set properties, I have to demonstrate that
$[ (A - B)^\mathsf{c} - (B - A)^\mathsf{c}] \cap A = \emptyset.$
So far I've seen some logic properties, but never applied to sets. Could you guys help me?
What means $\neg A$? Does this means $\{ x : x \notin A \}?$ – user29999 Dec 6 '12 at 21:22
Complement of a set, like in here: basic-mathematics.com/complement-of-a-set.html I don't know how to put that with LaTeX. – Axel Prieto Dec 6 '12 at 21:23
I don't know this symbol ¬ for sets. I think it is used in logic statements. – Babak S. Dec 6 '12 at 21:24
I've just edited. Is it clearer now? – Axel Prieto Dec 6 '12 at 21:28
The definition of $A - B$ is $\{x \mid x \in A \text{ and } x \notin B\}$, which can be rewritten as $A \cap B^c$. This fact allows us to write \begin{align*} [(A - B)^\mathsf{c} - (B - A)^\mathsf{c}] \cap A &= [(A \cap B^\mathsf{c})^\mathsf{c} - (B \cap A^\mathsf{c})^\mathsf{c}] \cap A\\ &= [(A \cap B^\mathsf{c})^\mathsf{c} \cap (B \cap A^\mathsf{c})] \cap A\\ &\subseteq A^\mathsf{c} \cap A\\ &= \emptyset. \end{align*}
Thank you very much. – Axel Prieto Dec 6 '12 at 21:57
\begin{align}&x\in((A\setminus B)^{\mathsf c}\,\setminus\, (B\setminus A)^{\mathsf c})\cap A\\\implies&x\in(A\setminus B)^{\mathsf c}\land x\notin(B\setminus A)^{\mathsf c}\land x\in A\\ \implies&x\notin(A\setminus B)\land x\in (B\setminus A)\land x\in A\\ \implies&x\in (B\setminus A)\land x\in A\\ \implies&x\in B\land x\notin A\land x\in A\\ \implies& x\notin A\land x\in A\\ \implies &\perp \end{align}
Thank you very much. – Axel Prieto Dec 6 '12 at 22:00
+1 This is also (almost) the way I would do it: just expand the definitions to go from sets to the logic level, and use the rules of logic to simplify. Four things I would do differently: I would split up the third step; I would use $\;\iff\;$ instead of $\;\implies\;$ throughout; I would add comments explaining each step, as in the Dijkstra-Feijen proof format; and I would write $\;\text{false}\;$ instead of $\;\perp\;$. – Marnix Klooster Aug 4 '14 at 11:54
The key fact with such set equations is that they can usually be translated into corresponding logical formulae.
For example:
• $x \in A \cap B$ if and only if $x \in A \wedge x \in B$.
• $x \in A^\mathsf{c}$ if and only if $\neg (x \in A)$
• $x \in A - B$ if and only if $x\in A \wedge x \not\in B$.
Applying these rules repeatedly ought to transform your statement about sets into a logical statement. Membership of the left hand side corresponds to a statement which can't possibly be true, so the set has no members. Two sets are equal exactly when they have the same members, so that means it is the empty set.
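Those translation rules can also be checked mechanically. Here is a small Python brute force (my own addition, not part of the answer) that verifies the identity for every pair of subsets of a six-element universe, writing $X^\mathsf{c}$ as $U - X$:

```python
U = set(range(6))

def subsets(universe):
    # yield every subset of `universe` via bitmasks
    items = sorted(universe)
    for mask in range(1 << len(items)):
        yield {x for i, x in enumerate(items) if (mask >> i) & 1}

# check [ (A - B)^c - (B - A)^c ] ∩ A == ∅ for all 2^6 * 2^6 pairs (A, B)
empty_everywhere = all(
    ((U - (A - B)) - (U - (B - A))) & A == set()
    for A in subsets(U)
    for B in subsets(U)
)
```

A finite check is not a proof, of course, but it is a quick way to catch a mistranslation before writing one out.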
Thank you very much. – Axel Prieto Dec 6 '12 at 22:03
In my notation $A^\mathsf{c} = \{ x : x \notin A\}$. Since $X - Y = X \cap Y^\mathsf{c}$, we must prove that
$$[ (A \cap B^\mathsf{c} )^\mathsf{c} \cap (B \cap A^\mathsf{c}) ] \cap A = \emptyset.$$ Notice that $$(A \cap B^\mathsf{c})^\mathsf{c} = A^\mathsf{c} \cup B.$$ Then \begin{eqnarray} [(A^\mathsf{c} \cup B) \cap (B \cap A^\mathsf{c})] \cap A & = & (A^\mathsf{c} \cup B) \cap B \cap (A^\mathsf{c} \cap A) \\ &=& (A^\mathsf{c} \cup B) \cap B \cap \emptyset \\ &=& \emptyset. \end{eqnarray}
Thank you very much. – Axel Prieto Dec 6 '12 at 22:01
You're welcome. – user29999 Dec 6 '12 at 22:04 | 2016-05-29 23:23:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9993946552276611, "perplexity": 2211.082668897594}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049282275.31/warc/CC-MAIN-20160524002122-00014-ip-10-185-217-139.ec2.internal.warc.gz"} |
http://jamesgregson.blogspot.ca/2012_08_01_archive.html | ## Saturday, August 25, 2012
### Python Constructive Solid Geometry Update
In earlier posts I've alluded to a Python Constructive Solid Geometry (CSG) library that I was working on to allow parametric design. You can do this with OpenSCAD, which is great software, but in my opinion the language leaves a bit to be desired. I wanted a solution that worked with existing languages, specifically C, C++ and Python, so that the results could be integrated easily with other software such as remeshers or FEA packages.
Of course writing a robust CSG library is a daunting undertaking. Fortunately there are existing libraries such as CGAL and Carve that handle this. In my opinion CGAL is the more robust of the two; however, it currently has compilation issues under OS-X and is substantially slower than Carve.
Regardless, neither have the interface that I'm looking for, like the ability to directly load meshes, affine transformations and minimal-code ways to perform boolean operations on meshes. So I started work on a C++ wrapper for Carve that would give me the interface I wanted, with a wrapper for Python.
I'm pleased to say that it's coming along quite well and able to produce parts that are non-trivial. The interface is considerably cleaned up from before and I'm now starting to use it for projects. Here are two examples from (another) CNC project:
The code that generated these models is here:
from pyCSG import *

def inch_to_mm( inches ):
    return inches*25.4

def mm_to_inch( mm ):
    return mm/25.4

def hole_compensation( diameter ):
    return diameter+1.0

mounting_hole_radius = 0.5*hole_compensation( inch_to_mm( 5.0/16.0 ) )

def axis_end():
    obj = box( inch_to_mm( 4.5 ), inch_to_mm( 1.75 ), inch_to_mm( 0.75 ), True )
    screw_hole = cylinder( mounting_hole_radius, inch_to_mm( 3.0 ), True, 20 )
    shaft_hole = cylinder( 0.5*hole_compensation( inch_to_mm( 0.5 ) ), inch_to_mm(1.0), True, 20 ).rotate( 90.0, 0.0, 0.0 )
    center_hole = cylinder( 0.5*hole_compensation( inch_to_mm( 1.0 ) ), inch_to_mm(1.0), True, 20 ).rotate( 90.0, 0.0, 0.0 )
    mount_hole = cylinder( 0.5*hole_compensation( 4.0 ), inch_to_mm(1.0), True, 10 ).rotate( 90.0, 0.0, 0.0 )
    notch = box( inch_to_mm( 1.5 ), 2.0, inch_to_mm( 1.0 ), True )
    obj = obj - ( shaft_hole.translate( inch_to_mm( 1.5 ), 0.0, 0.0 ) + shaft_hole.translate( inch_to_mm( -1.5 ), 0.0, 0.0 ) )
    obj = obj - ( notch.translate( inch_to_mm( 2.25 ), 0.0, 0.0 ) + notch.translate( inch_to_mm( -2.25 ), 0.0, 0.0 ) )
    obj = obj - ( center_hole + mount_hole.translate( -15.5, -15.5, 0.0 ) + mount_hole.translate( 15.5, -15.5, 0.0 ) + mount_hole.translate( 15.5, 15.5, 0.0 ) + mount_hole.translate( -15.5, 15.5, 0.0 ) )
    obj = obj - ( screw_hole.translate( inch_to_mm(1.0), 0.0, 0.0 ) + screw_hole.translate( inch_to_mm(-1.0), 0.0, 0.0 ) )
    obj = obj - ( screw_hole.translate( inch_to_mm(2.0), 0.0, 0.0 ) + screw_hole.translate( inch_to_mm(-2.0), 0.0, 0.0 ) )
    return obj

def carriage():
    obj = box( inch_to_mm( 5 ), inch_to_mm( 5 ), inch_to_mm( 1.0 ), True )
    shaft_hole = cylinder( inch_to_mm( 0.75 )/2.0, inch_to_mm( 5.5 ), True )
    screw_hole = cylinder( inch_to_mm( 0.5 )/2.0, inch_to_mm( 5.5 ), True )
    leadnut_hole = cylinder( inch_to_mm(0.25)*0.5, inch_to_mm( 1.0 ), True )
    leadnut_access = box( inch_to_mm( 1.5 ), inch_to_mm( 3.0/8.0 ), inch_to_mm( 1.0 ), True )
    mhole = cylinder( mounting_hole_radius, inch_to_mm( 2.0 ), True ).rotate( 90.0, 0.0, 0.0 )
    obj = obj - ( shaft_hole.translate( inch_to_mm( 1.5 ), 0.0, 0.0 ) + shaft_hole.translate( inch_to_mm( -1.5 ), 0.0, 0.0 ) + screw_hole )
    obj = obj - ( leadnut_hole.translate( inch_to_mm( 0.5 ), inch_to_mm( -2.5 ), 0.0 ) + leadnut_hole.translate( inch_to_mm( -0.5 ), inch_to_mm( -2.5 ), 0.0 ) + leadnut_access.translate( 0.0, inch_to_mm( -2.0 ), inch_to_mm( 0.2 ) ) )
    for i in range( -2, 3 ):
        for j in range( -2, 3 ):
            if i != 0 and j != 0:
                obj = obj - ( mhole.translate( inch_to_mm( 1.0*i ), inch_to_mm( 1.0*j ), 0.0 ) )
    return obj

axis_end().save_mesh("axis_end.obj")
carriage().save_mesh("carriage.obj")
As you can see, this approach gives lots of flexibility in terms of manipulating and patterning objects using custom code. The examples above are not great examples of parametric design, but I'm sure you can imagine the sort of stuff that can be done.
I still have to perform a bit of cleanup outside the library to get printable models. I just run each model through MeshLab's planar edge-flipping optimizer. This is a pretty simple step and I plan to integrate it into the library shortly, along with the ability to extrude custom profiles and build surfaces of revolution. When these features are finished I plan to release the code for the library and Python wrapper.
## Saturday, August 11, 2012
### Mint Tin Parallax Protoboard
I recently bought a Parallax Propeller Protoboard. This seems like a nice little processor: 160 MIPS, 32-bit, 32 IO pins, all at $25. The ability to have eight cores is also nice. It seems like it would make a good embedded CNC controller, particularly since it is now supported by GCC, so the ongoing GCode interpreter that I'm occasionally working on should be portable to this platform while offering extended capabilities like a pendant or DRO. But it doesn't come with a case, so I decided to build it into an Altoids tin, a la the Mintduino.
Nothing difficult, a few drilled holes and one filed opening for the USB cable. I couldn't fit the cable into the tin with the Protoboard unfortunately, so an elastic keeps it together. Inside I soldered on male headers just below the female headers. I left out one pin, which allows an IDE cable to be used and provides a polarized connection for other projects. I will probably design a small board breaking out the IDE connector to screw terminals sometime in the future to be able to easily interface with the Protoboard.
The IDE cable is cut down to just the first two connectors, allowing it to be rolled up into the Altoids tin when not in use. The board itself is supported on anti-static foam, which raises it a bit; a layer of thin cardboard insulates the board from the foam, since the foam is slightly conductive and could short everything otherwise.
## Friday, August 10, 2012
### Optical Tomography Setup
As noted in a previous post, my Stochastic Tomography paper was accepted to SIGGRAPH 2012. Last Tuesday I was in Los Angeles for the conference to present the paper, including presenting the synopsis in the conference 'fast-forward' to an appallingly large audience. The photo below shows the seating, but during the actual event it was standing room only.
A bit nerve-wracking to say the least. However it went well and after presenting my main talk to a MUCH smaller crowd, I'd like to post some photos of the setup that we used for the paper. I should point out that this project actually did not contribute much to the capture setup, this was previously in place from work done by Borislav Trifonov, Michael Krimerman, Brad Atcheson, Derek Bradley and a slew of others. My work on this paper focused on the algorithms primarily, but I thought people might be interested in a quick overview of the tomography capture apparatus.
The goal of the paper was to build 3D animated models of mixing liquids from multiview video. To accomplish this, we used an array of roughly 16 Hi-Def Sony camcorders arranged in a semi-circle around our capture volume to record video streams of the two fluids mixing.
You can see the cameras in the photo above, all focused on the capture volume which is inside the glass cylinder. Each of these records a video of the mixing process, producing 16 streams of video that look more or less like the photo shown below:
You can see one of the cameras peeking out at the right side of the frame. The cameras are controlled by an Arduino-based box that talks to each camcorder using the SONY LANC protocol. This is an unpublished protocol used by SONY editing consoles; however, it has been reverse-engineered by others to allow people to control SONY equipment. We implemented this protocol on an Arduino, which allows us to start all the cameras recording, turn them on and off as arrays, switch them between photo and video mode and so on. Unfortunately we can't easily set the exposure levels or transfer files to and from the device; instead we have to painstakingly do this by hand through the on-camera menus, which is error-prone and time-consuming.
The two fluids we use are water, for the clear liquid, and Fluoroscein-Sodium fluorescing dye for the mixing liquid. This fluorescent dye is available in a powder that is soluble in water, which allows us to perform several types of capture. The image above is dye powder dropped onto the surface of the water, this mixes with the water and is slightly denser, forming the Rayleigh-Taylor mixing process you see in that shot. We can also pre-mix the dye powder and simple pour or inject it into the domain, this was the process used for the following two captures that were used in the paper.
This shows an unstable vortex propagating downwards, leaving a complex wake. I recommend watching in high-def (720p or 1080p). The next is alcohol mixed with the dye powder, injected into the cylinder from the bottom. Since alcohol is less dense than water, it rises under buoyancy, mixing as it goes.
In the video above you can see a laminar to turbulent transition as well as lots of complex eddies that form as part of the mixing process.
The captures are illuminated with a set of white LED concert strobe panels. These panels serve two purposes. First, they let us get LOTS of light into the scene in a controlled fashion. Second, we actually use strobed illumination at about 30Hz to optically synchronize the cameras and remove rolling-shutter shear effects.
All captures start in darkness so we can tell the time offset from the start of the video to the first frame where there is significant illumination. In fact we can do better than alignment to a single frame, since with the rolling shutter used by these cameras, we can actually determine the first scanline that is exposed. Using a 30Hz illumination pattern, we can also determine the exposure setting of the camera by looking for the last scanline before the light goes off again.
We then have a rolling shutter compensation program that scans through each video and reassembles a new video from the exposed and dark scanlines. The result is a set of videos that are optically synchronized and that have minimal shearing introduced.
This gives us a set of input data, however we also need to perform some geometric calibration of the scene in order to know from what angle each video was recorded and to be able to obtain the ray inside of the capture volume that corresponds to every observed pixel.
To do this, we use an optical calibration library called CalTag that detects self-identifying marker patterns similar to QR codes in images. We print a calibration target on overheard transparencies and mount this pattern to a 3D printed calibration jig that is placed in the glass cylinder.
This jig fits tightly in the cylinder and is registered with a set of detents that fit into recesses in a registration plate that is glued to the inside of the capture cylinder. The marker pattern that you see in the photo above is also registered to a set of registration tabs. We have a calibrated pattern on the front of the target as shown above, but also on the back.
When a camera takes an image of this jig after placing it into the capture domain (filled with water), an image similar to the following is obtained, although generally with far less blur due to condensation.
CalTag can then give us the corners of the marker patterns, which can be interpolated to associate with every image pixel, the corresponding 3D point on the calibration target that it 'sees'. We then rotate the target 180 degrees to face away from the camera and take a picture of an identical and carefully aligned target on the back side of the jig, giving another 3D point for each pixel. Connecting the points gives a ray in 3D space inside the cylinder, without having to account for any optical interactions between the interior liquid and cylinder.
We do this for every camera, by mounting the capture domain on a rotation stage, which is again controlled by an Arduino. An automated calibration procedure rotates the stage and triggers each camera to image the front calibration plane, then rotates an additional 180 degrees to repeat the process. The whole mess is controlled by a python script using pySerial, including the strobes, the rotation stage and the embedded camera controller.
This gives us the needed calibration data to express our scene as a tomographic inverse problem. Here we look for the scene content that would reproduce the measurements (videos) we obtained, given a physical model for the scene. In this capture case, the scene is simply emissivity adding up along a ray-path, so we get a linear inverse problem, that we solve using our new Stochastic Tomography algorithm. The result is volumetric 3D fields that you can animate, inspect and slice through and re-render however you like, as seen below in the submission video.
Stochastic Tomography and its Applications in 3D Imaging of Mixing Fluids from Hullin et al. on Vimeo.
### 3-Axis CNC Controller
In a previous post, I showed the single axis stepper driver boards that I sent out to be made by OSH Park. These seemed to be electrically fine, although it was tricky to properly test without the connectors and other components. After a quick order from DigiKey, I had the bits I needed.
I'm pleased to say that these work as expected, allowing the microstep mode to be chosen by DIP switch, breaking out all inputs and outputs with screw terminals, and providing the connections needed for high and low limit switches. I've assembled three of these and screwed them to a piece of MDF to serve as the basis for a 3-Axis CNC controller board based on an Arduino Uno and GRBL.
The start of this board is shown above. Before it's complete I need to add the power connections for the high-power side, along with the limit switches. I have the GRBL firmware flashed onto the Arduino and have connected a few motors to this setup and everything works great!
Shown below is a closeup of the boards. The screw terminals in the front connect the limit switches for the high and low endstops. These have pulldown resistors and are connected to two of the screw-terminal positions on the logic side of the board (the two un-wired stops). The remaining pulldown resistors are connected to the microstep selection pins, which are set by the red DIP switch. On the right side of the board are the motor connections (the 4-position terminal block) and the motor power connections (the two position terminals). All connections are with 3.5mm terminal blocks, which actually meet the power requirements for multi-amp 24V operation. They also allow multiple connections to be made which allows the daisy-chain type wiring shown above. The low-power side also has these connections since even though they are not needed it's nice to only need one screwdriver to do the wiring.
I'm quite pleased with my first attempt at getting a board made. It worked first try, the quality of the boards is excellent and I think these drivers can form the basis of a good many other projects. | 2017-02-26 16:52:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44094231724739075, "perplexity": 2865.822816920585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172018.95/warc/CC-MAIN-20170219104612-00203-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://gmatclub.com/forum/if-p-is-a-prime-number-greater-than-3-find-the-remainder-when-p-315856.html?sort_by_oldest=true
# If p is a prime number greater than 3, find the remainder when p^2 + 17 is divided by 12
Math Expert
Joined: 02 Sep 2009
Posts: 62353
If p is a prime number greater than 3, find the remainder when p^2 + 17 is divided by 12
06 Feb 2020, 08:23
If p is a prime number greater than 3, find the remainder when p^2 + 17 is divided by 12.
A. 8
B. 7
C. 6
D. 1
E. 0
GMAT Tutor
Joined: 16 Sep 2014
Posts: 420
Location: United States
GMAT 1: 780 Q51 V45
GRE 1: Q170 V167
If p is a prime number greater than 3, find the remainder when p^2 + 17 is divided by 12
06 Feb 2020, 08:45
Bunuel wrote:
If p is a prime number greater than 3, find the remainder when p^2 + 17 is divided by 12.
A. 8
B. 7
C. 6
D. 1
E. 0
Test any number. Let p = 5 for example. $$p^2 + 17 = 42$$. 42 divided by 12 has a remainder of 6.
Ans: C
For the proof of why the remainder is always 6:
Let n be any prime number greater than 3. We can write n as $$n = E+ 1$$ where E is even.
Then $$n^2 = (E + 1)^2 = E^2 + 2E + 1$$.
The claim is $$E^2 + 2E$$ will always have a factor of 12. We need to prove it has a factor of both 4 and 3.
The factor of 4 is easier to prove. Since $$E^2 + 2E = E*(2+E)$$, both E and (2+E) have a factor of 2 so the product must have a factor of 4.
For the factor of 3, recall E + 1 is a prime number greater than 3 so it cannot be divisible by 3. Thus either E or E + 2 must be divisible by 3. Both are factors of the product, therefore in any case $$E^2 + 2E$$ must be divisible by 3.
Finally, we proved $$E^2 + 2E$$ is divisible by 12, so $$n^2 = E^2 + 2E + 1$$ divided by 12 leaves a remainder of 1, and $$n^2 + 17$$ leaves $$1 + 17 = 18 \equiv 6 \pmod{12}$$, i.e. a remainder of 6.
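Both the pick-a-number approach and the proof above can be sanity-checked by brute force — a quick sketch (helper names are my own):

```python
def is_prime(n):
    # trial division; fine for the small range tested here
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# every prime p with 3 < p < 1000 should leave the same remainder, 6
remainders = {(p * p + 17) % 12 for p in range(4, 1000) if is_prime(p)}
print(remainders)  # {6}
```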
Intern
Joined: 16 Sep 2019
Posts: 2
Re: If p is a prime number greater than 3, find the remainder when p^2 + 17 is divided by 12
08 Feb 2020, 09:17
(P^2 + 17)/12, remainder R = ?
P^2 + 17 = 12a + R
P^2 + 17 - 12a = R
P is a prime number greater than 3, and 12a can be any multiple of 12.
Let's use a trial-and-error method.
If P = 5 and a = 1, 2, 3:
25 + 17 - 12 = 30 (substituting a = 1)
25 + 17 - 24 = 18 (substituting a = 2)
25 + 17 - 36 = 6 (substituting a = 3)
Since 6 < 12, the remainder is 6.
Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 9903
Location: United States (CA)
Re: If p is a prime number greater than 3, find the remainder when p^2 + 17 is divided by 12
08 Feb 2020, 12:30
Bunuel wrote:
If p is a prime number greater than 3, find the remainder when p^2 + 17 is divided by 12.
A. 8
B. 7
C. 6
D. 1
E. 0
We see that p can be any prime number greater than 3, so it could be 5, 7, 11, 13, etc. Let’s choose the smallest possible value for p.
If p = 5, we have:
25 + 17 = 42
42/12 = 3 remainder 6. The answer is C.
Manager
Joined: 14 Sep 2019
Posts: 220
Re: If p is a prime number greater than 3, find the remainder when p^2 + 17 is divided by 12
09 Feb 2020, 02:15
Prime numbers greater than 3: 5, 7, 11, 13, …
Suppose P = 5.
P^2 + 17 = 5^2 + 17 = 25 + 17 = 42
Now 42/12 = 3 + 6/12, so the remainder is 6. (C)
http://zhzyx.me/2018/02/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0-%E5%90%B4%E6%81%A9%E8%BE%BE-%E8%AF%BE%E7%A8%8B%E7%AC%94%E8%AE%B0-%E5%85%AD-%E5%BC%82%E5%B8%B8%E6%A3%80%E6%B5%8B/

# Anomaly detection
• Problem Motivation
• Review of Gaussian distribution
• Density estimation
• Evaluation
• Multivariate Gaussian distribution
# Example
• Fraud detection:
1. $x^{(i)}$ = features of user $i$’s activities
2. Model $p(x)$ from data
3. Identify unusual users by checking which have $p(x)\lt\epsilon$
• Manufacturing
• Monitoring computers in a data center
# Gaussian (Normal) distribution
$$X\sim N(\mu,\sigma^2)$$
$$P(x;\mu,\sigma^2)=\frac{1}{\sqrt{2\pi}\sigma}\exp{\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]}$$
$$\mu=\frac{1}{m}\sum^m_{i=1}x^{(i)}, \sigma^2=\frac{1}{m}\sum_{i=1}^m\left(x^{(i)}-\mu\right)^2$$
# Algorithm
## Density estimation
Training set: $\{x^{(1)}, \dots, x^{(m)}\}$
Each example is $x\in\mathbb{R}^n$
$$P(x)=\prod_{j=1}^nP(x_j;\mu_j,\sigma_j^2)$$
## Anomaly detection algorithm
1. Choose features $x_i$ that you think might be indicative of anomalous examples.
2. Fit parameters $\mu_1, \dots, \mu_n, \sigma_1^2, \dots, \sigma_n^2$
• $\mu_j=\frac{1}{m}\sum^m_{i=1}x_j^{(i)}$
• $\sigma_j^2=\frac{1}{m}\sum_{i=1}^m\left(x_j^{(i)}-\mu_j\right)^2$ ($\frac{1}{m-1}$ is also fine, but in machine learning this formula is usually used)
3. Given new example $x$, compute $p(x)$:
• $P(x)=\prod_{j=1}^nP(x_j;\mu_j,\sigma_j^2)=\prod_{j=1}^n\frac{1}{\sqrt{2\pi}\sigma_j}\exp{\left[-\frac{(x_j-\mu_j)^2}{2\sigma_j^2}\right]}$
• Anomaly if $P(x)\lt\epsilon$
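The fitting and scoring steps above reduce to a few lines of NumPy — a minimal sketch (the data and names are my own, not from the notes):

```python
import numpy as np

def fit(X):
    # X: (m, n) training matrix; fit one Gaussian per feature (step 2)
    mu = X.mean(axis=0)
    sigma2 = X.var(axis=0)  # 1/m version, as in the notes
    return mu, sigma2

def p(x, mu, sigma2):
    # product of per-feature Gaussian densities (step 3)
    coef = 1.0 / np.sqrt(2 * np.pi * sigma2)
    return np.prod(coef * np.exp(-(x - mu) ** 2 / (2 * sigma2)))

rng = np.random.default_rng(0)
X = rng.normal([2.0, 5.0], [1.0, 0.5], size=(1000, 2))  # normal operating data
mu, sigma2 = fit(X)
eps = 1e-4
print(p(np.array([2.1, 4.9]), mu, sigma2) < eps)  # False: typical point
print(p(np.array([9.0, 0.0]), mu, sigma2) < eps)  # True: flagged as anomaly
```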
# Evaluation
Assume there is some labeled data of anomalous and non-anomalous examples ($y=0$ for normal, $y=1$ for anomalous).
Training set: $x^{(i)}, i=1,2,\dots,m_{train}$
Cross validation set: $(x_{cv}^{(i)}, y_{cv}^{(i)}), i=1,2,\dots,m_{cv}$
Test set: $(x_{test}^{(i)}, y_{test}^{(i)}), i=1,2,\dots,m_{test}$
Possible evaluation metrics:
• True positive, false positive, false negative, true negative
• Precision/Recall
• $F_1$-score
Do not use classification accuracy, because the dataset is very skewed.
Can also use cross validation set to choose parameter $\epsilon$
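Choosing $\epsilon$ on the cross-validation set is just a sweep over candidate thresholds, keeping the one with the best $F_1$ — a sketch (`p_cv` and `y_cv` are made-up densities and labels):

```python
import numpy as np

def select_epsilon(p_cv, y_cv):
    # sweep thresholds between min and max density, keep the best-F1 one
    best_eps, best_f1 = 0.0, 0.0
    for eps in np.linspace(p_cv.min(), p_cv.max(), 1000):
        pred = (p_cv < eps).astype(int)  # 1 = flagged as anomaly
        tp = np.sum((pred == 1) & (y_cv == 1))
        fp = np.sum((pred == 1) & (y_cv == 0))
        fn = np.sum((pred == 0) & (y_cv == 1))
        if tp == 0:
            continue  # F1 undefined/zero without true positives
        prec, rec = tp / (tp + fp), tp / (tp + fn)
        f1 = 2 * prec * rec / (prec + rec)
        if f1 > best_f1:
            best_eps, best_f1 = eps, f1
    return best_eps, best_f1

# toy CV set: the two anomalies have much smaller density
p_cv = np.array([0.30, 0.25, 0.28, 0.31, 1e-6, 2e-6])
y_cv = np.array([0, 0, 0, 0, 1, 1])
eps, f1 = select_epsilon(p_cv, y_cv)
print(f1)  # 1.0 on this cleanly separable toy data
```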
# Anomaly detection vs. Supervised learning
• Anomaly detection
• Very small number of positive examples
• Large number of negative examples
• Many different “types” of anomalies. Hard for any algorithm to learn from positive examples what anomalies look like
• Future anomalies may look nothing like any of the anomalous examples we’ve seen so far
• Supervised learning
• Large number of positive and negative examples.
• Enough positive examples for algorithm to get a sense of what positive examples are like, future positive examples likely to be similar to ones in training set
# Feature selection
## Non-Gaussian features
• try $\log(x)$ or $\exp(x)$ or $x^a, a\in\mathbb{R}$
• $\log(x+c)$ is also ok
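One quick numeric check of whether such a transform helps is the sample skewness, which should move toward 0 as the feature becomes more Gaussian — a sketch on made-up log-normal data:

```python
import numpy as np

def skewness(x):
    # third standardized moment; roughly 0 for a symmetric, Gaussian-like feature
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 3)

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # heavily right-skewed feature
print(skewness(x) > 2)                   # True: strongly skewed before transform
print(abs(skewness(np.log(x))) < 0.2)    # True: near-symmetric after log transform
```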
## Error analysis for anomaly detection
Want:
• $P(x)$ large for normal examples $x$
• $P(x)$ small for anomalous examples $x$
Most common problem:
• $P(x)$ is comparable (both large) for normal and anomalous examples.
Try to find a new feature that distinguishes the anomalous examples. Choose features that might take on unusually large or small values in the event of an anomaly.
# Multivariate Gaussian distribution
Model $P(x)$ all in one go ($x\in\mathbb{R}^n$, $\mu\in\mathbb{R}^n$, $\Sigma\in\mathbb{R}^{n\times n}$ the covariance matrix).
$$P(x;\mu,\Sigma)=\frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\exp{\left[-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right]}$$
$$\mu=\frac{1}{m}\sum_{i=1}^mx^{(i)}, \Sigma=\frac{1}{m}\sum_{i=1}^m(x^{(i)}-\mu)(x^{(i)}-\mu)^T$$
## Algorithm
1. Fit model $P(x)$ by setting $\mu, \Sigma$
2. Given a new example $x$, compute $P(x)$; flag an anomaly if $P(x)\lt\epsilon$
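The two steps translate directly — a sketch (names and data are my own); note how a point that breaks the correlation between features is flagged even though each coordinate is individually unremarkable:

```python
import numpy as np

def fit_multivariate(X):
    # X: (m, n); fit the mean vector and full covariance (1/m version)
    mu = X.mean(axis=0)
    Xc = X - mu
    Sigma = Xc.T @ Xc / X.shape[0]
    return mu, Sigma

def p_multivariate(x, mu, Sigma):
    # multivariate Gaussian density at x
    n = mu.size
    d = x - mu
    norm = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) / norm

# correlated features: x2 closely tracks x1
rng = np.random.default_rng(2)
x1 = rng.normal(0, 1, 2000)
X = np.column_stack([x1, x1 + rng.normal(0, 0.3, 2000)])
mu, Sigma = fit_multivariate(X)
eps = 1e-3
print(p_multivariate(np.array([4.0, 1.0]), mu, Sigma) < eps)  # True: breaks the correlation
print(p_multivariate(np.array([1.0, 1.0]), mu, Sigma) < eps)  # False: on the correlation line
```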
## Original model vs. Multivariate Gaussian distribution
• Original model: $P(x)=\prod_{j=1}^nP(x_j;\mu_j,\sigma_j^2)$
• Manually create features to capture anomalies where $x_1, x_2$ take unusual combinations of values
• Computationally cheaper (works well with large number of features $n$)
• Ok even if $m$ (training set size) is small
• Multivariate Gaussian distribution: $P(x;\mu,\Sigma)=\frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\exp{\left[-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right]}$ (the original model is a special case of the multivariate Gaussian model when $\Sigma$ is a diagonal matrix)
• Automatically captures correlations between features
• Computationally more expensive (need to compute $\Sigma$)
• Must have $m\gt n$ (strictly, in mathematics; in practice use it only when $m\geq 10n$), or else $\Sigma$ is non-invertible
https://unblockedgamesforschool.com/maritime-issues-essay/latent-heat-of-water-j-kg-essay.php

# Latent Heat of Water (J kg⁻¹)
A substance can exist in three states: solid, liquid and gas. A change in state from solid to liquid on heating is called fusion or melting.

When heat is supplied to a solid, it starts melting once it reaches the required temperature, and the temperature then remains constant until the whole solid has melted. This constant temperature is called the melting point of the solid.

In the same way, on cooling a liquid, it changes into a solid as it loses a sufficient amount of heat; this is called solidification or freezing. During solidification the temperature again remains constant, at the freezing point. The freezing point is different for different liquids. Pure crystalline substances have the same melting and freezing point, but for non-crystalline solids the melting and freezing points are not the same. For example: butter has a melting point of 30 °C and a freezing point of 22 °C.
Latent heat of a substance is the amount of heat required to change the state of a unit mass of the substance from solid to liquid or from liquid to gas without a change in temperature.

During heating of a substance, the heat is used up in changing the state, the temperature remaining the same until the state of the whole substance has changed. This energy goes into moving the molecules further apart, since work must be done against the forces of attraction between them to change the state of the substance. That is why the heat energy used in a change of state is called latent heat, i.e. heat which is hidden from the thermometer.
Latent heat is represented by L and is measured in J kg⁻¹ or cal g⁻¹. There are two types of latent heat: the latent heat of fusion and the latent heat of vaporization.
## Latent Heat of Fusion

Latent heat of fusion is defined as the amount of heat required to change a unit mass of a substance from the solid state to the liquid state at a constant temperature.

The latent heat of fusion of ice is defined as the amount of heat required to change 1 g of ice at 0 °C to water at the same temperature. For ice, its value is 3.36 × 10⁵ J kg⁻¹ in SI units or 80 cal g⁻¹ in the CGS system.
## Latent Heat of Vaporization

When a liquid is heated, it starts boiling and changes into the vapour state. During the process, the temperature of the liquid remains constant and the heat energy supplied is used only to change its state, i.e. from the liquid state to the vapour state; this heat is called the latent heat of vaporization.

Latent heat of vaporization of a liquid is defined as the amount of heat required to change a unit mass of the liquid at its boiling point into vapour at the same temperature.

Thus, the latent heat of steam is defined as the amount of heat required to change a unit mass of water at 100 °C into steam at the same temperature. The latent heat of vaporization of water is 2.26 × 10⁶ J kg⁻¹ or 540 cal g⁻¹.
When heat is supplied to a piece of ice of mass 1 g at −10 °C, the change in temperature is as shown in the figure. The temperature of the ice first increases from −10 °C to 0 °C, and the ice then melts completely when 80 calories of heat have been supplied to it, the temperature remaining constant at 0 °C. To convert the water at 100 °C into steam, a further 540 cal of heat energy is supplied.
### Determination of the Latent Heat of Fusion of Ice by the Method of Mixtures

Following are the steps to determine the latent heat of fusion of ice:

1. A calorimeter with a stirrer is taken and weighed.
2. Some water is poured into the calorimeter, then the mass and temperature of the calorimeter, stirrer and water are taken.
3. A small piece of ice is taken and its temperature noted, say 0 °C.
4. The piece of ice is dropped into the calorimeter.
5. The mixture is stirred until all the ice has melted, then the final temperature and the mass of the mixture are taken.
Let,
$m_1$ = mass of calorimeter and stirrer
$m_2$ = mass of calorimeter, stirrer and water
$m_w = m_2 - m_1$ = mass of water only
$m_3$ = mass of calorimeter, stirrer, water and ice
$m = m_3 - m_2$ = mass of ice only
$\theta_1$ = initial temperature of the water, stirrer and calorimeter
$\theta$ = final temperature of the mixture
$s_c$ = specific heat capacity of the calorimeter
$s_w$ = specific heat capacity of water
$l_f$ = latent heat of fusion = ?
Heat lost by the water and calorimeter, $$= m_w s_w(\theta_1 - \theta) + m_1 s_c(\theta_1 - \theta)$$
$$=(m_w s_w + m_1 s_c)(\theta_1 - \theta)$$
Heat gained by the ice as it changes from ice at 0 °C to water at 0 °C and then from water at 0 °C to water at $\theta$ °C $$= m l_f + m s_w(\theta - 0)$$
From the principle of calorimetry,
\begin{align*} \text{Heat gained} &= \text{Heat lost} \\ m l_f + m s_w(\theta - 0) &= m_w s_w(\theta_1 - \theta) + m_1 s_c(\theta_1 - \theta) \\ m l_f &= (m_w s_w + m_1 s_c)(\theta_1 - \theta) - m s_w \theta \\ \therefore l_f &= \frac{(m_w s_w + m_1 s_c)(\theta_1 - \theta) - m s_w \theta}{m} \end{align*}
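Plugging a set of readings into the final expression $l_f = \frac{(m_w s_w + m_1 s_c)(\theta_1 - \theta) - m s_w \theta}{m}$ gives a quick worked example — a sketch in which all numbers, including the calorimeter's specific heat, are invented for illustration:

```python
# invented sample readings from the experiment above (CGS units)
m1 = 50.0              # g, calorimeter + stirrer
m2 = 150.0             # g, calorimeter + stirrer + water
m3 = 160.0             # g, after the ice has been added and melted
m_w = m2 - m1          # mass of water only
m_ice = m3 - m2        # mass of ice only
s_c = 0.1              # cal/(g °C), assumed for the calorimeter metal
s_w = 1.0              # cal/(g °C), water
theta1, theta = 24.0, 15.0   # °C, initial water temp and final mixture temp

l_f = ((m_w * s_w + m1 * s_c) * (theta1 - theta) - m_ice * s_w * theta) / m_ice
print(l_f)  # 79.5 cal/g, close to the accepted 80 cal/g
```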
Latent Heating with Vaporization about Drinking water Paper
### Student’S Designate. University. Tutorials Computer code. Instructor’S Company name.
100% plagiarism free
Sources and citations are provided
## Related essays
Final Exam Past Paper Essay
The actual latent raise the temperature of regarding blend in ice cubes is actually identified as all the volume connected with warm up needed to make sure you alter 1 h involving snowing conditions coming from 0to the water in typically the same exact temperature. For the rocks, the country's benefit can be 3.36 10 5 n Kg -1 throughout SI-units as well as 80cal grams -1 through CGS-system.
The Harmonious Multi-Racial Country Essay
In order to get competent that will have an understanding of which will the particular specified latent heat up involving a compound is definitely the volume from electricity required to alter your talk about connected with a particular kilogram about the actual material with the help of no improve through temperature; To help you delight in your significant difference relating to this exact latent heating from combination together with the actual ohydrates pecific latent warm for vaporisation.
Might vs. Right Essay
Latent Temperature connected with Vaporization regarding Fluids Cardstock Concern through calibrating effort was ±0. 01s in accordance to the particular stopwatch however even though calculating muscle size most people possess so that you can first of all glimpse on your moment around stopwatch together with next any large around the actual electronic digital debt plus considering mankind simply cannot respond automatically the application is usually determined that will possibly be ±1s.
Latent Warm up regarding Vaporization involving Normal water Document Doubt in approximately time was initially ±0. 01s based to help the stopwatch but at the same time calibrating size a person own that will first search with the particular time throughout stopwatch along with after that typically the majority through the actual digital sense of balance and also simply because humans are not able to take action quickly the idea is without a doubt predicted for you to become ±1s.
Essay topic ideas
To help end up being able to help you figure out in which your specified latent heating regarding an important substance is definitely the actual number in electrical power important for you to switch typically the talk about from you kilogram from the actual substance together with not any shift with temperature; Towards treasure all the impact in between the actual specified latent heating associated with blend along with this vertisements pecific latent high temperature from vaporisation.
Middle Ages Essay
Latent Heat up of Vaporization involving Standard water Newspaper Skepticism in weighing point in time had been ±0. 01s matching to help the stopwatch however whilst measuring mass an individual experience to be able to 1st take a look by the particular moment on stopwatch not to mention then the actual huge through that electric powered harmony together with for the reason that mankind are unable to reply promptly this can be believed to always be ±1s.
Hawthorne Essay
Physics regarding Idiot's stories which will latent high temperature is usually all the temperatures wanted to reason a shift during period for kilogram.[1] Any legal requirements regarding efficiency involving electricity states that energy source will be regulations generated not destroyed.[2] As a substitute, electric power will be transferred; this earliest Regularions about Thermodynamics is without a doubt inspected.
Anne carson the glass essay
Latent Warm up from H2o. In cases where aqueous h2o on 100 to k is normally improved towards water, that high temperature applied (the latent temperatures from vaporization) is 540 calorie consumption just for just about every gram about standard water. If perhaps sauna for 100 to k is actually developed in to waters by 100 i f 540 consumption of calories regarding all gram from sauna must often be subtracted. In case ice cubes in 0 a h is without a doubt developed straight into nectar h2o during 0 to t
Phil 2200. Ethics Essay
Distinct latent temperatures is certainly the place that sum regarding strength (in joules) desired to make sure you modification a state from 1Kg for any chemical is without a doubt generally known as her targeted latent about warmth. Most people could work out a quantity about energy source required using this equation: energy source (J) = unique latent heat (J/Kg) Times mass (Kg) MaterialSpecific warm up regarding combination (J/Kg)Specific latent high temperature with vaporisation (J/Kg) Water3340002260000.
Teens Essays
Special latent warm up is actually exactly where all the level connected with vigor (in joules) wanted to help modify the actual point out about 1Kg about a compound is usually known as the nation's unique latent connected with temperature. Everyone can certainly determine all the volume regarding power called for working with a equation: vitality (J) = precise latent temperature (J/Kg) By large (Kg) MaterialSpecific high temperature of combination (J/Kg)Specific latent warm for vaporisation (J/Kg) Water3340002260000.
Higher growth Essay
That latent temperature involving combination connected with ice-cubes is usually identified because your amount from warm demanded for you to adjust 1 g connected with winter snow storms via 0to water for your exact same heat. Intended for cool, it has the price is actually 3.36 10 5 t Kg -1 with SI-units or maybe 80cal h -1 during CGS-system.
Banana Peel Paper Essay
Feb . 18, 2018 · Latent Heat in Fusion. Fit this interior tumbler along with that stirrer plus incredibly hot the water on the inside a out of doors cup from all the calorimeter, utilizing all the insulating engagement ring. Covers all the calorimeter, in addition to subsequently after a fabulous tiny comes with flushed, beginning creating all the temperature around the particular calorimeter. Subsequently after forty-five mere seconds, increase a couple or even a couple of snowing conditions cubes, to make sure you this within container together with heated water.
Yup Essay
Scar 11, 2017 · All the exact latent heating associated with vaporisation in the water serious from a action is actually 2.40 back button 106 m kg-1. Unique Latent Warm up Example Concerns With Methods Some sort of plastic carrier filled with 0.80 kg of soup at 38°C is normally set towards the actual fridge drawer in some sort of icebox.
Feb . 18, 2018 · Latent Heating about Fusion. Placed your throughout container using all the stirrer together with warm waters in just all the external drink of a calorimeter, applying all the insulation band. Insure all the calorimeter, along with subsequent to an important minute has got approved, start off logging this high temperature during the actual calorimeter. After forty-five a few seconds, add more a couple of and a few snow cubes, for you to typically the inside of cup with popular waters.
The Pros of Apec Essay
Any latent warmth in combination about winter snow storms is without a doubt characterized mainly because that number involving warmth expected so that you can transform 1 g connected with snow because of 0to the water within typically the same exact temps. For the purpose of snowing conditions, it has the price is 3.36 10 5 n Kg -1 through SI-units or 80cal you have g -1 through CGS-system.
Frankendstein Essay
February Teen, 2018 · Latent Warm from Blend. Put typically the inside of drink through the particular stirrer and additionally sizzling hot the water in your out of mug involving the calorimeter, utilizing the insulation call. Go over the calorimeter, and additionally subsequent to a fabulous tiny seems to have handed, start off producing typically the heat range around this calorimeter. Subsequent to forty-five no time, bring several or some its polar environment cubes, that will all the interior glass by using hot drinking water.
Jackson Essay
Founded with some of our try your latent warm up connected with vaporization in normal water is without a doubt 2478630 L/kg, which often might be shut to be able to a meal table value; 2300000 J/kg. By simply this specific we all may well gauge the fact that some of our final result is actually 7. 3% large than your likely family table benefits amount.
Essay on the principle of population
Physics for the purpose of Idiot's reports the fact that latent heating is definitely the actual warmth wanted to be able to produce the improve within cycle for each kilogram.[1] This regularions with efficiency from energy levels says which usually electrical power might be none produced or destroyed.[2] On the other hand, electric power is actually transferred; the very first Regularions connected with Thermodynamics is normally analyzed.
Introduction to Skinneras Psychology Essay
Mar 11, 2017 · a certain latent warm about vaporisation connected with fluids concluded by means of that hobby will be 2.40 x 106 t kg-1. Targeted Latent Warm up Instance Concerns Having Remedies An important vinyl bag formulated with 0.80 kg associated with soup by 38°C is without a doubt position towards a refrigerator vehicle connected with your wine refrigerator.
Latent Heat of Water. When liquid water at 100 °C is converted into steam, the heat added (the latent heat of vaporization) is 540 calories for every gram of water. If steam at 100 °C is changed into water at 100 °C, 540 calories for every gram of steam must be subtracted. If ice at 0 °C is changed into liquid water at 0 °C, 80 calories for every gram of ice must be added.
https://jenspetit.de/2020/09/master_thesis.html

# Improving Efficiency in X-Ray Computed Tomography
2020-09-29
The final project of my studies was the master’s thesis. It has the rather long title “Improving Efficiency in X-Ray Computed Tomography Using Compile-Time Programming and Heterogeneous Computing”.
Specifically, my task involved the open source software elsa which is developed at the computational imaging and inverse problems group at the Technical University of Munich. As I have enjoyed my involvement in Free and Open Source Software before, most notably with MoveIt, I was happy to again contribute to a larger public project.
My work was divided into two interrelated tasks, both concerned with improving efficiency in elsa:

1. implementing Expression Templates for the CPU code path, building on Eigen, and
2. enabling GPU computing by implementing CUDA-based Expression Templates.
Before diving deeper into the programming part, I will give a short primer on computed tomography.
## What is computed tomography?
Due to their high-energy nature, X-rays are able to penetrate objects which are opaque to the human eye. However, they not only traverse the object but also interact with its matter. This interaction is what enables X-ray imaging. As in photography, X-ray imaging captures differences in modulation, leading to results like the famous first radiograph of Röntgen's wife's hand shown in the figure below. The bones of the hand exhibit stronger absorption than the surrounding soft tissue.
The main drawback of basic X-ray imaging is its projective nature: the measured intensity corresponds to the accumulated absorption after the ray has completely traversed the object. Therefore, all depth information is lost. Computed tomography overcomes this limitation by taking multiple measurements of the same object at different angles. In practice, this is realized with a rotating X-ray source and detector setup, as shown in the picture.
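The accumulated absorption can be stated precisely with the Beer–Lambert law (a standard formulation I am adding here; the post itself does not spell it out). A ray entering with intensity $$I_0$$ and traversing the object along a line $$L$$ is measured with intensity

$$I = I_0 \, e^{-\int_L \mu(s) \, ds},$$

where $$\mu$$ is the absorption coefficient. Taking the negative logarithm, $$-\ln(I/I_0) = \int_L \mu(s) \, ds$$, turns every detector pixel into a line integral of $$\mu$$ — exactly the quantity the reconstruction has to invert.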
As one can imagine, combining those multiple measurements into a single 3D model is challenging. It is an inverse problem: one wants to reconstruct the causes (the absorption coefficients in the object) from the effects (the pixelwise intensity measurements). The inverse problem of X-ray CT is moreover ill-posed, as the solution might not always be unique and small measurement errors can cause large deviations in the solution.
After discretization, the problem to solve can be expressed as the simple linear equation $$A x = y$$, where $$y$$ is the vector of measurements and $$x$$ is the linearized vector of absorption coefficients of the object. The matrix $$A$$ is called the system matrix and incorporates all the information about the geometry of the source as well as the detector.

At first this seems like a very easy problem: simply compute $$x = A^{-1} y$$ and be done. However, the dimensions of $$x$$, $$y$$ and $$A$$ make this prohibitively expensive. A typical 3D reconstruction volume has $$n = 1024^3$$ elements. All of the projections taken together are of the same order of magnitude, resulting in $$m = 1024^3$$ measurements. This means $$A$$ has $$m \times n = 2^{60}$$ entries, requiring multiple exabytes of memory to store.
Due to the problem size, computational efficiency is a core requirement for computed tomography. The programming technique of Expression Templates (ETs) is one building block for this.
## Expression Templates
ETs are a compile-time programming technique to avoid intermediate results while still allowing an intuitive mathematical syntax. Consequently, all major C++ linear algebra libraries implement them. ETs are best understood through a practical example, assuming a Vector class that wraps a raw array. We can define a custom function for each specific calculation, as shown below:
class Vector {
    ...
};

Vector saxpy(float a, const Vector& X, const Vector& Y)
{
    Vector result(X.size());
    for (size_t i = 0; i < result.size(); ++i)
        result[i] = a * X[i] + Y[i];
    return result;
}
// ...
auto result = saxpy(a, X, Y);
Now, instead of having to write and call a function for each mathematical operation, we want to define operators like $$+$$, $$-$$ and so on for intuitive notation. In C++, this is easily doable with operator overloading:
Vector operator+(Vector const& lhs, Vector const& rhs) {
    Vector result(lhs);
    for (size_t i = 0; i < lhs.size(); ++i) {
        result[i] += rhs[i];
    }
    return result;
}
...
Vector operator*(float lhs, Vector const& rhs);
...
float a;
Vector X, Y, A;
A = a * X + Y;
However, this comes with two deficits: speed and memory. C++ uses eager evaluation, which means that each operator is evaluated on its own as soon as possible. In the saxpy computation, for example, this means that two complete loops over all elements are executed and one intermediate result from a * X is stored.
Overcoming these shortcomings is the purpose of ETs. The idea is to encode the computation to be performed in the type of a lightweight Expression object at compile time. This object can then be evaluated in a single pass, and only when needed.
Let’s look at an example again:
template <typename LeftOperand, typename RightOperand, typename Operation>
class Expression
{ ... };

auto expression = a * X + Y;
What is the type of expression now? It's easiest to start with a * X, which has the type
Expression<float, Vector, Mult>
and consequently the combined term has the type
Expression< Expression<...>,  // a * X
            Vector,
            Plus>
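To make this concrete, here is a minimal self-contained sketch of the technique (my own illustration in the spirit of the description above — the names `Vec`, `Mult` and `Plus` are mine, not elsa's or Bowie Owens' actual code):

```cpp
#include <cstddef>
#include <vector>

// Minimal expression-template sketch. Anything with size() and operator[]
// can appear in an expression; nothing is computed until assignment.
struct Vec {
    std::vector<float> data;

    std::size_t size() const { return data.size(); }
    float operator[](std::size_t i) const { return data[i]; }
    float& operator[](std::size_t i) { return data[i]; }

    // Assigning from an expression evaluates it element-wise in ONE pass,
    // without any intermediate vectors.
    template <typename Expr>
    Vec& operator=(const Expr& e) {
        data.resize(e.size());
        for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
        return *this;
    }
};

// An expression node: it only stores references to its operands;
// the operation itself is encoded in the type.
template <typename L, typename R, typename Op>
struct Expression {
    const L& lhs;
    const R& rhs;
    std::size_t size() const { return rhs.size(); }
    float operator[](std::size_t i) const { return Op::apply(lhs, rhs, i); }
};

struct Mult { // scalar * vector-like
    template <typename L, typename R>
    static float apply(const L& a, const R& x, std::size_t i) { return a * x[i]; }
};

struct Plus { // vector-like + vector-like
    template <typename L, typename R>
    static float apply(const L& x, const R& y, std::size_t i) { return x[i] + y[i]; }
};

template <typename R>
Expression<float, R, Mult> operator*(const float& a, const R& x) { return {a, x}; }

template <typename L, typename R>
Expression<L, R, Plus> operator+(const L& x, const R& y) { return {x, y}; }
```

Within a single full expression such as `A = a * X + Y;` the temporary nodes live long enough to be evaluated; storing such an expression beyond the statement would dangle, which is one reason real libraries are considerably more elaborate than this sketch.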
### Expression Templates for elsa
In elsa, a custom DataContainer class is used which wraps an Eigen matrix. The implementation is heavily based on the talk given by Bowie Owens at CppCon 2019. He presents ETs using modern C++17.
Internally, Eigen already implements ETs. Consequently, the task for my thesis was leveraging the Eigen internal ETs while still maintaining the elsa interface with the DataContainer class.
### Results
After having implemented ETs, it is time to evaluate them. For this, incrementally longer mathematical expressions are computed with elsa. They are:
1. $$x = x * y$$ (a)
2. $$x = x * y + y$$ (b)
3. $$x = x * y + y / z$$ (c)
4. $$x = x * y + y / z + x$$ (d)
The results shown in the figure below clearly indicate the efficiency gains from using ETs compared to plain operator overloading. As the expressions get longer, the difference increases. The results also show that there are only slight performance penalties compared to directly using the Eigen implementation. Great!
## GPU Computing for elsa
The second aspect of my master’s thesis concerned utilizing GPUs. As explained before, computed tomography deals with solving a huge linear system of equations. For this, large arrays of numbers have to be manipulated. This is the perfect task for GPUs, as they can do parallel processing in their hundreds of computational cores.
In elsa, the abstraction with the DataContainer comes in handy: depending on the user and available hardware, it can be created either in main memory (using Eigen) or on the GPU (using CUDA). However, the interface and available operations stay the same. Therefore, the user hardly needs to be concerned with where exactly the computations happen: for him/her, the DataContainer behaves the same. One challenge was that CUDA, as opposed to Eigen, does not provide ETs. Therefore, my task was to first come up with CUDA-enabled ETs and then to integrate them into elsa, as was done with Eigen before.
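The backend idea can be sketched roughly as follows (a hypothetical illustration of the abstraction, not elsa's actual DataContainer; `CpuStorage` and the policy-template design are my assumptions):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical backend-selectable container (NOT elsa's real DataContainer):
// the storage backend is chosen by a policy type, while the user-facing
// interface stays identical.
struct CpuStorage { // main-memory backend (elsa uses Eigen here)
    std::vector<float> buf;
    explicit CpuStorage(std::size_t n) : buf(n) {}
    float& at(std::size_t i) { return buf[i]; }
    std::size_t size() const { return buf.size(); }
};
// A GpuStorage policy would allocate device memory via CUDA instead;
// it is omitted here so the sketch stays runnable without a GPU.

template <typename Storage = CpuStorage>
class DataContainer {
public:
    explicit DataContainer(std::size_t n) : storage_(n) {}
    std::size_t size() const { return storage_.size(); }
    float& operator[](std::size_t i) { return storage_.at(i); }
private:
    Storage storage_; // where the data lives is hidden from the user
};
```

The point of the design is that user code written against `DataContainer` never changes when the backend does.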
Again, I took inspiration from the talk given by Bowie Owens. But in this case, some difficulties came up: first of all, CUDA does not fully support metaprogramming, although this is a crucial feature for ETs. Second, for the ETs to be evaluated on the GPU, they need to be transferred to the GPU. Both steps required a lot of tinkering and a deep understanding of how CUDA device code works. For a more thorough discussion, you can take a look at this Gitlab issue.
### Results
The results show impressive performance for the saxpy (single-precision a times x plus y) computation: on the one hand, it is around 30 times faster than Eigen, and on the other hand, compared to directly using a saxpy kernel, the overhead of using the DataContainer with its flexible and intuitive notation is negligible.
## Conclusion
Both goals I set out to achieve for my thesis were reached: elsa now includes ETs based on the Eigen-internal ETs as well as CUDA support with the ETs implemented by myself. It was a lot of fun working together with my supervisor Tobias and other (PhD) students on elsa. I definitely gained a deeper understanding of C++, especially of the metaprogramming aspects.
If you are interested in further details, you can download my full thesis.
https://zbmath.org/authors/?q=ai%3Aahmad.shair
Documents Indexed: 88 Publications since 1966, including 3 Books
2 Contributions as Editor
Biographic References: 1 Publication
Co-Authors: 15 Co-Authors with 59 Joint Publications; 248 Co-Co-Authors
### Co-Authors
23 single-authored
28 Lazer, Alan C.
8 Stamova, Ivanka Milkova
6 Tineo, Antonio R.
4 Jami, A. Rehman
4 Montes de Oca, Francisco
3 Ambrosetti, Antonio
3 Rao, M. Rama Mohana
2 Ali, Kashif
2 Granados, Bertha
2 Le, Dung
2 Salazar, Jorge A.
2 Stamov, Gani Trendafilov
2 Sufyan, M.
1 Aas, Z.
1 Ata-ur-Rahman
1 Benharbit, Abdelali M.
1 Hadi, F.
1 Keener, Marvin S.
1 Mughal, Muhammad Zahid
1 Munir, R.
1 Paul, Jerome L.
1 Rizvi, Syed Tahir Raza
1 Seadawy, Aly R.
1 Travis, Curtis C.
1 Vatsala, Aghalaya S.
1 Younis, Muhammad
1 Zahid, U.
### Serials
9 Proceedings of the American Mathematical Society
8 Nonlinear Analysis. Theory, Methods & Applications
7 Nonlinear Analysis. Real World Applications
6 Journal of Mathematical Analysis and Applications
6 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods
3 Modern Physics Letters A
3 SIAM Journal on Mathematical Analysis
2 Applicable Analysis
2 Applied Mathematics and Computation
2 Funkcialaj Ekvacioj. Serio Internacia
2 Pacific Journal of Mathematics
2 Rendiconti dell’Istituto di Matematica dell’Università di Trieste
2 Bulletin of the American Mathematical Society
2 Unitext
1 International Journal of Modern Physics B
1 American Mathematical Monthly
1 Houston Journal of Mathematics
1 Journal of Mathematical and Physical Sciences
1 Annali di Matematica Pura ed Applicata. Serie Quarta
1 Annales Polonici Mathematici
1 Duke Mathematical Journal
1 Indiana University Mathematics Journal
1 Journal of Differential Equations
1 Mathematical Systems Theory
1 Michigan Mathematical Journal
1 Rendiconti del Seminario Matematico della Università di Padova
1 Transactions of the American Mathematical Society
1 Revista de Matemáticas Aplicadas
1 Bulletin de l’Académie Polonaise des Sciences. Série des Sciences Mathématiques
1 Bollettino della Unione Matemàtica Italiana. Serie VI. B
1 Nonlinear World
1 Discrete and Continuous Dynamical Systems
1 Nonlinear Studies
1 Journal of Applied Mechanics and Technical Physics
1 Dynamics of Continuous, Discrete & Impulsive Systems. Series A. Mathematical Analysis
1 Mathematical Sciences Research Journal
1 International Journal of Geometric Methods in Modern Physics
1 Journal of Combinatorial Theory
1 Communications in Theoretical Physics
1 Advances in Nonlinear Analysis
1 De Gruyter Series in Mathematics and Life Sciences
1 De Gruyter Textbook
### Fields
60 Ordinary differential equations (34-XX)
22 Biology and other natural sciences (92-XX)
14 Partial differential equations (35-XX)
5 Dynamical systems and ergodic theory (37-XX)
5 Fluid mechanics (76-XX)
4 Integral equations (45-XX)
4 Relativity and gravitational theory (83-XX)
2 General and overarching topics; collections (00-XX)
2 General topology (54-XX)
2 Mechanics of particles and systems (70-XX)
2 Statistical mechanics, structure of matter (82-XX)
1 Number theory (11-XX)
1 Group theory and generalizations (20-XX)
1 Real functions (26-XX)
1 Harmonic analysis on Euclidean spaces (42-XX)
1 Global analysis, analysis on manifolds (58-XX)
1 Numerical analysis (65-XX)
1 Quantum theory (81-XX)
1 Astronomy and astrophysics (85-XX)
### Citations contained in zbMATH Open
69 Publications have been cited 1,273 times in 844 Documents
On the nonautonomous Volterra-Lotka competition equations. Zbl 0848.34033
1993
Elementary critical point theory and perturbations of elliptic boundary value problems at resonance. Zbl 0351.35036
Ahmad, S.; Lazer, A. C.; Paul, J. L.
1976
Average conditions for global asymptotic stability in a nonautonomous Lotka-Volterra system. Zbl 0955.34041
2000
Convergence and ultimate bounds of solutions of the nonautonomous Volterra-Lotka competition equations. Zbl 0648.34037
1987
Asymptotic stability of competitive systems with delays and impulsive perturbations. Zbl 1153.34044
2007
Extinction of species in nonautonomous Lotka-Volterra systems. Zbl 0924.34040
1999
Asymptotic behaviour of solutions of periodic competition diffusion system. Zbl 0686.35060
1989
Necessary and sufficient average growth in a Lotka-Volterra system. Zbl 0934.34037
1998
Almost periodic solutions of $$N$$-dimensional impulsive competitive systems. Zbl 1162.34349
2009
Asymptotic stability of an $$N$$-dimensional impulsive competitive system. Zbl 1152.34342
2007
Global exponential stability for impulsive cellular neural networks with time-varying delays. Zbl 1151.34061
2008
On almost periodic solutions of the competing species problems. Zbl 0668.34042
1988
Extinction in nonautonomous $$T$$-periodic competitive Lotka-Volterra system. Zbl 0906.92024
Ahmad, Shair; Montes de Oca, Francisco
1998
Almost necessary and sufficient conditions for survival of species. Zbl 1080.34035
2004
Multiple nontrivial solutions of resonant and nonresonant asymptotically linear problems. Zbl 0634.35029
1986
Asymptotically periodic solutions of $$N$$-competing species problem with time delays. Zbl 0818.45004
Ahmad, Shair; Rama Mohana Rao, M.
1994
On almost periodic processes in impulsive competitive systems with delay and impulsive perturbations. Zbl 1170.45004
2009
A resonance problem in which the nonlinearity may grow linearly. Zbl 0562.34011
1984
Average growth and extinction in a competitive Lotka–Volterra system. Zbl 1088.34047
2005
An existence theorem for periodically perturbed conservative systems. Zbl 0294.34029
1973
Partial persistence and extinction in $$N$$-dimensional competitive systems. Zbl 1071.34046
2005
Average growth and total permanence in a competitive Lotka-Volterra System. Zbl 1162.34329
2006
An elementary approach to traveling front solutions to a system of $$N$$ competition-diffusion equations. Zbl 0737.35029
1991
Dynamical systems of characteristic $$0^+$$. Zbl 0176.20402
1970
On nonautonomous $$n$$-competing species problems. Zbl 0859.34033
1995
An $$n$$-dimensional extension of the Sturm separation and comparison theory to a class of nonselfadjoint systems. Zbl 0409.34029
1978
On species extinction in an autonomous competition model. Zbl 0846.34043
1996
Nonselfadjoint resonance problems with unbounded perturbations. Zbl 0599.35069
1986
On the components of extremal solutions of second order systems. Zbl 0384.34021
1977
A nonstandard resonance problem for ordinary differential equations. Zbl 0719.34062
1991
Strong attraction and classification of certain continuous flows. Zbl 0218.34042
1971
Cycle structure of automorphisms of finite cyclic groups. Zbl 0169.34101
1969
Critical point theory and a theorem of Amaral and Pera. Zbl 0603.34036
1984
On the oscillatory behavior of a class of linear third order differential equations. Zbl 0167.07903
1969
Global and blow up solutions to cross diffusion systems. Zbl 1322.35067
2015
Lotka-Volterra and related systems. Recent developments in population dynamics. Zbl 1264.37001
2013
On nth-order Sturmian theory. Zbl 0444.34036
1980
A new generalization of the Sturm comparison theorem to selfadjoint systems. Zbl 0379.34006
1978
Oscillation criteria for second-order differential systems. Zbl 0389.34026
1978
On an extension of Sturm’s comparison theorem to a class of nonselfadjoint second-order systems. Zbl 0454.34031
1980
On a property of nonautonomous Lotka-Volterra competition model. Zbl 0930.34038
1999
Stability of Volterra diffusion equations with time delays. Zbl 0967.35068
Ahmad, Shair; Rao, M. Rama Mohana
1998
Average growth and extinction in a two dimensional Lotka-Volterra system. Zbl 1081.34513
Ahmad, Shair; Montes de Oca, Francisco
2002
Asymptotic properties of linear fourth order differential equations. Zbl 0339.34050
1976
On Sturmian theory for second order systems. Zbl 0517.34026
1983
Traveling waves for a system of equations. Zbl 1153.34325
Ahmad, Shair; Lazer, Alan C.; Tineo, Antonio
2008
Component properties of second order linear systems. Zbl 0362.34007
1976
On positivity of solutions and conjugate points of nonselfadjoint systems. Zbl 0433.34028
1979
Survival and extinction in competitive systems. Zbl 1144.34343
2008
Separated solutions of logistic equation with nonperiodic harvesting. Zbl 1366.34020
2017
On a property of a generalized Kolmogorov population model. Zbl 1270.34040
2013
Stability criteria for impulsive Kolmogorov-type systems of nonautonomous differential equations. Zbl 1267.34095
2012
On nonwandering continuous flows. Zbl 0445.34009
1978
On an extension of the Sturm comparison theorem. Zbl 0479.34015
1981
Conjugate points and second order systems. Zbl 0494.34020
1981
Stability criteria for $$N$$-competing species problem with time delays. Zbl 0798.92022
Ahmad, Shair; Rao, M. Rama Mohana
1994
On the oscillation of solutions of a class of linear fourth order differential equations. Zbl 0188.40102
1970
Positive operators and Sturmian theory of nonselfadjoint second-order systems. Zbl 0454.34029
1978
Three-dimensional population systems. Zbl 1154.34356
2008
On tridiagonal predator-prey systems and a conjecture. Zbl 1200.34059
2010
Some oscillation properties of third order linear homogeneous differential equations. Zbl 0339.34027
1975
Almost periodic solutions of second order systems. Zbl 0883.34051
1996
On existence of periodic solutions for nonlinearly perturbed conservative systems. Zbl 0599.34048
1980
On the role of Hopf’s maximum principle in elliptic Sturmian theory. Zbl 0424.35032
1979
Comparison results of reaction-diffusion equations with delay in abstract cones. Zbl 0499.35010
1981
On the existence of nonconstant periodic solutions. I. Zbl 0543.34031
Ahmad, Shair; Montes de Oca, Francisco
1983
On Ura’s axioms and local dynamical systems. Zbl 0184.26803
1969
A textbook on ordinary differential equations. Zbl 1288.34001
2014
A textbook on ordinary differential equations. 2nd edition. Zbl 1337.34001
2015
Separated solutions of logistic equation with nonperiodic harvesting. Zbl 1366.34020
2017
Global and blow up solutions to cross diffusion systems. Zbl 1322.35067
2015
A textbook on ordinary differential equations. 2nd edition. Zbl 1337.34001
2015
A textbook on ordinary differential equations. Zbl 1288.34001
2014
Lotka-Volterra and related systems. Recent developments in population dynamics. Zbl 1264.37001
2013
On a property of a generalized Kolmogorov population model. Zbl 1270.34040
2013
Stability criteria for impulsive Kolmogorov-type systems of nonautonomous differential equations. Zbl 1267.34095
2012
On tridiagonal predator-prey systems and a conjecture. Zbl 1200.34059
2010
Almost periodic solutions of $$N$$-dimensional impulsive competitive systems. Zbl 1162.34349
2009
On almost periodic processes in impulsive competitive systems with delay and impulsive perturbations. Zbl 1170.45004
2009
Global exponential stability for impulsive cellular neural networks with time-varying delays. Zbl 1151.34061
2008
Traveling waves for a system of equations. Zbl 1153.34325
Ahmad, Shair; Lazer, Alan C.; Tineo, Antonio
2008
Survival and extinction in competitive systems. Zbl 1144.34343
2008
Three-dimensional population systems. Zbl 1154.34356
2008
Asymptotic stability of competitive systems with delays and impulsive perturbations. Zbl 1153.34044
2007
Asymptotic stability of an $$N$$-dimensional impulsive competitive system. Zbl 1152.34342
2007
Average growth and total permanence in a competitive Lotka-Volterra System. Zbl 1162.34329
2006
Average growth and extinction in a competitive Lotka–Volterra system. Zbl 1088.34047
2005
Partial persistence and extinction in $$N$$-dimensional competitive systems. Zbl 1071.34046
2005
Almost necessary and sufficient conditions for survival of species. Zbl 1080.34035
2004
Average growth and extinction in a two dimensional Lotka-Volterra system. Zbl 1081.34513
Ahmad, Shair; Montes de Oca, Francisco
2002
Average conditions for global asymptotic stability in a nonautonomous Lotka-Volterra system. Zbl 0955.34041
2000
Extinction of species in nonautonomous Lotka-Volterra systems. Zbl 0924.34040
1999
On a property of nonautonomous Lotka-Volterra competition model. Zbl 0930.34038
1999
Necessary and sufficient average growth in a Lotka-Volterra system. Zbl 0934.34037
1998
Extinction in nonautonomous $$T$$-periodic competitive Lotka-Volterra system. Zbl 0906.92024
Ahmad, Shair; Montes de Oca, Francisco
1998
Stability of Volterra diffusion equations with time delays. Zbl 0967.35068
Ahmad, Shair; Rao, M. Rama Mohana
1998
On species extinction in an autonomous competition model. Zbl 0846.34043
1996
Almost periodic solutions of second order systems. Zbl 0883.34051
1996
On nonautonomous $$n$$-competing species problems. Zbl 0859.34033
1995
Asymptotically periodic solutions of $$N$$-competing species problem with time delays. Zbl 0818.45004
Ahmad, Shair; Rama Mohana Rao, M.
1994
Stability criteria for $$N$$-competing species problem with time delays. Zbl 0798.92022
Ahmad, Shair; Rao, M. Rama Mohana
1994
On the nonautonomous Volterra-Lotka competition equations. Zbl 0848.34033
1993
An elementary approach to traveling front solutions to a system of $$N$$ competition-diffusion equations. Zbl 0737.35029
1991
A nonstandard resonance problem for ordinary differential equations. Zbl 0719.34062
1991
Asymptotic behaviour of solutions of periodic competition diffusion system. Zbl 0686.35060
1989
On almost periodic solutions of the competing species problems. Zbl 0668.34042
1988
Convergence and ultimate bounds of solutions of the nonautonomous Volterra-Lotka competition equations. Zbl 0648.34037
1987
Multiple nontrivial solutions of resonant and nonresonant asymptotically linear problems. Zbl 0634.35029
1986
Nonselfadjoint resonance problems with unbounded perturbations. Zbl 0599.35069
1986
A resonance problem in which the nonlinearity may grow linearly. Zbl 0562.34011
1984
Critical point theory and a theorem of Amaral and Pera. Zbl 0603.34036
1984
On Sturmian theory for second order systems. Zbl 0517.34026
1983
On the existence of nonconstant periodic solutions. I. Zbl 0543.34031
Ahmad, Shair; Montes de Oca, Francisco
1983
On an extension of the Sturm comparison theorem. Zbl 0479.34015
1981
Conjugate points and second order systems. Zbl 0494.34020
1981
Comparison results of reaction-diffusion equations with delay in abstract cones. Zbl 0499.35010
1981
On nth-order Sturmian theory. Zbl 0444.34036
1980
On an extension of Sturm’s comparison theorem to a class of nonselfadjoint second-order systems. Zbl 0454.34031
1980
On existence of periodic solutions for nonlinearly perturbed conservative systems. Zbl 0599.34048
1980
On positivity of solutions and conjugate points of nonselfadjoint systems. Zbl 0433.34028
1979
On the role of Hopf’s maximum principle in elliptic Sturmian theory. Zbl 0424.35032
1979
An $$n$$-dimensional extension of the Sturm separation and comparison theory to a class of nonselfadjoint systems. Zbl 0409.34029
1978
A new generalization of the Sturm comparison theorem to selfadjoint systems. Zbl 0379.34006
1978
Oscillation criteria for second-order differential systems. Zbl 0389.34026
1978
On nonwandering continuous flows. Zbl 0445.34009
1978
Positive operators and Sturmian theory of nonselfadjoint second-order systems. Zbl 0454.34029
1978
On the components of extremal solutions of second order systems. Zbl 0384.34021
1977
Elementary critical point theory and perturbations of elliptic boundary value problems at resonance. Zbl 0351.35036
Ahmad, S.; Lazer, A. C.; Paul, J. L.
1976
Asymptotic properties of linear fourth order differential equations. Zbl 0339.34050
1976
Component properties of second order linear systems. Zbl 0362.34007
1976
Some oscillation properties of third order linear homogeneous differential equations. Zbl 0339.34027
1975
An existence theorem for periodically perturbed conservative systems. Zbl 0294.34029
1973
Strong attraction and classification of certain continuous flows. Zbl 0218.34042
1971
Dynamical systems of characteristic $$0^+$$. Zbl 0176.20402
1970
On the oscillation of solutions of a class of linear fourth order differential equations. Zbl 0188.40102
1970
Cycle structure of automorphisms of finite cyclic groups. Zbl 0169.34101
1969
On the oscillatory behavior of a class of linear third order differential equations. Zbl 0167.07903
1969
On Ura’s axioms and local dynamical systems. Zbl 0184.26803
1969
### Cited by 938 Authors
33 Teng, Zhi-dong 31 Ahmad, Shair 29 Chen, Fengde 14 Chen, Lansun 14 Lisena, Benedetta 14 Stamov, Gani Trendafilov 14 Stamova, Ivanka Milkova 12 Lazer, Alan C. 12 Li, Zhong 12 Liu, Zhijun 12 Tineo, Antonio R. 11 Mawhin, Jean L. 10 Alzabut, Jehad O. 10 Zanolin, Fabio 9 Hou, Zhanyuan 9 Nie, Linfei 9 Schechter, Martin 9 Wang, Ke 9 Xia, Yonghui 8 Din, Qamar 8 Han, Zhiqing 8 Knight, Ronald A. 8 Liu, Shengqiang 8 Liu, Zijian 8 Muroya, Yoshiaki 8 Papageorgiou, Nikolaos S. 7 Liu, Meng 7 Nieto Roig, Juan Jose 7 Zhang, Long 6 Cao, Jinde 6 Elaydi, Saber Nasr 6 Fonda, Alessandro 6 He, Mengxin 6 Hirano, Norimichi 6 Jebelean, Petru 6 Meng, Xinzhu 6 Montes de Oca, Francisco 6 Omari, Pierpaolo 6 Ortega, Rafael 6 Peng, Jigen 6 Regenda, Ján 6 Shen, Wenxian 6 Zhao, Jiandong 5 Gao, Shujing 5 Gasiński, Leszek 5 Han, Maoan 5 Hu, Lin 5 Le, Dung 5 Lin, Guo 5 Ma, Ruyun 5 Shao, Yuanfu 5 Tang, Chun-Lei 5 Yan, Jurang 5 Yin, Jingxue 5 Zhang, Yutian 5 Zhong, Shou-Ming 5 Zhou, Li 4 Ackleh, Azmy S. 4 Agarwal, Ravi P. 4 Fan, Meng 4 Feng, Wei 4 Fu, Shengmao 4 Jiang, Jifa 4 Lu, Xin 4 Luo, Zhenguo 4 Rumbos, Adolfo J. 4 Sanchez, Luís 4 Serban, Calin-Constantin 4 Shen, Zuhe 4 Shi, Chunling 4 Su, Jiabao 4 Sun, Jiebao 4 Takeuchi, Yasuhiro 4 Tan, Ronghua 4 Wang, Yifu 4 Ward, James Robert jun. 4 Wu, Ruihua 4 Xu, Rui 4 Yao, Zhijian 3 Abbas, Syed 3 Bai, Chuanzhi 3 Bereanu, Cristian 3 Cao, Junfei 3 Cappelletti Montano, Mirella 3 Chaplain, Mark A. J. 3 Chen, Lijuan 3 Da Silva, Edcarlos Domingos 3 Deng, Keng 3 Drábek, Pavel 3 El Amrouss, Abdel Rachid 3 Gaudenzi, Marcellino 3 Habets, Patrick 3 Hu, Jing 3 Issa, Tahir Bachar 3 Jiang, Haijun 3 Jones, Gary D. 3 Kaul, Saroop K. 3 Khan, Abdul Qadeer 3 Kuo, Chungcheng 3 Li, Jinxian ...and 838 more Authors
### Cited in 162 Serials
75 Journal of Mathematical Analysis and Applications
65 Nonlinear Analysis. Real World Applications
55 Nonlinear Analysis. Theory, Methods & Applications
50 Applied Mathematics and Computation
48 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods
31 Journal of Differential Equations
30 Proceedings of the American Mathematical Society
22 Advances in Difference Equations
18 Abstract and Applied Analysis
15 Discrete Dynamics in Nature and Society
14 Computers & Mathematics with Applications
14 Mathematical and Computer Modelling
12 Discrete and Continuous Dynamical Systems. Series B
11 Journal of Computational and Applied Mathematics
11 Boundary Value Problems
10 Applied Mathematical Modelling
9 Applicable Analysis
9 Chaos, Solitons and Fractals
9 Applied Mathematics Letters
9 Communications in Nonlinear Science and Numerical Simulation
9 Journal of Applied Mathematics and Computing
8 Bulletin of the Australian Mathematical Society
8 Mathematical Methods in the Applied Sciences
8 Rocky Mountain Journal of Mathematics
8 Advanced Nonlinear Studies
7 Czechoslovak Mathematical Journal
7 Journal of Dynamics and Differential Equations
7 Nonlinear Dynamics
7 Journal of Applied Mathematics
7 Journal of Biological Dynamics
7 International Journal of Biomathematics
6 Annali di Matematica Pura ed Applicata. Serie Quarta
6 Mathematica Slovaca
6 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics
6 Journal of Inequalities and Applications
5 Proceedings of the Edinburgh Mathematical Society. Series II
5 Applied Mathematics. Series B (English Edition)
5 Differential Equations and Dynamical Systems
5 Mediterranean Journal of Mathematics
5 International Journal of Differential Equations
4 Results in Mathematics
4 Discrete and Continuous Dynamical Systems
4 Mathematical Problems in Engineering
3 Journal of the Franklin Institute
3 Journal of Mathematical Biology
3 Mathematical Biosciences
3 Physica A
3 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV
3 Journal of Functional Analysis
3 Mathematics and Computers in Simulation
3 Mathematical Systems Theory
3 Mathematische Zeitschrift
3 Transactions of the American Mathematical Society
3 Acta Applicandae Mathematicae
3 Japan Journal of Industrial and Applied Mathematics
3 Designs, Codes and Cryptography
3 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering
3 Finite Fields and their Applications
3 NoDEA. Nonlinear Differential Equations and Applications
3 Complexity
3 Journal of Difference Equations and Applications
3 International Journal of Nonlinear Sciences and Numerical Simulation
3 Bulletin of the Malaysian Mathematical Sciences Society. Second Series
3 Nonlinear Analysis. Hybrid Systems
3 Journal of Nonlinear Science and Applications
2 Mathematische Nachrichten
2 Ricerche di Matematica
2 Physica D
2 Acta Mathematicae Applicatae Sinica. English Series
2 Neural Networks
2 Communications in Partial Differential Equations
2 Journal de Mathématiques Pures et Appliquées. Neuvième Série
2 Calculus of Variations and Partial Differential Equations
2 Computational and Applied Mathematics
2 Acta Mathematica Sinica. English Series
2 Journal of Dynamical and Control Systems
2 Methodology and Computing in Applied Probability
2 Qualitative Theory of Dynamical Systems
2 The ANZIAM Journal
2 Mathematical Biosciences and Engineering
2 ISRN Mathematical Analysis
2 Bulletin of Computational Applied Mathematics
2 Open Mathematics
1 Analysis Mathematica
1 Archive for Rational Mechanics and Analysis
1 Discrete Mathematics
1 International Journal of Control
1 Indian Journal of Pure & Applied Mathematics
1 International Journal of Theoretical Physics
1 Mathematical Notes
1 Periodica Mathematica Hungarica
1 ZAMP. Zeitschrift für angewandte Mathematik und Physik
1 Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica
1 Annales Polonici Mathematici
1 Archiv der Mathematik
1 Boletim da Sociedade Brasileira de Matemática
1 Bulletin de la Société Mathématique de France
1 Duke Mathematical Journal
1 Fuzzy Sets and Systems
1 Inventiones Mathematicae
...and 62 more Serials
### Cited in 38 Fields
501 Ordinary differential equations (34-XX) 407 Biology and other natural sciences (92-XX) 223 Partial differential equations (35-XX) 83 Dynamical systems and ergodic theory (37-XX) 74 Operator theory (47-XX) 50 Global analysis, analysis on manifolds (58-XX) 42 Systems theory; control (93-XX) 40 Difference and functional equations (39-XX) 38 Probability theory and stochastic processes (60-XX) 30 Calculus of variations and optimal control; optimization (49-XX) 19 General topology (54-XX) 16 Numerical analysis (65-XX) 14 Integral equations (45-XX) 6 Number theory (11-XX) 6 Real functions (26-XX) 6 Mechanics of particles and systems (70-XX) 6 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 5 Harmonic analysis on Euclidean spaces (42-XX) 5 Functional analysis (46-XX) 4 Combinatorics (05-XX) 4 Computer science (68-XX) 4 Mechanics of deformable solids (74-XX) 3 Abstract harmonic analysis (43-XX) 2 Group theory and generalizations (20-XX) 2 Measure and integration (28-XX) 2 Algebraic topology (55-XX) 2 Fluid mechanics (76-XX) 2 Operations research, mathematical programming (90-XX) 1 General and overarching topics; collections (00-XX) 1 General algebraic systems (08-XX) 1 Field theory and polynomials (12-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Special functions (33-XX) 1 Sequences, series, summability (40-XX) 1 Manifolds and cell complexes (57-XX) 1 Statistical mechanics, structure of matter (82-XX) 1 Information and communication theory, circuits (94-XX) | 2022-05-16 12:02:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, 
"wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.399638295173645, "perplexity": 3088.813267758052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510117.12/warc/CC-MAIN-20220516104933-20220516134933-00592.warc.gz"} |
http://mathhelpforum.com/discrete-math/33787-cardinality.html | # Math Help - cardinality
1. ## cardinality
Please help me with these questions! Any ideas or hints or anything would be great. We've had exactly one lecture on this, and the book has no proofs, so I'm struggling with how to formulate proofs for this type of thing. Here are the problems and my ideas so far:
1. Show that for real numbers $a$ and $b$ with $a < b$, the open interval $(a,b)$ has the same cardinality as $\Re$.
I know this interval is uncountably infinite like $\Re$, and I can use Cantor's diagonalization argument to show that. Is this all I need to do?
2. Suppose $A$ and $B$ are sets such that card $A$ $\leq$ card $B$. Prove there exists a set $C \subseteq B$ such that card $C$= card $A$.
I'm trying to use the regular existence proof idea, in the sense that I'm trying to think of a "Consider this set C and the bijection f:A to C" and so on, but I can't think of anything that applies to an abstract set. A could have its elements as numbers, functions, subsets, etc. and all the things I can think of require something to be known about the set.
3. Suppose that $A, B$, and $C$ are sets such that card $A$ < card $B$ and card $A =$ card $C$. Prove that card $C$ < card $B$.
I'm totally lost on this one... It seems obvious, so I think I'm having trouble separating the properties of finite sets from the properties of infinite sets.
Thank you!!
2. Each of these three questions is just an exercise is applying the definitions.
Each involves finding a function that applies.
In #1 can you give a bijection between $\Re$ and $(a,b)$?
Think tangent and arctangent.
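Following the tangent/arctangent hint, one explicit bijection from $\Re$ onto $(a,b)$ can be written down directly (this is only a sketch; verifying injectivity and surjectivity is the remaining exercise):

$g(x) = a + \frac{b-a}{\pi}\left( \arctan(x) + \frac{\pi}{2} \right)$

Since $\arctan$ maps $\Re$ onto $\left( -\frac{\pi}{2}, \frac{\pi}{2} \right)$, the shift and rescaling land exactly in $(a,b)$; and $g$ is strictly increasing (its derivative is positive everywhere), hence injective.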
What is the meaning of card(A)<card(B)? The image of a subset of A is a subset of B.
What do you know about the composition of functions? | 2014-04-16 20:56:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7151174545288086, "perplexity": 399.88569933759794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00306-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://www.eoht.info/page/Ideal%20gas | In thermodynamics, ideal gas refers to a body of particles (atoms or molecules) in a state of gas particles, which are said to abide by the Boltzmann chaos assumption, i.e. have non-correlations of velocities, and to obey the ideal gas law:
$PV = nRT\,$
where P is the pressure, V the volume, n the amount of gas (in moles), R the gas constant, and T the temperature of the body of gas. [1] Deviations from the ideal gas law occur at extreme pressures and temperatures, at which point the gas is no longer considered "ideal".
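As a quick numerical illustration of the law (the numbers below are an example in SI units, not taken from the source), solving PV = nRT for V at standard pressure and 0 °C recovers the familiar molar volume of roughly 22.4 litres:

```python
# Solve the ideal gas law PV = nRT for V, in SI units.
R = 8.314  # gas constant, J/(mol*K)

def ideal_gas_volume(p_pascal, n_mol, t_kelvin):
    """Volume in cubic metres of an ideal gas at pressure p, amount n, temperature t."""
    return n_mol * R * t_kelvin / p_pascal

# One mole at standard atmospheric pressure (101325 Pa) and 0 degrees C (273.15 K)
v = ideal_gas_volume(101325, 1.0, 273.15)
print(round(v * 1000, 1))  # in litres; ~22.4 L
```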
Perfect gas | Ideal perfect gas
The prefix term "ideal" is a modern etymological evolution stemming from the phrase "perfect gas" in the context of making a "perfect" vacuum via explosion in early gunpowder engine and other vacuum engine prototypes, such as is found in the works of Denis Papin and Christiaan Huygens.
In the early 20th century, terms such as “ideal perfect gas” (Preston, 1904) were being employed.
References
1. Perrot, Pierre. (1988). A to Z of Thermodynamics (section: Ideal gas, pgs. 144-48). Oxford University Press. | 2022-08-09 20:39:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6014734506607056, "perplexity": 1609.0824083609277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571086.77/warc/CC-MAIN-20220809185452-20220809215452-00028.warc.gz"} |
http://mathhelpforum.com/pre-calculus/28327-tangent-slope-print.html | # Tangent slope
• Feb 15th 2008, 08:26 AM
imthatgirl
Tangent slope
If anyone can help with any of these questions please post
1) y= √(16-x) , where y= 5
2) y= √(x-7) , at x= 16
3) y= 8/(√(x+11)) , at x=5
• Feb 15th 2008, 10:17 AM
wingless
Do you know that the slope of the tangent to the curve $y(x)$ is $y'(x)$?
Do you know how to apply Chain rule and Quotient rule?
• Feb 15th 2008, 10:24 AM
imthatgirl
i'm supposed to use
[f(a+h) - f(a)] / h
• Feb 15th 2008, 10:37 AM
topsquark
Quote:
Originally Posted by imthatgirl
1) y= √(16-x)
Ah, the long way. All right, I'll do the first then you can try the other two. They follow the same pattern.
$y^{\prime}(x) = \lim_{h \to 0} \frac{y(x + h) - y(x)}{h}$
$= \lim_{h \to 0} \frac{\sqrt{16 - (x + h)} - \sqrt{16 - x}}{h}$
This next step will likely go against every simplification rule you ever learned in Algebra. Then again, this is Calculus, not Algebra.
We're going to rationalize the numerator. Why? Because it works.
$= \lim_{h \to 0} \frac{\sqrt{16 - (x + h)} - \sqrt{16 - x}}{h} \cdot \frac{\sqrt{16 - (x + h)} + \sqrt{16 - x}}{\sqrt{16 - (x + h)} + \sqrt{16 - x}}$
$= \lim_{h \to 0} \frac{(16 - (x + h)) - (16 - x)}{h(\sqrt{16 - (x + h)} + \sqrt{16 - x})}$
$= \lim_{h \to 0} \frac{16 - x - h -16 + x}{h(\sqrt{16 - (x + h)} + \sqrt{16 - x})}$
$= \lim_{h \to 0} \frac{-h}{h(\sqrt{16 - (x + h)} + \sqrt{16 - x})}$
Now divide:
$= \lim_{h \to 0} \frac{-1}{\sqrt{16 - (x + h)} + \sqrt{16 - x}}$
Now take the limit:
$= \frac{-1}{\sqrt{16 - x} + \sqrt{16 - x}}$
$= \frac{-1}{2\sqrt{16 - x}}$
Now go ahead and put the x value in.
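If you want to sanity-check the algebra numerically (a verification aid only, not part of the assigned limit method), the difference quotient should approach $-\frac{1}{2\sqrt{16-x}}$ as h shrinks; at x = 7, for instance, that is -1/6:

```python
import math

def y(x):
    return math.sqrt(16 - x)

def difference_quotient(x, h):
    # [y(x + h) - y(x)] / h, the quotient whose limit is taken above
    return (y(x + h) - y(x)) / h

x = 7.0
exact = -1 / (2 * math.sqrt(16 - x))   # -1/6 at x = 7, from the worked limit
approx = difference_quotient(x, 1e-6)
print(exact, approx)  # the quotient converges to the exact slope as h -> 0
```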
-Dan | 2018-02-24 09:11:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8494082093238831, "perplexity": 2826.7366061619337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815500.61/warc/CC-MAIN-20180224073111-20180224093111-00439.warc.gz"} |
https://8ch.net/tech/res/914720.html | [ / / / / / / / / / / / / / ]
/tech/ - Technology
File: 9d45e90ecab6bf9⋯.png (172.68 KB, 1280x820, 64:41, DMZ_network_diagram_2_fire….png)
No.914720
Yes, I know this is ridiculous, overkill and borderline insane, but so am I and my paranoia won't let me sleep at night if I don't build something like this. I will also let you know I am no network guy, so if you hear something ridiculous there is that.
The situation:
>I have 7 devices at home: 3 wireless shits and wireless ones. I also have two routers and an ONT.
>I fear at least 5 of those devices could be compromised now (doubtful) or in the future (probable), including one of the routers, which is a shitty ISP-provided one full of backdoors
>Long story short, two of those devices are operated exclusively by me, the rest are operated by my family as well.
>My trusted devices are plugged onto my trusted router, and the others are connected to the other.
>When I am using my devices, I plug my trusted router onto the ONT, and then unplug the untrusted router. The trusted router and the untrusted router have never been in the same network.
<I suspect some hypothetical hyper potent strain of cyberAIDS the untrusted devices could be capable of attacking and compromising the sadly untimely updated trusted router, regardless of VLAN setups, strong passwords, etc.
<I also suspect some of the websites my family could be visiting (think fishy Facebook advanced clickbait trash) could attempt to scan my local network for possible holes to inject the digital gonorrhea in my beloved machines
<What I want to do is to completely isolate the untrusted devices (even between them, so they can't conspire against me) on a physical level via some sort of hardware firewall/router/layer 3 switch that is capable of routing their connections through Tor or some VPN
<This magic box should be connected to a trusted router for WAN access, and reject any and all direct connections to itself or the trusted router except if coming from a special administration network interface/port
What I am attempting to do isn't very hard. Networks like pic related or bastion hosts are very standard and similar to what I want to do. The real problem is implementation. Basically, I require some sort of computer capable of:
>Low energy consumption (probably some ARM shit) because electricity be expensive here yo
>Having two or more Ethernet ports, preferably Gigabit Ethernet, because otherwise isolation will be pure placebo
>Be capable of running a modern distro in it (thought about NixOS because I will probably end up adding more subnetworks with similar needs and being able to deploy changes to all devices at once will be a fucking godsend)
>Be able to VLAN tag packets because my ONT is a bitch
>Preferably cheap. I can blow up some hundreds in this setup but I am sure it can be achieved with way less so I had rather not to
>Obviously this excludes (((CISCO))) shit
Any hardware or software suggestions, or tips I should take into account in my most retarded yet summer venture?
Also, stupid overengineered setups general.
No.914722
>>914720
>I have 7 devices at home: 3 wireless shits and wireless ones.
What?
No.914731
>>914722
Yeah, I accidentally the sentence there. I meant "3 wireless shits and 4 wired ones".
No.914749
That doesn't sound overengineered to me. Looks like you'd like stuff like soekris (expensive) or pcengines. Install OpenBSD on it. I'm not sure how easily available it is outside Europe and what the alternatives are.
No.914773
install gentoo
No.914783
>>914749
When I mentioned the setup to a network engineer friend of mine, he told me I was insane. Certainly not a conventional home network setup, but oh well.
I have heard of PC Engines. Pretty cool machines overall, but it seems they don't go down from 100 euros. I probably won't find anything else in that price range that isn't complicated as all fuck to install something in it and has 3 (!!!) RJ45 connectors and is basically a fully featured PC, but I was still hoping for something cheaper, like an Orange Pi R1 (maybe too cheap) or a Pine A64 with a USB network adapter; wouldn't be half as cool and powerful as an APU 2c2. Do you happen to know any trustworthy website in which I can order one of those? 2C2 seem to cost around 130 euros (and who knows how much does it cost shipping to my third world European country), but then there is this place where they sell the 2C2 for 100 euros, but then they also sell a 3C2 which doesn't even exist on PC Engines' website.
https://www.landashop.com/cmp-apu-3c2.html
No.914803
>>914783
The 3c2 is an actual model, it's just not listed on the official website for some reason.
https://www.pcengines.ch/apu3c2.htm
Also, according to https://www.pcengines.ch/newshop.php?c=2 , 2c2 are shipping in a week, and that distributor is charging pretty much the amount it costs to them. It's the other distributors that charge almost half as much as it costs them. relly maeks u think
No.914864
>>914783
>When I mentioned the setup to a network engineer friend of mine, he told me I was insane
He sounds like a filthy casual.
The PC Engines units look interesting, but you need to seriously question if you're going to be using it long enough to offset $100 or whatever of electricity, which is a lot. My fileserver is an ancient Core 2 system with a lot of spinning platters, and I've estimated it only eats $10/month of electricity, which makes replacing it not really worth it.
I'd recommend finding an old low-end desktop and using that to start, then upgrade to some dedicated hardware after you're satisfied with how it turns out. If electricity is really that expensive, or you don't have access to free hardware, consider the rock64 (https://www.pine64.org/?page_id=7147), since it has gigabit + 100Mb/s, and should do the trick.
My advice is to use pfsense, since it makes it easy to get started, and is just freebsd underneath, so you can do more advanced things as you need.
No.914881
You should be using vlans.
Wireless on one, wired on another, server on both.
Let the upstream devices deal with firewalling and segmentation.
From there, the server can act as a secure mediator of wlan and lan, as well as host whatever the fuck you want.
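Whether the segmentation is done with VLANs or separate NICs, the end state is one subnet per segment with the server homed on both, so traffic can only cross via a routed hop where the firewall rules live. A toy sketch with Python's ipaddress module (the addressing plan is made up for illustration):

```python
import ipaddress

# Hypothetical addressing plan: one subnet per segment, server homed on both.
wireless = ipaddress.ip_network("192.168.10.0/24")  # untrusted wireless devices
wired    = ipaddress.ip_network("192.168.20.0/24")  # trusted wired devices

laptop = ipaddress.ip_address("192.168.10.42")  # some untrusted host

# Non-overlapping subnets: a host in one has no direct path into the other,
# so everything crossing must pass the dual-homed server (and its firewall).
print(wireless.overlaps(wired))             # False
print(laptop in wireless, laptop in wired)  # True False
```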
No.914938
>>914881
VLANs aren't really pointful here, because if you can't trust the device on the other end, you have to stick them on a dedicated tagged port, which means either you have to have a switch that supports VLANs, which is more expensive than a bunch of random $5 NICs, or you need dedicated NICs on the firewall/router, at which point you might as well just use isolated LANs, skip the virtual.
No.914943
File: 9a1e6566abf3d63⋯.png (32.1 KB, 800x600, 4:3, vlan.png)
>>914938
>VLAN
>Expensive
>Special hardware
2006 called to tell you you're a faggot. DDWRT, OpenWRT, and Tomato support VLANs. Consumer switches don't give a shit about VLAN tagged frames. Linux and BSD can do software vlan tagging, and everyone under the sun supports vlan tagged frames, even relshit. Because you're not doing any layer 3 routing, nearly anything can handle a network that size. You only tag the frames on whatever port your server occupies, which you already trust, everything else gets untagged traffic. As for security, just block infrastructure access from whatever vlans you don't want. If you want a service on your botnet vlan, spin it up in a vm and setup a firewall on the infrastructure and server to lock it down. vlans are step #1 in securing a lan.
No.914959
I once lived in a dorm for a few months. The shitty administrator there kept fucking up the wifi, so I ran a network cable from the modem to one of my laptops. But I also wanted to use my other laptop and I didn't have another network cable with me. So I made a hotspot on my android phone and connected both laptops to it. Then I SSHed into the one which was connected to internet and made a socks proxy (-D <port>), and then socksified my system with tsocks. For some reason I still stick with this system even when I have a proper working wireless router. I guess I'm too lazy to remove all the shit I set up for it to all go through the SSH connection.
Whenever I take my computer elsewhere, I replace the internal IP of the internet-connected laptop with its external IP and everything still works (it functions like a VPN then). It would work great with tor I think.
No.914961
>>914943
>because if you can't trust the device on the other end, you have to stick them on a dedicated tagged port,
lrn2read. What's to stop the botnet in blue to decide to tag themselves into green's VLAN? VS for no real extra expense, stick a $5 broadcom NIC into the server, and you have a router.
>Because you're not doing any layer 3 routing, nearly anything can handle a network that size.
2 problems with that statement. First, if you're not doing layer 3, the VLANs aren't going to be able to communicate. Second, nearly everything can do a gigabit of routing. You can do ~4Gb/s per thread on any decent hardware from this decade.
>vlans are step #1 in securing a lan.
Lol. Step 1 is starting from the bottom and understanding what you're doing, not jumping straight to some cargo-cult security of "VLANs will fix everything". Start with the basics, like setting up a meaningful firewall, organizing your network and getting control of DHCP, and locking down your server. VLANs come much later.
There's plenty of purposes for VLANs, but for security, a config error doesn't accidentally bridge 2 separate ethernet cables, but it will knock out your VLANs, so for the beginner, buy a $5 NIC and skip the pitfalls. t. my job title is senior systems administrator No.915032 File: e38cc95d3cd668a⋯.png (Spoiler Image, 273.14 KB, 946x657, 946:657, e38cc95d3cd668ac2bd1fbee96….png) File: 40a0f1a39dc83a0⋯.jpg (Spoiler Image, 21.69 KB, 474x473, 474:473, a079ed27eb7ea9e1ccb3591d14….jpg) File: db39f32d5bf72df⋯.png (Spoiler Image, 247 KB, 346x427, 346:427, dr kekyll.png) >>914961 >What's to stop the botnet in blue to decide to tag themselves into green's VLAN? <First, if you're not doing layer 3, the VLANs aren't going to be able to communicate. >t. my job title is senior systems administrator >t. I can't even read what I write >t. my dad works at nintendo Go buy a$5 NIC and skip the pitfalls.
so for the beginner, buy a $5 NIC and skip the pitfalls. No.915040 >>914961 >t. my job title is senior systems administrator if that's true then you need to relearn some networking dude. Layer 2 can still talk between each other, you just need a router to do so. > What's to stop the botnet in blue to decide to tag themselves into green's VLAN? I don't even get this, do you setup VLANS where all ports are tagged with all VLANs? An elcheapo computer running 2 NICs plus a L2 switch will do what you want WITH more control. No.915070 >>915040 Routers are Layer 3 by definition. Fuck, there are Layer 3 switches and most network guys will tell you that's just a fancy name for a stripped down router. No.915088 >>915070 What does this have to do with the core network concept that anything on the same VLAN can communicate with each other without a piece of Layer 3 hardware? >>915040 was saying: >talk between each other Educate us on the difference between 192.168.1.33/24 and 393.129.2.69, then apply that same concept to VLANs. I'll save you some time and post the answer: A transmission crossing networks requires an intermediary, like a server connected to both networks, or a route between the networks. Data can't cross a VLAN unless there's an intermediary in the form of a server connected to both networks, or a route between the networks. No.915095 >>915040 >Layer 2 can still talk between each other, you just need a router to do so. Thus defeating the entire point of having the VLANs in the first place. Put the server between the networks, and you have the exact same logical layout, but is much simpler for a beginner, which is what I've been saying from the beginning. >>915032 >>What's to stop the botnet in blue to decide to tag themselves into green's VLAN? ><First, if you're not doing layer 3, the VLANs aren't going to be able to communicate. Think before you post. First, re-read your post. 
You did nothing to isolate the secure side of the network from the insecure, and I'm not sure what tagging a single port to a single computer is supposed to accomplish. Second, while yes you can just bridge the VLANs, you just joined the broadcasts domains and all the computers now have the same subnet, which makes ARP hijacking possible, means there's no easy way to differentiate between the secure and insecure sides in your NAT and firewall configs. So you're going to want to route between the subnets. >>915040 >I don't even get this, do you setup VLANS where all ports are tagged with all VLANs? I think I read the other guy's tagging plans backwards, but, if your plan is to use an old router as a switch that can do VLANs, most only passively support it IIRC, so frame tagging has to be done in software, and that's going to drop your throughput to a crawl. If your plan is to rely on random devices supporting VLANs, I'm just going to sit here and laugh at you. Also, you're relying on a random shitty consumer router being secure, which is pretty silly. No.915099 >>915095 So for OP's situation, I'd suggest the following: Split the secure and insecure sides of the network and have each on a separate port on the server. If your internet is DSL, you can have it come into the secure side and set the modem/router combo into bridged mode and pass the PPP traffic to the server and run pppd there. If not, you just have the ISP supplied modem on a third port of your server. Now all you need to do is to set the server as the gateway for both subnets, set up NAT and a firewall on the server, and lock it down. You can get all this started with an old desktop and a couple of cheap NICs. Power consumption will be a bit high, but that's offset by the low capital cost (would be$10 in my country's monopoly money). From there, once you're satisfied with how it works, find yourself a nice cheap low powered miniITX board and a couple SSDs and you should be able to do this in < 20W
No.915113
File: 3dd8c5f71634c43⋯.jpg (69.75 KB, 553x559, 553:559, 3dd8c5f71634c43d23677fdf2d….jpg)
File: db39f32d5bf72df⋯.png (247 KB, 346x427, 346:427, dr kekyll.png)
File: 626e2be916431a2⋯.jpg (648.1 KB, 675x900, 3:4, 1462064515253.jpg)
>>915095
>so frame tagging has to be done in software, and that's going to drop your throughput to a crawl.
Nope
>You did nothing to isolate the secure side of the network from the insecure
<data can magically transverse vlans that have no route to each other, where hosts have no way of accessing tagged traffic
>tagging a single port to a single computer is supposed to accomplish
<you have to tag all frames on wire to use vlans
<you can't use anything other than hosts on vlan designated ports
<vlans can't span multiple ports
>I think I read the other guy's tagging plans backwards, but,
THANKS FOR LETTING US KNOW YOU DIDN'T NEED TO POST AGAIN
>Also, you're relying on a random shitty consumer router being secure, which is pretty silly.
<Open source software isn't secure even when locked down and thoroughly tested for 10+ years in production on known hardware
>Hive is a multi-platform CIA malware suite that can be specifically utilized against states. “The project provides customizable implants for Windows, Solaris, MikroTik (used in internet routers) and Linux platforms and a Listening Post (LP)/Command and Control (C2) infrastructure to communicate with these implants.”
>>915099
>listening to pajeet
>believing pajeet
Keep posting, I need someone to keep making a fool of themselves so I can post more laughing sluts.
The infrastructure portion of my network example can be implemented for < $50 with no loss of network performance using commodity hardware, or if you want, old data center surplus.
>>>/g/
>>>/4chan/
>>>/india/
>>>/designated/
No.915119
>>915113
>Nope
Care to back that with something that doesn't smell like your down syndrome ass? Most consumer switching hardware can barely handle a gigabit of throughput, and now you're saying that a router-on-a-stick arrangement + passing half the frames to the processor for tagging is going to have no performance impact? Try the fuck again.
>you have to tag all frames on wire to use vlans
Yeah you fucking do, unless you plan on either trusting the untrusted devices to do their own vlans (or even support vlans for that matter), so you're stuck tagging everything to in one of the 2 nets to the appropriate VLAN.
>inb4 hurr just leave the insecure devices untagged so they can access all the switchports and are indistinguishable from traffic that hasn't yet been tagged.
This is supposed to be a secure network setup, not a kafkaesque autism simulator. What I said was not that it's impossible to do this with VLANs, but that it's needlessly complicated to do so, and here you are proving me right.
>>I think I read the other guy's tagging plans backwards, but,
>THANKS FOR LETTING US KNOW YOU DIDN'T NEED TO POST AGAIN
>Look mom, he admitted making a minor mistake, therefore everything I said is right!
I almost had to take your post seriously, but then I saw that you recycled a reaction image. Better luck next time kiddo.
><Open source software isn't secure even when locked down and thoroughly tested for 10+ years in production on known hardware
>DDWrt can totally paper over hardware security flaws/backdoors in random chinese routers
>And if you think otherwise, you must be a cisco shill
What the fuck. The Party is displeased with your work Zhang, next time don't be so obvious to the western devils. You get plastic rice in your ration today.
Also, I find this doubly ironic, since you promote surplus business grade hardware a few lines down in your post. At least attempt some consistency you cock gargling moron.
And last but not least:
>OP mentions having expensive euro electricity
>datacenter surplus
Go find an imageboard that's in whatever 3rd world bark-speak your brain operates in, and come back when you're literate in english. Or just not at all preferably.
No.915382
File: 4139c7b75d9273e⋯.jpg (132.16 KB, 1000x667, 1000:667, poo.jpg)
>>915119
>Most consumer switching hardware can barely handle a gigabit of throughput
What century are you from? You can max out gigE on 4 year old netgear or dlink unmanaged switches.
5 year old asus and linksys routers can do the tagging just fine with plenty of room to do actual routing.
Your shit-talking of software tagging reeks of "industry" retards attempting to shut down software raid.
>and now you're saying that a router-on-a-stick arrangement + passing half the frames to the processor for tagging is going to have no performance impact?
Correct.
Just a few lines later:
>Yeah you fucking do tag all frames on wire to use vlans
No you don't.
Search: VLAN access port
Search: VLAN trunk port
You only tag trunks.
You put a switch on your access port, or add in more access ports.
Furthermore, if you do what >>915040 suggested, you don't have to tag anything on wire.
>tagging traffic on untrusted ports
>tagging traffic on non-infrastructure ports
>so you're stuck
VLANs make it so you don't have to be stuck.
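For the record, tagging only the trunk looks something like this on a Linux router (interface names, VLAN IDs and subnets here are made up for illustration; only the trunk up to the switch carries tags, devices on access ports never see one):

```shell
# Hypothetical Linux router: eth0 is the trunk to the managed switch.
# Tagged subinterfaces exist only on the trunk.
ip link add link eth0 name eth0.10 type vlan id 10   # trusted net
ip link add link eth0 name eth0.20 type vlan id 20   # untrusted net
ip addr add 192.168.10.1/24 dev eth0.10
ip addr add 192.168.20.1/24 dev eth0.20
ip link set eth0.10 up
ip link set eth0.20 up
```

On the switch side, each access port is simply assigned VLAN 10 or 20 untagged, and only the port facing eth0 is configured as a tagged trunk member of both.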
>DDWrt can totally paper over hardware security flaws/backdoors in random chinese routers
<libreboot that strips intel me can't paper over hardware security flaws/backdoors in random chinese computers
Give a list of non-chinese, uncompromised vendors you pajeet "sysadmin"
>datacenter surplus
<power efficient hardware doesn't exist in a datacenter
You can even do this on a low end laptop with an express card slot.
Have a picture of your mom so I can get some more laughs.
No.915436
File: 8f7e724c5bed465⋯.png (158.9 KB, 816x1056, 17:22, APU pajeetnet.png)
Why not do it like pic related?
No.915443
>>915382
Ok, since you clearly don't understand how VLANs/switching/routing/english works, here's a breakdown of what you're proposing: (assumption is that this is all consumer hardware)
A frame comes in on the access port from one of the isolated segments, and is destined for the router (this is probably 90% of traffic)
It needs a VLAN tag added so it can go out the trunk to the router, so it makes one trip through the switch to go to the management plane, gets a tag added
It then goes through the switch again to go from the management plane to the trunk port and is sent to the router
So it has to make twice as many trips through the switching hardware.
Additionally, if the internet connection comes into the switch in question, it has to make 2 more trips through the switch to get the VLAN tag removed
>You can max out gigE on 4 year old netgear or dlink unmanaged switches.
Yes, that's what a gigabit of throughput means. The problems start when you have 2 or more streams trying to use that gigabit of throughput, you're not going to get a gigabit on each port. Hell, mid-range commercial switches will only have ~5Gb/s of throughput for a 12 port gigabit switch. Or less if you're enough of a sucker to buy HP
So once you get the amplification effect of having to pass frames to the management plane and/or making a round trip out to the router, you start to eat up the throughput rather quickly.
><libreboot that strips intel me can't paper over hardware security flaws/backdoors in random chinese computers
You have options, like using discrete NICs, removing the ME, using an AMD CPU that predates the PSP, or the raspberry pi, since broadcom can't into following the GPL.
Explain what you're going to do about the vendor-provided stage 1 bootloader that can talk to the NICs on your consumer router.
><power efficient hardware doesn't exist in a datacenter
>t. have never bought datacenter surplus.
It's efficient, yes, but it also has a very long service life, so what you're buying surplus was considered power efficient in 2010. It's also large scale, so while you can get systems with a good perf/watt, the power consumption is still high. So it's efficient if you need all that performance, not so much if your performance needs can be met with an 5 year old embedded celery. | 2018-05-26 06:03:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2284252941608429, "perplexity": 4438.543901157047}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867311.83/warc/CC-MAIN-20180526053929-20180526073929-00555.warc.gz"} |
http://openstudy.com/updates/4dc47f87a5918b0b2bc98120 | ## anonymous 5 years ago Please solve this step by step. You dont have to explain it in detail. (16^1/4)^3
1. anonymous
$(16^{1/4})^{3}$ Sorry I dont know how to write fractions properly.
2. anonymous
$16^{1/4}*16^{1/4}*16^{1/4}=16^{1/4+1/4+1/4}=16^{3/4}=8$
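Equivalently, applying the power rule $(a^m)^n = a^{mn}$ directly:

$$\left(16^{1/4}\right)^{3} = 16^{3/4} = \left(2^{4}\right)^{3/4} = 2^{3} = 8$$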
3. anonymous
$(\sqrt[4]{16})^{3}$ | 2017-01-21 02:13:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40541157126426697, "perplexity": 1549.441045455373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00033-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://www.gamedev.net/forums/topic/674864-vehicle-steering/ | # Vehicle Steering
## Recommended Posts
I want to make the vehicle steer to look at the player, so if the player is on the vehicle right side, I should call "vehicle->SteerRight()", if the player is on the left I should call "vehicle->SteerLeft()"
Given (D3DXVECTOR3 PlayerPosition and D3DXMATRIX VehicleWorldMatrix) how do I find out if the vehicle should steer right or left?
Edited by Medo3337
##### Share on other sites
If PlayerPosition is in world space then you just need to do an inverse transform of its position by the vehicle matrix. After that, the sign of the sideways coordinate (X in D3D's convention) will tell you if he is on the left or right side.
##### Share on other sites
@BoredEngineer: Can you please provide code example?
##### Share on other sites
Another way of doing this, that I personally find more intuitive, is to put a plane that runs through the vehicle and essentially cuts it in a left and a right half. Then you can just detect which side of the plane the player is on, which is usually done through a signed distance check against the plane. Positive might mean left and negative right, depending on which side of the plane you consider "up".
Edited by GuyWithBeard
##### Share on other sites
If you have a forward vector of the vehicle, you could also project playerpos, vehiclepos and the forward vector to the 2D ground plane, rotate the projected forward 90° to get a right_vector, and do:
(playerpos - vehiclepos) dot right_vector
and check the sign.
Projecting to 2D ground plane just means removing the y coordinate from all coordinates. (or z, or whatever is "up" in your world)
Might not work like you want if you want to be driving on walls and ceilings, but if you don't need that, it should work.
Edited by Olof Hedman
##### Share on other sites
I'm not sure if this is correct, but here is what I'm doing now and it seems that it works:
// Vehicle forward (view) direction: negated third row of the world matrix
D3DXVECTOR3 view( -VehicleWorldMatrix.m[2][0], -VehicleWorldMatrix.m[2][1], -VehicleWorldMatrix.m[2][2] );
D3DXVec3Normalize(&view, &view);
D3DXVECTOR3 toTarget = PlayerPosition - VehiclePosition;
// The sign of the y-component of cross(toTarget, view) gives the side
D3DXVECTOR3 cross;
D3DXVec3Cross(&cross, &toTarget, &view);
if (cross.y > 0)
{
// Steer right
} else {
// Steer left
}
If you think there is something wrong, please let me know.
Edited by Medo3337
##### Share on other sites
view would be what I called "forward_vector".
Your method should work fine in most situations, with the same limitations as what I posted.
just a bit more unnecessary calculations compared to my suggestion
BoredEngineer's solution is the most general-purpose and would work regardless of the orientation of the vehicle. (assuming you want "right of" to mean from the vehicle's perspective, which from an outside observer would be "above" or "below" the vehicle, if it was turned on its side)
In your and my solution, the roll of the vehicle (or rotation around the length of the vehicle, or whatever you want to call it) will not affect what is "right of" and "left of" the vehicle.
No need to normalize the "view" vector btw, since you are only interested in the sign of y.
Edited by Olof Hedman
##### Share on other sites
@Olof Hedman:
BoredEngineer's solution is the most general-purpose and would work regardless of the orientation of the vehicle. (assuming you want "right of" to mean from the vehicle's perspective, which from an outside observer would be "above" or "below" the vehicle, if it was turned on its side)
D3DXMatrixInverse is taking FLOAT* pDeterminant,
I have: D3DXVECTOR3 PlayerPosition; and D3DXMATRIX VehicleWorldMatrix
How do I call D3DXMatrixInverse() ?
##### Share on other sites
Read the docs. "Pointer to a FLOAT value containing the determinant of the matrix. If the determinant is not needed, set this parameter to NULL."
× | 2018-06-24 19:12:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3330965042114258, "perplexity": 1789.4399431262455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867050.73/warc/CC-MAIN-20180624180240-20180624200240-00556.warc.gz"} |
https://plainmath.net/4544/describe-a-difference-between-exponential-growth-and-logistic-growth | Question
Describe a difference between exponential growth and logistic growth.
Exponential growth and decay
Exponential growth means that the quantity grows at a rate directly proportional to its current size; the model is $$A= A_{o}e^{kt}$$. Logistic growth differs from exponential growth because nothing can grow exponentially forever. The mathematical model of the logistic function is $$f(t) = c/(1+ae^{-bt})$$. As time increases, the term $$ae^{-bt}$$ tends to 0, so $$f(t)$$ approaches $$c$$, the limiting size. | 2021-07-31 03:31:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8779985308647156, "perplexity": 313.3579704971713}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154042.23/warc/CC-MAIN-20210731011529-20210731041529-00567.warc.gz"}
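The contrast can be checked numerically; a minimal sketch (the constants A0=1, k=b=0.5, c=100, a=9 are arbitrary illustrative choices, not from the original question):

```python
import math

def exponential(t, A0=1.0, k=0.5):
    # A(t) = A0 * e^(k t): growth rate proportional to current size, unbounded
    return A0 * math.exp(k * t)

def logistic(t, c=100.0, a=9.0, b=0.5):
    # f(t) = c / (1 + a e^(-b t)): levels off at the limiting size c
    return c / (1.0 + a * math.exp(-b * t))

print(logistic(0))                 # 10.0, i.e. c / (1 + a)
print(round(logistic(50), 6))      # 100.0: a*e^(-b t) -> 0, so f(t) -> c
print(exponential(50) > logistic(50))  # True: the exponential keeps growing
```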
https://cs.stackexchange.com/questions/84307/what-are-the-rules-for-positive-recursive-types-in-dependent-type-theory | # What are the rules for positive recursive types in dependent type theory?
I've recently started independently learning type theory, using a combination of papers found online and ncatlab.org (but have not worked with category theory), and am about to start reading TAPL.
I'm interested in understanding how recursive types can be substituted for inductive types in dependent type theory (we assume dependent sums, dependent products, identity types and finite types 0, 1, 2 in our background theory); however, I am unable to find a definition of recursive types that avoids unrestricted fixpoint operators, which destroy strong normalisation.
I've attempted to do this myself, but the dependent eliminator seems slightly too weak, and a restriction on if b then x else y such that the boolean expression is evaluated before x and y can be reduced seems to be required for normalisation.
$$\frac{A:\text{Type} \vdash F(A):\text{Type} \\ A\text{ is positive in }F(A)}{\mu A.F(A):\text{Type}}$$
$$\frac{a:F(\mu A.F(A))}{S(a):\mu A.F(A)}$$
$$\frac{B:\text{Type}, b:B \vdash C(b,B):\text{Type} \\ B:\text{Type}, e: \prod_{b:B}C(b,B)\vdash R(B,e) : \prod_{f:F(B)}C(f,F(B))}{rec(R):\prod_{a:\mu A.F(A)}C(a, \mu A.F(A))}$$
$$\frac{B:\text{Type}, b:B \vdash C(b,B):\text{Type} \quad a:\mu A.F(A)\\ B:\text{Type}, e: \prod_{b:B}C(b,B)\vdash R(B,e) : \prod_{f:F(B)}C(f,F(B))}{rec(R)S(a)=R(\mu A.F(A),rec(R))a:C(a, \mu A.F(A))}$$
Where A is positive in F(A) precisely when A does not occur in any type indexing any dependent product types in F(A).
Is my formulation correct? What is the correct way to formulate rules for normalising recursive types, and are there any references that I could look at that expand on these?
• Ok, next thing in order to say is that you should not ask the same question twice. This is a duplicate of cstheory.stackexchange.com/questions/39592/… – Andrej Bauer Nov 22 '17 at 12:42
• Deleted. However, I fail to see how this is a duplicate? There are formulations of (strictly positive) recursive types that are weakly normalising, so (with a reduction strategy) that doesn't seem to be a valid argument for why inductive types are generally preferred? – Jem Nov 22 '17 at 13:34
• Could you please change the title of the question so that it refers to "(non-strictly) positive recursive types" or some such? – Andrej Bauer Nov 22 '17 at 14:32
## 2 Answers
### Why are recursive types seldomly seen in dependent type theory?
The point of inductive types is precisely that you get normalization. Unrestricted recursive types simply lead to non-normalizing terms.
Given any type $A$, we may inhabit $A$ with a non-normalizing term as follows. Consider the recursive type $$D = D \to A.$$ The term $\omega \mathrel{{:}{=}} \lambda d : D . d \; d$ has type $D \to A$ and it also has type $D$, since they are equal. Therefore $\omega \; \omega$ has type $A$, and in addition $\omega \; \omega$ reduces to itself, giving a non-normalizing term.
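As a sanity check, the self-application dynamics of $\omega \; \omega$ can be replayed in an untyped language (Python here, so this only illustrates the reduction behaviour, not the typing):

```python
import sys

# omega = \d. d d  -- with the recursive type D = D -> A this term would
# typecheck; applied to itself it reduces to itself forever.
omega = lambda d: d(d)

sys.setrecursionlimit(100)   # keep the inevitable loop short
try:
    omega(omega)
    diverged = False
except RecursionError:
    diverged = True

print(diverged)  # True: omega omega never normalizes
```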
If instead of $D = D \to A$ we have just $D \cong D \to A$ then an easy adaptation of the above argument leads to the same conclusion. You just have to coerce $d$ to have type $D \to A$.
Is this bad? If you're using type theory to write programs then it probably isn't that bad. It's actually kind of cool. But if you're using type theory to do reasoning, i.e., you want to use types as propositions, then it's bad because every type has an inhabitant and so every proposition has a proof. It just depends on what you want from type theory.
### What are the rules for recursive types?
$\newcommand{\Type}{\mathsf{Type}}$ Let us try to formulate the rules for recursive types. A reasonable attempt goes along the same lines as what you've suggested: $$\frac{\Gamma, X : \Type \vdash F(X) : \Type}{\Gamma\vdash \mu X . F(X) : \Type}$$ $$\frac{\Gamma \vdash e : F(\mu X . F(X))}{\Gamma \vdash S(e) : \mu X . F(X)}$$ $$\frac{\Gamma, y : \mu X . F(X) \vdash C(y) : \Type \qquad \Gamma, x : F(\mu X . F(X)) \vdash f(x) : C(S(x)) \quad \Gamma \vdash u : \mu X . F(X) }{\Gamma \vdash \mathsf{rec}_F([x . f(x)], u) : C(u)}$$ We shouldn't forget to ask about equations that we expect to hold. The $\beta$-rule would be $$\mathsf{rec}_F([x . f(x)], S(e)) = f(e)$$ and let us throw in an $\eta$-rule $$S(\mathsf{rec}_F([x . x], u)) = u.$$ The $\eta$-rule says that if we take apart $u : \mu X . F(X)$ and put it back together, we will get $u$. So far we have a map $S : F(\mu X . F(X)) \to \mu X . F(X)$, but we also want $R : \mu X . F(X) \to F(\mu X . F(X))$ together with $$S(R(u)) = u \qquad\text{and}\qquad R(S(e)) = e \tag{1}$$ These say that $S$ and $R$ form an isomorphism between $F(\mu X . F(X))$ and $\mu X . F(X)$. We can get such an $R$, namely $$R \mathbin{{:}{=}} \lambda y : F(\mu X . F(X)) . \mathsf{rec}_F([x . x], y).$$ Indeed, we have by the $\beta$-rule $$R(S(e)) = \mathsf{rec}_F([x . x], S(e)) = e$$ and by the $\eta$-rule $$S(R(u)) = S(\mathsf{rec}_F([x . x], u)) = u.$$
We could have started with having $S$ and $R$ satisfying equations (1), and that would allow us to derive the eliminator as $$\mathsf{rec}_F ([x . f(x)], u) = f(R(u)).$$ Such an eliminator satisfies the $\beta$-rule and the $\eta$-rule, quite obviously. This should give us pause. We discovered that our rules say precisely that $\mu X . F(X)$ is isomorphic to $F(\mu X . F(X))$ and nothing else. This is not good, because the rules should fix the meaning of $\mu X . F(X)$ up to equivalence of types.
To see that the rules are no good, consider the case $F(X) = X$. Every type is a fixed point of $F$, and our rules say nothing about which one $\mu X . X$ should be. Indeed, we can take an arbitrary type $A$ and set $R$ and $S$ to be the identity maps. That will satisfy all the rules for $\mu X . X$.
We should somehow state which fixed point of $F$ we have in mind when we write down $\mu X . F(X)$. A reasonable choice would be to ask for the smallest or initial one. But what does that mean? In category theory it means that there is a unique homomorphism from the algebra $S : F(\mu X . F(X)) \to \mu X . F(X)$ to any other $F$-algebra. Here we rely on the fact that $F$ is a functor so that we can reasonably talk about $F$ applied to a map. But our $F$ is not going to be a functor in general because it might break covariance (try to extend $F(X) = X \to X$ so that it takes $f : A \to B$ to some $F(f) : F(A) \to F(B)$ and you will see what the problem is). We are stuck, as we have no good criterion for picking one of the possible fixed points of $F$!
The way out is to decompose $F : \Type \to \Type$ according to covariant and contravariant arguments, $$F : \Type \times \Type^{\mathrm{op}} \to \Type.$$ For example $F(X) = X \to X$ becomes $F(X_1, X_2) = X_2 \to X_1$. But now it's not clear what a fixed point of $F$ might be. All this leads to Freyd's notion of algebraically compact categories and a beautiful theory surrounding them. The theory can be applied to categorical models of programming languages to explain how recursive types work there. When we throw in dependent types, then as far as I know, things break and nobody knows what to do. But I would love to be proved wrong!
The sort of recursive types we spoke about here goes under the name isorecursive types. The other option are equirecursive types, see the same link.
• Thank you for your answer, however I'm interested in recursive types that are positive i.e. where $A$ does not occur on the left of any dependent product types, and these do not generally have this property. I've changed the question to clarify this. – Jem Nov 22 '17 at 13:24
• Can you give an example of such a type constructor which is not an inductive type? – Andrej Bauer Nov 22 '17 at 14:31
• Thank you for your answer. I lack the category theoretic background to understand most of what goes on when you introduce the category-theoretic notions, however, being unable to determine what the recursive type is using the constructors and eliminators, due to not being able to express the required isomorphisms is insightful to me, so thank you for your answer. – Jem Nov 22 '17 at 14:45
• Inductive types are just a syntactic way of stating what it means to have strict positivity. Strict positivity is used for two purposes: (1) to make sure we know what it means to have an algebra, so that we can say that the inductive type is the initial algebra (although this is not how people in type theory talk, it is in essence what is going on), and (2) to have a not-too-complicated proof that such algebras exist. – Andrej Bauer Nov 22 '17 at 14:54
• Your syntactic restriction seems similar to strict positivity, but not quite the same. For example, you could ask for a recursive type given by $F(X) = \mathsf{nat} + \sum_{x : X} \mathsf{Id}_X(x,x)$. That would be weird. – Andrej Bauer Nov 22 '17 at 14:54
A pretty comprehensive reference is Peter Dybjer's Inductive Families paper that presents a very general class of inductive types (I've rarely seen them called "recursive types").
Note that positivity is subsumed by the condition: there exists a morphism
$$\mathrm{map}:\Pi_{T\ U:\mathrm{Type}} (T\rightarrow U)\rightarrow F\ T\rightarrow F\ U$$
and it's reasonable to ask that every such family define an inductive family. I believe this is the view taken by Ralph Matthes in Monotone Fixed-Point Types And Strong Normalization. | 2021-07-27 22:11:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7738103270530701, "perplexity": 264.25932914757584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153491.18/warc/CC-MAIN-20210727202227-20210727232227-00703.warc.gz"} |
https://labs.tib.eu/arxiv/?author=Guangming%20Zhang | • ### Electronic phase separation in iron selenide (Li, Fe)OHFeSe superconductor system(1805.00688)
May 2, 2018 cond-mat.supr-con
The phenomenon of phase separation into antiferromagnetic (AFM) and superconducting (SC) or normal-state regions has great implication for the origin of high-temperature (high-Tc) superconductivity. However, the occurrence of an intrinsic antiferromagnetism above the Tc of (Li, Fe)OHFeSe superconductor is questioned. Here we report a systematic study on a series of (Li, Fe)OHFeSe single crystal samples with Tc up to ~41 K. We observe an evident drop in the static magnetization at Tafm ~125 K, in some of the SC (Tc < ~38 K, cell parameter c < ~9.27 Å) and non-SC samples. We verify that this AFM signal is intrinsic to (Li, Fe)OHFeSe. Thus, our observations indicate mesoscopic-to-macroscopic coexistence of an AFM state with the normal (below Tafm) or SC (below Tc) state in (Li, Fe)OHFeSe. We explain such coexistence by electronic phase separation, similar to that in high-Tc cuprates and iron arsenides. However, such an AFM signal can be absent in some other samples of (Li, Fe)OHFeSe, particularly it is never observed in the SC samples of Tc > ~38 K, owing to a spatial scale of the phase separation too small for the macroscopic magnetic probe. For this case, we propose a microscopic electronic phase separation. It is suggested that the microscopic static phase separation reaches vanishing point in high-Tc (Li, Fe)OHFeSe, by the occurrence of two-dimensional AFM spin fluctuations below nearly the same temperature as Tafm reported previously for a (Li, Fe)OHFeSe (Tc ~42 K) single crystal. A complete phase diagram is thus established. Our study provides key information of the underlying physics for high-Tc superconductivity.
• ### Discovery of a bi-critical point between antiferromagnetic and superconducting phases in pressurized single crystal Ca0.73La0.27FeAs2(1603.05740)
Nov. 24, 2016 cond-mat.supr-con
One of the most strikingly universal features of the high temperature superconductors is that the superconducting phase emerges in the close proximity of the antiferromagnetic phase, and the interplay between these two phases poses a long standing challenge. It is commonly believed that, as the antiferromagnetic transition temperature is continuously suppressed to zero, there appears a quantum critical point, around which the existence of antiferromagnetic fluctuation is responsible for the development of the superconductivity. In contrast to this scenario, we report the discovery of a bi-critical point identified at 2.88 GPa and 26.02 K in the pressurized high quality single crystal Ca0.73La0.27FeAs2 by complementary in situ high pressure measurements. At the critical pressure, we find that the antiferromagnetism suddenly disappears and superconductivity simultaneously emerges at almost the same temperature, and that the external magnetic field suppresses the superconducting transition temperature but hardly affects the antiferromagnetic transition temperature.
• ### Spin correlations and colossal magnetoresistance in HgCr$_2$Se$_4$(1610.00556)
Nov. 9, 2016 cond-mat.mes-hall
This study aims to unravel the mechanism of colossal magnetoresistance (CMR) observed in n-type HgCr$_2$Se$_4$, in which low-density conduction electrons are exchange-coupled to a three-dimensional Heisenberg ferromagnet with a Curie temperature $T_C\approx$ 105 K. Near room temperature the electron transport exhibits an ordinary semiconducting behavior. As temperature drops below $T^*\simeq2.1T_C$, the magnetic susceptibility deviates from the Curie-Weiss law, and concomitantly the transport enters an intermediate regime exhibiting a pronounced CMR effect before a transition to metallic conduction occurs at $T<T_C$. Our results suggest an important role of spin correlations not only near the critical point, but also for a wide range of temperatures ($T_C<T<T^*$) in the paramagnetic phase. In this intermediate temperature regime the transport undergoes a percolation type of transition from isolated magnetic polarons to a continuous network when temperature is lowered or magnetic field becomes stronger.
• ### Proximity induced superconductivity within the insulating (Li$_{0.84}$Fe$_{0.16}$)OH layers in (Li$_{0.84}$Fe$_{0.16}$)OHFe$_{0.98}$Se(1602.06240)
Feb. 19, 2016 cond-mat.supr-con
The role played by the insulating intermediate (Li$_{0.84}$Fe$_{0.16}$)OH layer on magnetic and superconducting properties of (Li$_{0.84}$Fe$_{0.16}$)OHFe$_{0.98}$Se was studied by means of muon-spin rotation. It was found that it is not only enhances the coupling between the FeSe layers for temperatures below $\simeq 10$ K, but becomes superconducting by itself due to the proximity to the FeSe ones. Superconductivity in (Li$_{0.84}$Fe$_{0.16}$)OH layers is most probably filamentary-like and the energy gap value, depending on the order parameter symmetry, does not exceed 1-1.5 meV.
• ### (Li0.84Fe0.16)OHFe0.98Se superconductor: Ion-exchange synthesis of large single crystal and highly two-dimensional electron properties(1502.04688)
June 16, 2015 cond-mat.supr-con
A large and high-quality single crystal (Li0.84Fe0.16)OHFe0.98Se, the optimal superconductor of the newly reported (Li1-xFex)OHFe1-ySe system, has been successfully synthesized via a hydrothermal ion-exchange technique. The superconducting transition temperature (Tc) of 42 K is determined by magnetic susceptibility and electric resistivity measurements, and the zero-temperature upper critical magnetic fields are evaluated as 79 and 313 Tesla for the field along the c-axis and the ab-plane, respectively. The ratio of out-of-plane to in-plane electric resistivity, ρc/ρab, is found to increase with decreasing temperature and to reach a high value of 2500 at 50 K, with an evident kink occurring at a characteristic temperature T*=120 K. The negative in-plane Hall coefficient indicates that electron carriers dominate in the charge transport, and the hole contribution is significantly reduced as the temperature is lowered to approach T*. From T* down to Tc, we observe the linear temperature dependences of the in-plane electric resistivity and the magnetic susceptibility for the FeSe layers. Our findings thus reveal that the normal state of (Li0.84Fe0.16)OHFe0.98Se becomes highly two-dimensional and anomalous prior to the superconducting transition, providing a new insight into the mechanism of high-Tc superconductivity.
• ### Phase evolution of the two-dimensional Kondo lattice model near half-filling(1506.01525)
June 4, 2015 cond-mat.str-el
Within a mean-field approximation, the ground state and finite temperature phase diagrams of the two-dimensional Kondo lattice model have been carefully studied as functions of the Kondo coupling $J$ and the conduction electron concentration $n_{c}$. In addition to the conventional hybridization between local moments and itinerant electrons, a staggered hybridization is proposed to characterize the interplay between the antiferromagnetism and the Kondo screening effect. As a result, a heavy fermion antiferromagnetic phase is obtained and separated from the pure antiferromagnetic ordered phase by a first-order Lifshitz phase transition, while a continuous phase transition exists between the heavy fermion antiferromagnetic phase and the Kondo paramagnetic phase. We have developed a efficient theory to calculate these phase boundaries. As $n_{c}$ decreases from the half-filling, the region of the heavy fermion antiferromagnetic phase shrinks and finally disappears at a critical point $n_{c}^{*}=0.8228$, leaving a first-order critical line between the pure antiferromagnetic phase and the Kondo paramagnetic phase for $n_{c}<n_{c}^{* }$. At half-filling limit, a finite temperature phase diagram is also determined on the Kondo coupling and temperature ($J$-$T$) plane. Notably, as the temperature is increased, the region of the heavy fermion antiferromagnetic phase is reduced continuously, and finally converges to a single point, together with the pure antiferromagnetic phase and the Kondo paramagnetic phase. The phase diagrams with such triple point may account for the observed phase transitions in related heavy fermion materials.
• ### Superconductivity emerging from suppressed large magnetoresistant state in WTe2(1502.00493)
Feb. 24, 2015 cond-mat.supr-con
The recent discovery of large and non-saturating magnetoresistance (LMR) in WTe2 provides a unique playground for finding new phenomena and significant prospects for potential applications. Here we report the first observation of superconductivity in the proximity of the suppressed LMR state in pressurized WTe2, through high-pressure synchrotron X-ray diffraction, electrical resistance, magnetoresistance, and ac magnetic susceptibility measurements. It is found that the positive magnetoresistance effect can be turned off at a critical pressure of 10.5 GPa without a crystal structure change, and superconductivity emerges simultaneously. The superconducting transition temperature reaches a maximum of 6.5 K at ~15 GPa and decreases to 2.6 K at ~25 GPa. In-situ high-pressure Hall coefficient measurements at 10 K demonstrate that elevating pressure decreases the hole carrier population but increases the electron carrier population. Significantly, at the critical pressure we observe a sign change in the Hall coefficient, indicating a possible Lifshitz-type quantum phase transition in WTe2.
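The Hall-sign argument in the last sentence rests on the usual single-band convention $R_H = -1/(ne)$ for electrons and $+1/(pe)$ for holes, so a sign change signals a change in the dominant carrier type. A minimal sketch of that convention (the carrier densities below are arbitrary placeholders for illustration, not values from the paper):

```python
# Single-band Hall coefficient: R_H = -1/(n e) for electrons,
# +1/(p e) for holes. A sign change in R_H therefore indicates a
# change in the dominant carrier type, as invoked in the abstract.
# Densities are illustrative placeholders, not measured values.
E_CHARGE = 1.602176634e-19  # elementary charge, coulombs


def hall_coefficient(density_per_m3: float, carrier: str) -> float:
    """Single-band Hall coefficient in m^3/C for 'electron' or 'hole'."""
    sign = -1.0 if carrier == "electron" else +1.0
    return sign / (density_per_m3 * E_CHARGE)


print(hall_coefficient(1e27, "electron"))  # negative: electrons dominate
print(hall_coefficient(1e27, "hole"))      # positive: holes dominate
```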
• ### Phase diagram of (Li1-xFex)OHFeSe: a bridge between iron selenide and arsenide superconductors(1412.7236)
Dec. 23, 2014 cond-mat.supr-con
Previous experimental results have shown important differences between iron selenide and arsenide superconductors, which seem to suggest that the high-temperature superconductivity in these two subgroups of the iron-based family may arise from different electronic ground states. Here, we report the complete phase diagram of a newly synthesized superconducting (SC) system, (Li1-xFex)OHFeSe, with a structure similar to FeAs-based superconductors. In the non-SC samples, an antiferromagnetic (AFM) spin-density-wave (SDW) transition occurs at ~127 K. This is the first demonstration of such an SDW phase in an FeSe-based superconductor system. Transmission electron microscopy (TEM) shows that the well-known $\sqrt{5} \times \sqrt{5}$ iron-vacancy ordered state, which gives rise to an AFM order at ~500 K in the AyFe2-xSe2 (A = metal ions) superconductor system, is absent in both non-SC and SC samples, while a unique superstructure with a modulation wave vector q = 1/2(1, 1, 0), identical to that seen in the SC phase of KyFe2-xSe2, is dominant in the optimal SC sample (with an SC transition temperature Tc = 40 K). Hence, we conclude that the high-Tc superconductivity in (Li1-xFex)OHFeSe stems from similarly weak AFM fluctuations as in FeAs-based superconductors, suggesting a universal physical picture for both iron selenide and arsenide superconductors.
• ### Emergence of a coherent in-gap state in SmB6 Kondo insulator revealed by scanning tunneling spectroscopy(1403.0091)
We use scanning tunneling microscopy to investigate the (001) surface of cleaved SmB6 Kondo insulator. Variable-temperature dI/dV spectroscopy up to 60 K reveals a gap-like density-of-states suppression around the Fermi level, which is due to hybridization between the itinerant Sm 5d band and the localized Sm 4f band. At temperatures below 40 K, a sharp coherence peak emerges within the hybridization gap near the lower gap edge. We propose that the in-gap resonance state is due to a collective excitation of magnetic origin, in the presence of spin-orbit coupling and mixed-valence fluctuations. These results shed new light on the electronic structure evolution and transport anomaly in SmB6.