Columns: url (string, 14–2.42k chars) · text (string, 100–1.02M chars) · date (string, 19 chars) · metadata (string, 1.06k–1.1k chars)
http://nrich.maths.org/866/solution
Arclets Stage: 3 Challenge Level: I received two superb solutions from Sheila Norrie, Shona Leenhouts and Alison Colvin, and Sarah Gibbs, Kathryn Husband and Gordon Ducan from Madras College, St. Andrews. It is so difficult to do justice to them before our publication date for this month. I will try to work on them over the next couple of weeks and publish them, or at least a flavour of them, next month. Well done to all of you. A solution based on the one received from Andrei Lazanu from School 205, Bucharest, Romania is used below - but do watch this space next month! First, I looked at the figure with four nodes. The node is formed from $2/4$ of a circle, and the arc connecting two nodes from $1/4$ of a circle. The perimeter of this arclet is: $$4\pi \times \left(\frac{2}{4} + \frac{1}{4}\right) = 4 \pi \times \frac{3}{4} = 3\pi$$ Looking at the figure with three nodes, I observe the same pattern: the node contains $2/3$ of a circle, and the arc connecting two nodes $1/3$. The perimeter of the figure is: $$3\pi \times \left(\frac{2}{3} + \frac{1}{3}\right) = 3\pi$$ The circumference of a circle of radius $r$ being $2\pi r$, the perimeter is in this case $6\pi r$. For $n = 4$, each node has at its end half a circle, and $4$ quarters join them. So there are in total $3$ circles, with the same perimeter as before. Now I observed a pattern: the node contains $2/n$ of a circle, and the arc connecting two nodes $1/n$. The perimeter of the figure is: $$n\pi \times \frac{3}{n} = 3\pi$$ When $n$ is very large the figure is very close to a circle. The radius of this circle is $3r$ (three times the radius of the circle whose arcs form the figure with "nodes", from which the big figure is formed). This radius $R$ of the circle circumscribed about the figure is the same for any $n$. In fact, it is rather difficult to prove this; I saw it first for $n = 4$, where it could be observed directly that $R$ is formed by $3$ small radii, and then for large $n$. Maybe it would be necessary to use induction! Then I tested the formula I obtained for $n = 5$. As I know that the figure stays inside a circle of radius $3r$, it is easy to draw it. Each node has at its end an arc of radius $r$ and of angle $144^{\circ}$. The obtained figure is below.
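To restate the general pattern as a single check (my addition, not part of the submitted solutions): with small circles of radius $r$, an $n$-node arclet consists of $n$ node arcs of $\frac{2}{n}$ of a circumference each and $n$ connecting arcs of $\frac{1}{n}$ each, so its perimeter is $$P = n \cdot 2\pi r \left(\frac{2}{n} + \frac{1}{n}\right) = 6\pi r = 2\pi(3r),$$ which is exactly the circumference of the circumscribed circle of radius $R = 3r$.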
2015-10-08 18:09:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7290244102478027, "perplexity": 392.3804361112444}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737898933.47/warc/CC-MAIN-20151001221818-00203-ip-10-137-6-227.ec2.internal.warc.gz"}
https://codereview.stackexchange.com/questions/270331/librarian-script-to-find-and-copy-a-file
# Librarian script to find and copy a file I wrote this script partly as an exercise in learning Bash and partly because I needed a tool to find specific versions of library files and copy them to a particular device. The script takes parameters that control its operation. First, it searches for the file specified by the user. Then it displays the search results, allowing the user to pick one of the files found, and (assuming a file was selected) copying it to the specified destination/path. I would most like feedback on whether the script could be written more clearly or perhaps whether there are better ways to approach its tasks.

```bash
#!/bin/bash
# libr.sh (librarian): find the file you need, and put it where you want.

#-------------------------process parameters
if [ $# -eq 0 ] || [ $# -gt 3 ]
then
    echo "Usage: libr <FILENAME> [DESTINATION] [IP]"
    echo "The librarian will find FILENAME in the current directory or below"
    echo "and will then copy it to the DESTINATION path on the unit at IP."
    echo "If multiple files are found, the user may choose which to copy."
    exit
fi

FILE=$1
DEST=${2:-/usr/lib/}
IP=${3:-160.48.199.99}

#-------------------------find the file
FULLRESULTS=$(find -name $FILE 2> /dev/null)
echo "File \"$FILE\" was found in the following directories:"
if [ -z "$FULLRESULTS" ]
then
    echo "    None!"
    exit
fi

# find returns the path plus filename for each file found
# but we just want the directories, for ease of reading
RESULTS=$(dirname $FULLRESULTS)

#-------------------------display results and get the source path
PS3="Which one do you want to copy ? "
select SRCPATH in $RESULTS
do
    if [ ! -z "$SRCPATH" ]
    then
        #-----------------copy the file
        CMD="scp $SRCPATH/$FILE root@$IP:$DEST"
        echo "Executing: $CMD"
        $CMD
    fi
    # we don't actually want to loop
    break
done
```

The biggest problem I see with this is dealing with filenames that have spaces in them. Your select line will almost certainly break. The find command will let you get the results null-terminated using -print0, but I'm not sure how to get the select to work with that. Hopefully in your use-case you can avoid filenames with spaces and avoid fixing this.

• Move the thens up on the same line as the if with ; then.
• Rather than put the scp command into a variable just so you can display it, you could turn on tracing with set -x and then turn it back off right after with set +x.
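One possible answer to the open -print0 question (a sketch I have not tested against the original setup; it assumes Bash 4.4+ for mapfile -d): read the NUL-delimited find output into an array first, then let select iterate over the array, which keeps embedded spaces intact per element:

```bash
#!/bin/bash
# Read matches NUL-delimited so paths with spaces (or even newlines) survive intact.
# mapfile -d '' (an empty delimiter means NUL) requires bash 4.4 or later.
mapfile -d '' -t fullresults < <(find . -name "$FILE" -print0 2>/dev/null)

# Reduce each match to its directory, keeping one array element per path.
dirs=()
for f in "${fullresults[@]}"; do
    dirs+=("$(dirname "$f")")
done

PS3="Which one do you want to copy ? "
select srcpath in "${dirs[@]}"; do
    [ -n "$srcpath" ] && scp "$srcpath/$FILE" "root@$IP:$DEST"
    break
done
```

(FILE, IP, and DEST are the variables already defined in the script under review.)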
2021-12-02 10:36:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5180326104164124, "perplexity": 2269.679493914612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361253.38/warc/CC-MAIN-20211202084644-20211202114644-00056.warc.gz"}
https://www.physicsforums.com/threads/comparison-test-for-series.756537/
# Comparison test for series I'm trying to find if this series converges or diverges using the comparison test: [the series was posted as an image attachment, which is not preserved here]. My problem is, I'm not sure how to go from 1/2^(n+1) to 1/2(1/2)^n. Can you please explain that to me? #### Attachments [two images, not preserved] $$\frac{1}{2^{n+1}}=\frac{1}{2\cdot 2^n}=\frac{1}{2}\frac{1}{2^n}=\frac{1}{2}\left(\frac{1}{2}\right)^n$$
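To complete the convergence argument (a step the thread leaves implicit): $\sum_n \left(\frac{1}{2}\right)^n$ is a geometric series with ratio $\frac{1}{2} < 1$, so it converges, and so does any constant multiple of it: $$\sum_{n=0}^{\infty}\frac{1}{2}\left(\frac{1}{2}\right)^n = \frac{1}{2}\cdot\frac{1}{1-\frac{1}{2}} = 1.$$ A series whose terms are eventually bounded by $\frac{1}{2^{n+1}}$ in absolute value therefore converges by comparison.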
2020-10-20 14:35:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.76856929063797, "perplexity": 1784.0771956510366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107872746.20/warc/CC-MAIN-20201020134010-20201020164010-00530.warc.gz"}
http://www.lastfm.es/user/themorningson/library/music/Noel+Gallagher's+High+Flying+Birds/_/The+Death+Of+You+And+Me?sortBy=date&sortOrder=asc&setlang=es
# Music Library » Noel Gallagher's High Flying Birds » ## The Death of You and Me 192 scrobbles | Go to the track page Tracks (192): Track · Album · Length · Date. Every row is the same track, The Death of You and Me (3:29); the original page lists one row per scrobble, with timestamps running from 25 Jul 2011, 14:38 to 10 Jan 2014, 17:02. The per-scrobble timestamp listing is omitted here.
2015-05-25 02:28:02
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9404587149620056, "perplexity": 10351.172632846763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928114.23/warc/CC-MAIN-20150521113208-00218-ip-10-180-206-219.ec2.internal.warc.gz"}
https://learn.microsoft.com/en-us/dotnet/standard/asynchronous-programming-patterns/consuming-the-task-based-asynchronous-pattern
# Consuming the Task-based Asynchronous Pattern When you use the Task-based Asynchronous Pattern (TAP) to work with asynchronous operations, you can use callbacks to achieve waiting without blocking. For tasks, this is achieved through methods such as Task.ContinueWith. Language-based asynchronous support hides callbacks by allowing asynchronous operations to be awaited within normal control flow, and compiler-generated code provides this same API-level support. ## Suspending Execution with Await You can use the await keyword in C# and the Await Operator in Visual Basic to asynchronously await Task and Task<TResult> objects. When you're awaiting a Task, the await expression is of type void. When you're awaiting a Task<TResult>, the await expression is of type TResult. An await expression must occur inside the body of an asynchronous method. (These language features were introduced in .NET Framework 4.5.) Under the covers, the await functionality installs a callback on the task by using a continuation. This callback resumes the asynchronous method at the point of suspension. When the asynchronous method is resumed, if the awaited operation completed successfully and was a Task<TResult>, its TResult is returned. If the Task or Task<TResult> that was awaited ended in the Canceled state, an OperationCanceledException exception is thrown. If the Task or Task<TResult> that was awaited ended in the Faulted state, the exception that caused it to fault is thrown. A Task can fault as a result of multiple exceptions, but only one of these exceptions is propagated. However, the Task.Exception property returns an AggregateException exception that contains all the errors. If a synchronization context (SynchronizationContext object) is associated with the thread that was executing the asynchronous method at the time of suspension (for example, if the SynchronizationContext.Current property is not null), the asynchronous method resumes on that same synchronization context by using the context's Post method. Otherwise, it relies on the task scheduler (TaskScheduler object) that was current at the time of suspension. Typically, this is the default task scheduler (TaskScheduler.Default), which targets the thread pool. This task scheduler determines whether the awaited asynchronous operation should resume where it completed or whether the resumption should be scheduled. The default scheduler typically allows the continuation to run on the thread that the awaited operation completed. When an asynchronous method is called, it synchronously executes the body of the function up until the first await expression on an awaitable instance that has not yet completed, at which point the invocation returns to the caller. If the asynchronous method does not return void, a Task or Task<TResult> object is returned to represent the ongoing computation. In a non-void asynchronous method, if a return statement is encountered or the end of the method body is reached, the task is completed in the RanToCompletion final state. If an unhandled exception causes control to leave the body of the asynchronous method, the task ends in the Faulted state. If that exception is an OperationCanceledException, the task instead ends in the Canceled state. In this manner, the result or exception is eventually published. There are several important variations of this behavior. For performance reasons, if a task has already completed by the time the task is awaited, control is not yielded, and the function continues to execute. 
Additionally, returning to the original context isn't always the desired behavior and can be changed; this is described in more detail in the next section. ### Configuring Suspension and Resumption with Yield and ConfigureAwait Several methods provide more control over an asynchronous method's execution. For example, you can use the Task.Yield method to introduce a yield point into the asynchronous method: public class Task : … { public static YieldAwaitable Yield(); … } This is equivalent to asynchronously posting or scheduling back to the current context. Task.Run(async delegate { for(int i=0; i<1000000; i++) { await Task.Yield(); // fork the continuation into a separate work item ... } }); You can also use the Task.ConfigureAwait method for better control over suspension and resumption in an asynchronous method. As mentioned previously, by default, the current context is captured at the time an asynchronous method is suspended, and that captured context is used to invoke the asynchronous method's continuation upon resumption. In many cases, this is the exact behavior you want. In other cases, you may not care about the continuation context, and you can achieve better performance by avoiding such posts back to the original context. To enable this, use the Task.ConfigureAwait method to inform the await operation not to capture and resume on the context, but to continue execution wherever the asynchronous operation that was being awaited completed: await someTask.ConfigureAwait(continueOnCapturedContext:false); ## Canceling an Asynchronous Operation Starting with .NET Framework 4, TAP methods that support cancellation provide at least one overload that accepts a cancellation token (CancellationToken object). A cancellation token is created through a cancellation token source (CancellationTokenSource object). The source's Token property returns the cancellation token that will be signaled when the source's Cancel method is called. For example, if you want to download a single webpage and you want to be able to cancel the operation, you create a CancellationTokenSource object, pass its token to the TAP method, and then call the source's Cancel method when you're ready to cancel the operation: var cts = new CancellationTokenSource(); … // at some point later, potentially on another thread cts.Cancel(); To cancel multiple asynchronous invocations, you can pass the same token to all invocations: var cts = new CancellationTokenSource(); // at some point later, potentially on another thread … cts.Cancel(); Or, you can pass the same token to a selective subset of operations: var cts = new CancellationTokenSource(); byte [] data = await DownloadDataAsync(url, cts.Token); await SaveToDiskAsync(outputPath, data, CancellationToken.None); … // at some point later, potentially on another thread cts.Cancel(); Important Cancellation requests may be initiated from any thread. You can pass the CancellationToken.None value to any method that accepts a cancellation token to indicate that cancellation will never be requested. This causes the CancellationToken.CanBeCanceled property to return false, and the called method can optimize accordingly. For testing purposes, you can also pass in a pre-canceled cancellation token that is instantiated by using the constructor that accepts a Boolean value to indicate whether the token should start in an already-canceled or not-cancelable state. 
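A minimal self-contained sketch of the cancellation pattern just described (my addition, not a sample from this article; DownloadStringAsync is a stand-in for any token-accepting TAP method):

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class CancellationDemo
{
    static readonly HttpClient s_client = new HttpClient();

    // A stand-in TAP method; any overload that accepts a CancellationToken behaves the same way.
    static async Task<string> DownloadStringAsync(string url, CancellationToken token)
    {
        HttpResponseMessage response = await s_client.GetAsync(url, token);
        return await response.Content.ReadAsStringAsync();
    }

    static async Task Main()
    {
        var cts = new CancellationTokenSource();
        cts.CancelAfter(TimeSpan.FromSeconds(2)); // cancellation may be requested from any thread

        try
        {
            string page = await DownloadStringAsync("https://example.com", cts.Token);
            Console.WriteLine($"Downloaded {page.Length} chars");
        }
        catch (OperationCanceledException)
        {
            // Awaiting a task that ended in the Canceled state throws OperationCanceledException.
            Console.WriteLine("Download was canceled.");
        }
    }
}
```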
This approach to cancellation has several advantages: • You can pass the same cancellation token to any number of asynchronous and synchronous operations. • The same cancellation request may be proliferated to any number of listeners. • The developer of the asynchronous API is in complete control of whether cancellation may be requested and when it may take effect. • The code that consumes the API may selectively determine the asynchronous invocations that cancellation requests will be propagated to. ## Monitoring Progress Some asynchronous methods expose progress through a progress interface (typically IProgress<T>) passed into the asynchronous method. For example, consider a function that asynchronously downloads a string of text, and along the way raises progress updates that include the percentage of the download that has completed thus far. Such a method could be consumed in a Windows Presentation Foundation (WPF) application as follows: private async void btnDownload_Click(object sender, RoutedEventArgs e) { try { … } } ## Using the Built-in Task-based Combinators The System.Threading.Tasks namespace includes several methods for composing and working with tasks. The Task class includes several Run methods that let you easily offload work as a Task or Task<TResult> to the thread pool, for example: public async void button1_Click(object sender, EventArgs e) { textBox1.Text = await Task.Run(() => { // … do compute-bound work here }); } Some of these Run methods, such as the Task.Run(Func<Task>) overload, exist as shorthand for the TaskFactory.StartNew method. This overload enables you to use await within the offloaded work, for example: public async void button1_Click(object sender, EventArgs e) { pictureBox1.Image = await Task.Run(async() => { return Mashup(bmp1, bmp2); }); } Such overloads are logically equivalent to using the TaskFactory.StartNew method in conjunction with the Unwrap extension method in the Task Parallel Library. Use the FromResult method in scenarios where data may already be available and just needs to be returned from a task-returning method lifted into a Task<TResult>: public Task<int> GetValueAsync(string key) { int cachedValue; return TryGetCachedValue(out cachedValue) ? Task.FromResult(cachedValue) : GetValueAsyncInternal(key); } private async Task<int> GetValueAsyncInternal(string key) { … } Use the WhenAll method to asynchronously wait on multiple asynchronous operations that are represented as tasks. The method has multiple overloads that support a set of non-generic tasks or a non-uniform set of generic tasks (for example, asynchronously waiting for multiple void-returning operations, or asynchronously waiting for multiple value-returning methods where each value may have a different type) and to support a uniform set of generic tasks (such as asynchronously waiting for multiple TResult-returning methods). Let's say you want to send email messages to several customers. You can overlap sending the messages so you're not waiting for one message to complete before sending the next. You can also find out when the send operations have completed and whether any errors have occurred: IEnumerable<Task> asyncOps = from addr in addrs select SendMailAsync(addr); await Task.WhenAll(asyncOps); This code doesn't explicitly handle exceptions that may occur, but lets exceptions propagate out of the await on the resulting task from WhenAll. To handle the exceptions, you can use code such as the following: IEnumerable<Task> asyncOps = from addr in addrs select SendMailAsync(addr); try { await Task.WhenAll(asyncOps); } catch(Exception exc) { ...
} In this case, if any asynchronous operation fails, all the exceptions will be consolidated in an AggregateException exception, which is stored in the Task that is returned from the WhenAll method. However, only one of those exceptions is propagated by the await keyword. If you want to examine all the exceptions, you can rewrite the previous code as follows: Task [] asyncOps = (from addr in addrs select SendMailAsync(addr)).ToArray(); try { await Task.WhenAll(asyncOps); } catch(Exception exc) { foreach(Task faulted in asyncOps.Where(t => t.IsFaulted)) { … // work with faulted and faulted.Exception } } Let's consider an example of downloading multiple files from the web asynchronously. In this case, all the asynchronous operations have homogeneous result types, and it's easy to access the results: string [] pages = await Task.WhenAll(…); You can use the same exception-handling techniques we discussed in the previous void-returning scenario: Task<string> [] asyncOps = …; try { string [] pages = await Task.WhenAll(asyncOps); ... } catch(Exception exc) { foreach(Task<string> faulted in asyncOps.Where(t => t.IsFaulted)) { … // work with faulted and faulted.Exception } } You can use the WhenAny method to asynchronously wait for just one of multiple asynchronous operations represented as tasks to complete. This method serves four primary use cases: • Redundancy: Performing an operation multiple times and selecting the one that completes first (for example, contacting multiple stock quote web services that will produce a single result and selecting the one that completes the fastest). • Interleaving: Launching multiple operations and waiting for all of them to complete, but processing them as they complete. • Throttling: Allowing additional operations to begin as others complete. This is an extension of the interleaving scenario. • Early bailout: For example, an operation represented by task t1 can be grouped in a WhenAny task with another task t2, and you can wait on the WhenAny task. Task t2 could represent a time-out, or cancellation, or some other signal that causes the WhenAny task to complete before t1 completes. #### Redundancy Consider a case where you want to make a decision about whether to buy a stock. There are several stock recommendation web services that you trust, but depending on daily load, each service can end up being slow at different times. You can use the WhenAny method to receive a notification when any operation completes: var recommendations = new List<Task<bool>>() { … }; Task<bool> recommendation = await Task.WhenAny(recommendations); if (await recommendation) BuyStock(symbol); Unlike WhenAll, which returns the unwrapped results of all tasks that completed successfully, WhenAny returns the task that completed. If a task fails, it's important to know that it failed, and if a task succeeds, it's important to know which task the return value is associated with. Therefore, you need to access the result of the returned task, or further await it, as this example shows. As with WhenAll, you have to be able to accommodate exceptions. Because you receive the completed task back, you can await the returned task to have errors propagated, and try/catch them appropriately; for example: List<Task<bool>> recommendations = …; while(recommendations.Count > 0) { Task<bool> recommendation = await Task.WhenAny(recommendations); try { if (await recommendation) BuyStock(symbol); break; } catch(WebException exc) { recommendations.Remove(recommendation); } } Additionally, even if a first task completes successfully, subsequent tasks may fail.
At this point, you have several options for dealing with exceptions: You can wait until all the launched tasks have completed, in which case you can use the WhenAll method, or you can decide that all exceptions are important and must be logged. For this, you can use continuations to receive a notification when tasks have completed asynchronously: foreach(Task recommendation in recommendations) { var ignored = recommendation.ContinueWith( t => { if (t.IsFaulted) Log(t.Exception); }); } or: foreach(Task recommendation in recommendations) { var ignored = recommendation.ContinueWith( t => Log(t.Exception), TaskContinuationOptions.OnlyOnFaulted); } or even: private static async void LogCompletionIfFailed(IEnumerable<Task> tasks) { foreach(var task in tasks) { try { await task; } catch(Exception exc) { Log(exc); } } } … LogCompletionIfFailed(recommendations); Finally, you may want to cancel all the remaining operations: var cts = new CancellationTokenSource(); var recommendations = new List<Task<bool>>() { … }; Task<bool> recommendation = await Task.WhenAny(recommendations); cts.Cancel(); if (await recommendation) BuyStock(symbol); #### Interleaving Consider downloading a set of images and processing each image as it completes: List<Task<Bitmap>> imageTasks = (from imageUrl in urls select GetBitmapAsync(imageUrl)).ToList(); while(imageTasks.Count > 0) { try { Task<Bitmap> imageTask = await Task.WhenAny(imageTasks); imageTasks.Remove(imageTask); Bitmap image = await imageTask; } catch{} } You can also apply interleaving to a scenario that involves computationally intensive processing on the ThreadPool of the downloaded images; for example: List<Task<Bitmap>> imageTasks = (from imageUrl in urls select GetBitmapAsync(imageUrl) .ContinueWith(t => ConvertImage(t.Result))).ToList(); while(imageTasks.Count > 0) { try { Task<Bitmap> imageTask = await Task.WhenAny(imageTasks); imageTasks.Remove(imageTask); Bitmap image = await imageTask; } catch{} } #### Throttling Consider the interleaving example, except that the user is downloading so many images that the downloads have to be throttled; for example, you want only a specific number of downloads to happen concurrently. To achieve this, you can start a subset of the asynchronous operations. As operations complete, you can start additional operations to take their place: const int CONCURRENCY_LEVEL = 15; Uri [] urls = …; int nextIndex = 0; var imageTasks = new List<Task<Bitmap>>(); while(nextIndex < CONCURRENCY_LEVEL && nextIndex < urls.Length) { imageTasks.Add(GetBitmapAsync(urls[nextIndex])); nextIndex++; } while(imageTasks.Count > 0) { try { Task<Bitmap> imageTask = await Task.WhenAny(imageTasks); imageTasks.Remove(imageTask); Bitmap image = await imageTask; } catch(Exception exc) { Log(exc); } if (nextIndex < urls.Length) { imageTasks.Add(GetBitmapAsync(urls[nextIndex])); nextIndex++; } } #### Early Bailout Consider that you're waiting asynchronously for an operation to complete while simultaneously responding to a user's cancellation request (for example, the user clicked a cancel button). The following code illustrates this scenario: private CancellationTokenSource m_cts; public void btnCancel_Click(object sender, EventArgs e) { if (m_cts != null) m_cts.Cancel(); } public async void btnRun_Click(object sender, EventArgs e) { m_cts = new CancellationTokenSource(); btnRun.Enabled = false; try { … } finally { btnRun.Enabled = true; } } private static async Task UntilCompletionOrCancellation( Task asyncOp, CancellationToken ct) { var tcs = new TaskCompletionSource<bool>(); using(ct.Register(() => tcs.TrySetResult(true))) await Task.WhenAny(asyncOp, tcs.Task); } This implementation re-enables the user interface as soon as you decide to bail out, but doesn't cancel the underlying asynchronous operations.
Another alternative would be to cancel the pending operations when you decide to bail out, but not reestablish the user interface until the operations complete, potentially ending early due to the cancellation request: private CancellationTokenSource m_cts; public async void btnRun_Click(object sender, EventArgs e) { m_cts = new CancellationTokenSource(); btnRun.Enabled = false; try { … } catch(OperationCanceledException) {} finally { btnRun.Enabled = true; } } Another example of early bailout involves using the WhenAny method in conjunction with the Delay method, as discussed in the next section. ## Task.Delay You can use the Task.Delay method to introduce pauses into an asynchronous method's execution. This is useful for many kinds of functionality, including building polling loops and delaying the handling of user input for a predetermined period of time. The Task.Delay method can also be useful in combination with Task.WhenAny for implementing time-outs on awaits. public async void btnDownload_Click(object sender, EventArgs e) { try { Task<Bitmap> download = GetBitmapAsync(url); if (download == await Task.WhenAny(download, Task.Delay(3000))) // 3-second time-out { Bitmap bmp = await download; pictureBox.Image = bmp; } else { pictureBox.Image = null; status.Text = "Timed out"; var ignored = download.ContinueWith( t => Trace("Task finally completed")); } } catch {} } The same applies to multiple downloads, because WhenAll returns a task: public async void btnDownload_Click(object sender, RoutedEventArgs e) { try { Task<Bitmap[]> downloads = Task.WhenAll(from url in urls select GetBitmapAsync(url)); if (downloads == await Task.WhenAny(downloads, Task.Delay(3000))) { … } else { status.Text = "Timed out"; } } catch {} } ## Building Task-based Combinators Because a task is able to completely represent an asynchronous operation and provide synchronous and asynchronous capabilities for joining with the operation, retrieving its results, and so on, you can build useful libraries of combinators that compose tasks to build larger patterns. As discussed in the previous section, .NET includes several built-in combinators, but you can also build your own. The following sections provide several examples of potential combinator methods and types. ### RetryOnFault In many situations, you may want to retry an operation if a previous attempt fails. For synchronous code, you might build a helper method such as RetryOnFault in the following example to accomplish this: public static T RetryOnFault<T>( Func<T> function, int maxTries) { for(int i=0; i<maxTries; i++) { try { return function(); } catch { if (i == maxTries-1) throw; } } return default(T); } You can build an almost identical helper method for asynchronous operations that are implemented with TAP and thus return tasks: public static async Task<T> RetryOnFault<T>( Func<Task<T>> function, int maxTries) { for(int i=0; i<maxTries; i++) { try { return await function().ConfigureAwait(false); } catch { if (i == maxTries-1) throw; } } return default(T); } You can then use this combinator to encode retries into the application's logic; for example: // Download the URL, trying up to three times in case of failure string pageContents = await RetryOnFault(…); You could extend the RetryOnFault function further.
For example, the function could accept another Func<Task> that will be invoked between retries to determine when to try the operation again; for example: public static async Task<T> RetryOnFault<T>( Func<Task<T>> function, int maxTries, Func<Task> retryWhen) { for(int i=0; i<maxTries; i++) { try { return await function().ConfigureAwait(false); } catch { if (i == maxTries-1) throw; } await retryWhen().ConfigureAwait(false); } return default(T); } You could then use the function as follows to wait for a second before retrying the operation: // Download the URL, trying up to three times in case of failure, // and delaying for a second between retries string pageContents = await RetryOnFault(…, 3, () => Task.Delay(1000)); ### NeedOnlyOne Sometimes, you can take advantage of redundancy to improve an operation's latency and chances for success. Consider multiple web services that provide stock quotes, but at various times of the day, each service may provide different levels of quality and response times. To deal with these fluctuations, you may issue requests to all the web services, and as soon as you get a response from one, cancel the remaining requests. You can implement a helper function to make it easier to implement this common pattern of launching multiple operations, waiting for any, and then canceling the rest. The NeedOnlyOne function in the following example illustrates this scenario: public static async Task<T> NeedOnlyOne<T>( params Func<CancellationToken,Task<T>> [] functions) { var cts = new CancellationTokenSource(); var tasks = (from function in functions select function(cts.Token)).ToArray(); var completed = await Task.WhenAny(tasks).ConfigureAwait(false); cts.Cancel(); foreach(var task in tasks) { var ignored = task.ContinueWith( t => Log(t), TaskContinuationOptions.OnlyOnFaulted); } return completed.Result; } You can then use this function as follows: double currentPrice = await NeedOnlyOne( ct => GetCurrentPriceFromServer1Async("msft", ct), ct => GetCurrentPriceFromServer2Async("msft", ct), ct => GetCurrentPriceFromServer3Async("msft", ct)); ### Interleaved Operations There is a potential performance problem with using the WhenAny method to support an interleaving scenario when you're working with large sets of tasks. Every call to WhenAny results in a continuation being registered with each task. For N tasks, this results in O(N²) continuations created over the lifetime of the interleaving operation. If you're working with a large set of tasks, you can use a combinator (Interleaved in the following example) to address the performance issue: static IEnumerable<Task<T>> Interleaved<T>(IEnumerable<Task<T>> tasks) { var inputTasks = tasks.ToList(); var sources = (from _ in Enumerable.Range(0, inputTasks.Count) select new TaskCompletionSource<T>()).ToList(); int nextTaskIndex = -1; foreach(var inputTask in inputTasks) { inputTask.ContinueWith(completed => { var source = sources[Interlocked.Increment(ref nextTaskIndex)]; if (completed.IsFaulted) source.TrySetException(completed.Exception.InnerExceptions); else if (completed.IsCanceled) source.TrySetCanceled(); else source.TrySetResult(completed.Result); }, CancellationToken.None, TaskContinuationOptions.ExecuteSynchronously, TaskScheduler.Default); } return from source in sources select source.Task; } You can then use the combinator to process the results of tasks as they complete; for example: IEnumerable<Task<int>> tasks = ...; foreach(var task in Interleaved(tasks)) { int result = await task; … } ### WhenAllOrFirstException In certain scatter/gather scenarios, you might want to wait for all tasks in a set, unless one of them faults, in which case you want to stop waiting as soon as the exception occurs.
You can accomplish that with a combinator method such as WhenAllOrFirstException in the following example: public static Task<T[]> WhenAllOrFirstException<T>(IEnumerable<Task<T>> tasks) { var inputs = tasks.ToList(); var ce = new CountdownEvent(inputs.Count); var tcs = new TaskCompletionSource<T[]>(); Action<Task<T>> onCompleted = completed => { if (completed.IsFaulted) tcs.TrySetException(completed.Exception.InnerExceptions); if (ce.Signal() && !tcs.Task.IsCompleted) tcs.TrySetResult(inputs.Select(t => t.Result).ToArray()); }; foreach (var t in inputs) t.ContinueWith(onCompleted); return tcs.Task; } ## Building Task-based Data Structures In addition to the ability to build custom task-based combinators, having a data structure in Task and Task<TResult> that represents both the results of an asynchronous operation and the necessary synchronization to join with it makes it a powerful type on which to build custom data structures to be used in asynchronous scenarios. ### AsyncCache One important aspect of a task is that it may be handed out to multiple consumers, all of whom may await it, register continuations with it, get its result or exceptions (in the case of Task<TResult>), and so on. This makes Task and Task<TResult> perfectly suited to be used in an asynchronous caching infrastructure. Here's an example of a small but powerful asynchronous cache built on top of Task<TResult>: public class AsyncCache<TKey, TValue> { private readonly Func<TKey, Task<TValue>> _valueFactory; private readonly ConcurrentDictionary<TKey, Lazy<Task<TValue>>> _map; public AsyncCache(Func<TKey, Task<TValue>> valueFactory) { if (valueFactory == null) throw new ArgumentNullException("valueFactory"); _valueFactory = valueFactory; _map = new ConcurrentDictionary<TKey, Lazy<Task<TValue>>>(); } public Task<TValue> this[TKey key] { get { if (key == null) throw new ArgumentNullException("key"); return _map.GetOrAdd(key, toAdd => new Lazy<Task<TValue>>(() => _valueFactory(toAdd))).Value; } } } The AsyncCache<TKey,TValue> class accepts as a delegate to its constructor a function that takes a TKey and returns a Task<TResult>. Any previously accessed values from the cache are stored in the internal dictionary, and the AsyncCache ensures that only one task is generated per key, even if the cache is accessed concurrently. For example, you can build a cache for downloaded web pages: private AsyncCache<string,string> m_webPages = new AsyncCache<string,string>(url => DownloadStringAsync(url)); // DownloadStringAsync stands in for whatever page-downloading method you use You can then use this cache in asynchronous methods whenever you need the contents of a web page. The AsyncCache class ensures that you're downloading as few pages as possible, and caches the results. private async void btnDownload_Click(object sender, RoutedEventArgs e) { try { txtContents.Text = await m_webPages["https://www.microsoft.com"]; } catch {} } ### AsyncProducerConsumerCollection You can also use tasks to build data structures for coordinating asynchronous activities. Consider one of the classic parallel design patterns: producer/consumer. In this pattern, producers generate data that is consumed by consumers, and the producers and consumers may run in parallel. For example, the consumer processes item 1, which was previously generated by a producer who is now producing item 2. For the producer/consumer pattern, you invariably need some data structure to store the work created by producers so that the consumers may be notified of new data and find it when available.
Here's a simple data structure, built on top of tasks, that enables asynchronous methods to be used as producers and consumers: public class AsyncProducerConsumerCollection<T> { private readonly Queue<T> m_collection = new Queue<T>(); private readonly Queue<TaskCompletionSource<T>> m_waiting = new Queue<TaskCompletionSource<T>>(); public void Add(T item) { TaskCompletionSource<T> tcs = null; lock (m_collection) { if (m_waiting.Count > 0) tcs = m_waiting.Dequeue(); else m_collection.Enqueue(item); } if (tcs != null) tcs.TrySetResult(item); } public Task<T> Take() { lock (m_collection) { if (m_collection.Count > 0) { return Task.FromResult(m_collection.Dequeue()); } else { var tcs = new TaskCompletionSource<T>(); m_waiting.Enqueue(tcs); return tcs.Task; } } } } With that data structure in place, you can write code such as the following: private static AsyncProducerConsumerCollection<int> m_data = …; … private static async Task ConsumerAsync() { while(true) { int nextItem = await m_data.Take(); ProcessNextItem(nextItem); } } … private static void Produce(int data) { m_data.Add(data); } The System.Threading.Tasks.Dataflow namespace includes the BufferBlock<T> type, which you can use in a similar manner, but without having to build a custom collection type: private static BufferBlock<int> m_data = …; … private static async Task ConsumerAsync() { while(true) { int nextItem = await m_data.ReceiveAsync(); ProcessNextItem(nextItem); } } … private static void Produce(int data) { m_data.Post(data); } Note The System.Threading.Tasks.Dataflow namespace is available as a NuGet package. To install the assembly that contains the System.Threading.Tasks.Dataflow namespace, open your project in Visual Studio, choose Manage NuGet Packages from the Project menu, and search online for the System.Threading.Tasks.Dataflow package.
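To see the collection in action, here is a small driver (my addition, not part of the original article) that pairs one awaiting consumer with one producer; Take() hands back an already-completed task when data is queued, and a pending one otherwise:

```csharp
using System;
using System.Threading.Tasks;

class ProducerConsumerDemo
{
    // Uses the AsyncProducerConsumerCollection<T> defined above.
    static readonly AsyncProducerConsumerCollection<int> s_data =
        new AsyncProducerConsumerCollection<int>();

    static async Task Main()
    {
        // Consumer: each await either completes immediately (data already queued)
        // or suspends until Add() fulfills the waiting TaskCompletionSource.
        Task consumer = Task.Run(async () =>
        {
            for (int i = 0; i < 3; i++)
                Console.WriteLine($"Consumed {await s_data.Take()}");
        });

        // Producer: hands items to the collection, waking the consumer if it is waiting.
        for (int i = 0; i < 3; i++)
        {
            s_data.Add(i);
            await Task.Delay(100);
        }

        await consumer;
    }
}
```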
2023-03-24 13:14:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1855011284351349, "perplexity": 5253.952490233343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945282.33/warc/CC-MAIN-20230324113500-20230324143500-00126.warc.gz"}
https://physics.stackexchange.com/questions/427285/rule-of-thumb-for-identifying-dominant-quark-contribution-in-loop-calculations
# Rule of thumb for identifying dominant quark contribution in loop calculations I am trying to understand some calculations of $B$ meson decays and just stumbled upon the low energy effective weak Hamiltonian describing $\Delta S = 1$ $B$ decays: $$\mathcal{H}_\text{eff} = \frac{G_F}{\sqrt 2} \left[ V_{ub}^* V_{us} \left( \sum_1^2 c_i Q_i^{us} + \sum_3^{10} c_i Q_i^s \right) \; + \; V_{cb}^* V_{cs} \left( \sum_1^2 c_i Q_i^{cs} + \sum_3^{10} c_i Q_i^s \right) \right],$$ where $c_i$ are scale-dependent Wilson coefficients and the flavor structure of the various four-quark operators is $Q_{1,2}^{qs} \sim \overline{b}q\overline{q}s$, $Q_{3,\ldots,6}^s \sim \overline{b}s \sum \overline{q}{}'q'$, $Q_{7,\ldots,10}^s \sim \overline{b}s \sum e_{q'} \overline{q}{}'q'$ ($q' = u,d,s,c$). (cf. arXiv:hep-ph/0008292). The former part seems to be the tree and the latter the penguin contribution. However, I don't understand why there is no contribution coming from the $t$ quark. In my naive little world, such loops are dominated by the heaviest fermion, and I am / was (?) quite sure that this is the case for box diagrams. Reading on, I find that Gronau keeps arguing with these charm contributions, and actually this is somehow a crucial point in this paper, since it leads to some cancellation... My question: Is there a rule of thumb for identifying which quark dominates a loop calculation? Is it really the case that, for instance, in the $b \to s$ transition the charm contribution is larger than the top contribution? Why isn't this the case for box diagrams (e.g. in $B^0$-$\overline{B}{}^0$ mixing)? • Minor comment to the post (v1): In the future please link to abstract pages rather than pdf files. – Qmechanic Sep 7 '18 at 16:51 In the meantime I found an answer: The top contribution wasn't neglected, but shifted by utilizing CKM unitarity, $V_{tb} V_{ts}^* = -V_{ub} V_{us}^* - V_{cb} V_{cs}^*$; i.e., both parts of $\mathcal{H}_\text{eff}$ now have top contributions, and specifically the former part now has tree and penguin contributions. • QCD penguins: contributions from $u$ and $c$ are larger than the contribution coming from the top quark • $b \to s$: $V_{ub} V_{us}^*$ is negligibly small; the charm contribution is roughly two times larger than the top contribution (top-loop dominance is a myth in this case!) • $Z$ penguins and boxes: the top loop is dominant (at least)
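For reference, the rearrangement used in the answer can be spelled out (the shorthand $\lambda_q$ is mine, not from the thread): writing $\lambda_q \equiv V_{qb}^* V_{qs}$, CKM unitarity gives $$\lambda_u + \lambda_c + \lambda_t = 0 \quad\Longrightarrow\quad \lambda_t = -\lambda_u - \lambda_c,$$ so a penguin amplitude $\sum_{q=u,c,t} \lambda_q P_q$ becomes $\lambda_u (P_u - P_t) + \lambda_c (P_c - P_t)$: the top quark survives only inside the differences $P_q - P_t$, which is why no explicit $\lambda_t$ term appears in $\mathcal{H}_\text{eff}$.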
2019-07-18 23:46:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.741642415523529, "perplexity": 963.0107475440939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525863.49/warc/CC-MAIN-20190718231656-20190719013656-00506.warc.gz"}
https://people.maths.bris.ac.uk/~matyd/GroupNames/128i1/C4oD4.8Q8.html
G = C4○D4.8Q8, order 128 = 2⁷
6th non-split extension by C4○D4 of Q8 acting via Q8/C4=C2
p-group, metabelian, nilpotent (class 3), monomial

Series: Derived, Chief, Lower central, Upper central, Jennings
Derived series: C1 — C4 — C4○D4.8Q8
Chief series: C1 — C2 — C22 — C2×C4 — C22×C4 — C2×C4○D4 — C2×C8○D4 — C4○D4.8Q8
Lower central: C1 — C2 — C4 — C4○D4.8Q8
Upper central: C1 — C22 — C2×C4○D4 — C4○D4.8Q8
Jennings: C1 — C2 — C2 — C2×C4 — C4○D4.8Q8

Generators and relations for C4○D4.8Q8
G = < a,b,c,d,e | a⁴=c²=1, b²=d⁴=a², e²=ad², ab=ba, ac=ca, ad=da, eae⁻¹=a⁻¹, cbc=a²b, bd=db, be=eb, cd=dc, ce=ec, ede⁻¹=a²d³ >

Subgroups: 356 in 238 conjugacy classes, 172 normal (14 characteristic) C1, C2, C2, C4, C4, C4, C22, C22, C22, C8, C2×C4, C2×C4, C2×C4, D4, Q8, C23, C42, C22⋊C4, C4⋊C4, C4⋊C4, C2×C8, C2×C8, M4(2), C22×C4, C22×C4, C2×D4, C2×Q8, C4○D4, C4.Q8, C2.D8, C2.D8, C2×C4⋊C4, C42⋊C2, C4×D4, C4×Q8, C22×C8, C2×M4(2), C8○D4, C2×C4○D4, C2×C2.D8, C23.25D4, M4(2)⋊C4, C23.33C23, C2×C8○D4, C4○D4.8Q8
Quotients: C1, C2, C4, C22, C2×C4, D4, Q8, C23, C4⋊C4, C22×C4, C2×D4, C2×Q8, C24, C2×C4⋊C4, C23×C4, C22×D4, C22×Q8, C22×C4⋊C4, D4○D8, Q8○D8, C4○D4.8Q8

Smallest permutation representation of C4○D4.8Q8
On 64 points
Generators in S64
(1 25 5 29)(2 26 6 30)(3 27 7 31)(4 28 8 32)(9 34 13 38)(10 35 14 39)(11 36 15 40)(12 37 16 33)(17 53 21 49)(18 54 22 50)(19 55 23 51)(20 56 24 52)(41 57 45 61)(42 58 46 62)(43 59 47 63)(44 60 48 64)
(1 3 5 7)(2 4 6 8)(9 15 13 11)(10 16 14 12)(17 19 21 23)(18 20 22 24)(25 27 29 31)(26 28 30 32)(33 39 37 35)(34 40 38 36)(41 47 45 43)(42 48 46 44)(49 51 53 55)(50 52 54 56)(57 63 61 59)(58 64 62 60)
(1 14)(2 15)(3 16)(4 9)(5 10)(6 11)(7 12)(8 13)(17 62)(18 63)(19 64)(20 57)(21 58)(22 59)(23 60)(24 61)(25 39)(26 40)(27 33)(28 34)(29 35)(30 36)(31 37)(32 38)(41 52)(42 53)(43 54)(44 55)(45 56)(46 49)(47 50)(48 51)
(1 2 3 4 5 6 7 8)(9 10 11 12 13 14 15 16)(17 18 19 20 21 22 23 24)(25 26 27 28 29 30 31 32)(33 34 35 36 37 38 39 40)(41 42 43 44 45 46 47 48)(49 50 51 52 53 54 55 56)(57 58 59 60 61 62 63 64)
(1 61 27 43)(2 60 28 42)(3 59 29 41)(4 58 30 48)(5 57 31 47)(6 64 32 46)(7 63 25 45)(8 62 26 44)(9 21 36 51)(10 20 37 50)(11 19 38 49)(12 18 39 56)(13 17 40 55)(14 24 33 54)(15 23 34 53)(16 22 35 52)
G:=sub<Sym(64)| (1,25,5,29)(2,26,6,30)(3,27,7,31)(4,28,8,32)(9,34,13,38)(10,35,14,39)(11,36,15,40)(12,37,16,33)(17,53,21,49)(18,54,22,50)(19,55,23,51)(20,56,24,52)(41,57,45,61)(42,58,46,62)(43,59,47,63)(44,60,48,64), (1,3,5,7)(2,4,6,8)(9,15,13,11)(10,16,14,12)(17,19,21,23)(18,20,22,24)(25,27,29,31)(26,28,30,32)(33,39,37,35)(34,40,38,36)(41,47,45,43)(42,48,46,44)(49,51,53,55)(50,52,54,56)(57,63,61,59)(58,64,62,60), (1,14)(2,15)(3,16)(4,9)(5,10)(6,11)(7,12)(8,13)(17,62)(18,63)(19,64)(20,57)(21,58)(22,59)(23,60)(24,61)(25,39)(26,40)(27,33)(28,34)(29,35)(30,36)(31,37)(32,38)(41,52)(42,53)(43,54)(44,55)(45,56)(46,49)(47,50)(48,51), (1,2,3,4,5,6,7,8)(9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56)(57,58,59,60,61,62,63,64), (1,61,27,43)(2,60,28,42)(3,59,29,41)(4,58,30,48)(5,57,31,47)(6,64,32,46)(7,63,25,45)(8,62,26,44)(9,21,36,51)(10,20,37,50)(11,19,38,49)(12,18,39,56)(13,17,40,55)(14,24,33,54)(15,23,34,53)(16,22,35,52)>; G:=Group( (1,25,5,29)(2,26,6,30)(3,27,7,31)(4,28,8,32)(9,34,13,38)(10,35,14,39)(11,36,15,40)(12,37,16,33)(17,53,21,49)(18,54,22,50)(19,55,23,51)(20,56,24,52)(41,57,45,61)(42,58,46,62)(43,59,47,63)(44,60,48,64),
(1,3,5,7)(2,4,6,8)(9,15,13,11)(10,16,14,12)(17,19,21,23)(18,20,22,24)(25,27,29,31)(26,28,30,32)(33,39,37,35)(34,40,38,36)(41,47,45,43)(42,48,46,44)(49,51,53,55)(50,52,54,56)(57,63,61,59)(58,64,62,60), (1,14)(2,15)(3,16)(4,9)(5,10)(6,11)(7,12)(8,13)(17,62)(18,63)(19,64)(20,57)(21,58)(22,59)(23,60)(24,61)(25,39)(26,40)(27,33)(28,34)(29,35)(30,36)(31,37)(32,38)(41,52)(42,53)(43,54)(44,55)(45,56)(46,49)(47,50)(48,51), (1,2,3,4,5,6,7,8)(9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56)(57,58,59,60,61,62,63,64), (1,61,27,43)(2,60,28,42)(3,59,29,41)(4,58,30,48)(5,57,31,47)(6,64,32,46)(7,63,25,45)(8,62,26,44)(9,21,36,51)(10,20,37,50)(11,19,38,49)(12,18,39,56)(13,17,40,55)(14,24,33,54)(15,23,34,53)(16,22,35,52) ); G=PermutationGroup([[(1,25,5,29),(2,26,6,30),(3,27,7,31),(4,28,8,32),(9,34,13,38),(10,35,14,39),(11,36,15,40),(12,37,16,33),(17,53,21,49),(18,54,22,50),(19,55,23,51),(20,56,24,52),(41,57,45,61),(42,58,46,62),(43,59,47,63),(44,60,48,64)], [(1,3,5,7),(2,4,6,8),(9,15,13,11),(10,16,14,12),(17,19,21,23),(18,20,22,24),(25,27,29,31),(26,28,30,32),(33,39,37,35),(34,40,38,36),(41,47,45,43),(42,48,46,44),(49,51,53,55),(50,52,54,56),(57,63,61,59),(58,64,62,60)], [(1,14),(2,15),(3,16),(4,9),(5,10),(6,11),(7,12),(8,13),(17,62),(18,63),(19,64),(20,57),(21,58),(22,59),(23,60),(24,61),(25,39),(26,40),(27,33),(28,34),(29,35),(30,36),(31,37),(32,38),(41,52),(42,53),(43,54),(44,55),(45,56),(46,49),(47,50),(48,51)], [(1,2,3,4,5,6,7,8),(9,10,11,12,13,14,15,16),(17,18,19,20,21,22,23,24),(25,26,27,28,29,30,31,32),(33,34,35,36,37,38,39,40),(41,42,43,44,45,46,47,48),(49,50,51,52,53,54,55,56),(57,58,59,60,61,62,63,64)], [(1,61,27,43),(2,60,28,42),(3,59,29,41),(4,58,30,48),(5,57,31,47),(6,64,32,46),(7,63,25,45),(8,62,26,44),(9,21,36,51),(10,20,37,50),(11,19,38,49),(12,18,39,56),(13,17,40,55),(14,24,33,54),(15,23,34,53),(16,22,35,52)]]) 44 conjugacy classes class 1 2A 2B 2C 2D ··· 2I 4A ··· 4H 4I ··· 4X 8A 8B 8C 8D 8E ··· 8J order 1 2 2 2 2 ··· 2 4 ··· 4 4 ··· 4 8 8 8 8 8 ··· 8 size 1 1 1 1 2 ··· 2 2 ··· 2 4 ··· 4 2 2 2 2 4 ··· 4 44 irreducible representations dim 1 1 1 1 1 1 1 2 2 2 4 4 type + + + + + + + + - + - image C1 C2 C2 C2 C2 C2 C4 D4 D4 Q8 D4○D8 Q8○D8 kernel C4○D4.8Q8 C2×C2.D8 C23.25D4 M4(2)⋊C4 C23.33C23 C2×C8○D4 C8○D4 C2×D4 C2×Q8 C4○D4 C2 C2 # reps 1 3 3 6 2 1 16 3 1 4 2 2 Matrix representation of C4○D4.8Q8 in GL6(𝔽17) 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 16 0 0 0 0 0 0 0 0 1 0 0 0 0 16 0 , 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 16 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 16 0 , 16 0 0 0 0 0 0 16 0 0 0 0 0 0 0 0 0 1 0 0 0 0 16 0 0 0 0 16 0 0 0 0 1 0 0 0 , 6 15 0 0 0 0 10 11 0 0 0 0 0 0 14 3 0 0 0 0 14 14 0 0 0 0 0 0 14 3 0 0 0 0 14 14 , 9 10 0 0 0 0 2 8 0 0 0 0 0 0 0 0 14 14 0 0 0 0 14 3 0 0 14 14 0 0 0 0 14 3 0 0 G:=sub<GL(6,GF(17))| [1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,16,0,0,0,0,1,0,0,0,0,0,0,0,0,16,0,0,0,0,1,0],[1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,16,0,0,0,0,0,0,0,0,16,0,0,0,0,1,0],[16,0,0,0,0,0,0,16,0,0,0,0,0,0,0,0,0,1,0,0,0,0,16,0,0,0,0,16,0,0,0,0,1,0,0,0],[6,10,0,0,0,0,15,11,0,0,0,0,0,0,14,14,0,0,0,0,3,14,0,0,0,0,0,0,14,14,0,0,0,0,3,14],[9,2,0,0,0,0,10,8,0,0,0,0,0,0,0,0,14,14,0,0,0,0,14,3,0,0,14,14,0,0,0,0,14,3,0,0] >; C4○D4.8Q8 in GAP, Magma, Sage, TeX C_4\circ D_4._8Q_8 % in TeX G:=Group("C4oD4.8Q8"); // GroupNames label G:=SmallGroup(128,1645); // by ID G=gap.SmallGroup(128,1645); # by ID G:=PCGroup([7,-2,2,2,2,-2,2,-2,224,253,568,521,2804,172]); // Polycyclic 
G:=Group<a,b,c,d,e|a^4=c^2=1,b^2=d^4=a^2,e^2=a*d^2,a*b=b*a,a*c=c*a,a*d=d*a,e*a*e^-1=a^-1,c*b*c=a^2*b,b*d=d*b,b*e=e*b,c*d=d*c,c*e=e*c,e*d*e^-1=a^2*d^3>; // generators/relations
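The data above can be cross-checked in GAP by loading the group from the Small Groups library via the ID given above (a sketch; the expected outputs simply restate invariants already listed on this page):

gap> G := SmallGroup(128, 1645);;
gap> Size(G);
128
gap> NilpotencyClassOfGroup(G);
3
gap> IsAbelian(DerivedSubgroup(G));  # metabelian: derived subgroup is abelian
true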
2023-03-21 10:09:28
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9965773820877075, "perplexity": 2212.5979044535966}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943695.23/warc/CC-MAIN-20230321095704-20230321125704-00381.warc.gz"}
https://math.stackexchange.com/questions/2662196/characteristic-polynomial-of-restriction-to-invariant-subspace-divides-character
# Characteristic Polynomial of Restriction to Invariant Subspace Divides Characteristic Polynomial I am interested in finding a proof of the following property that does not make reference to bases, and ideally doesn't use facts about determinants that depend on the block structure of a matrix. Let $T \in L(V,V)$ be a linear operator on a finite-dimensional space $V$. Suppose $W \preccurlyeq V$ is a $T$-invariant subspace, that is, $T(W) \subset W$. Consider the restriction $T_W \in L(W,W)$ of $T$ to $W$. Then the characteristic polynomial of $T_W$ divides the characteristic polynomial of $T$. Let $p,p_W$ be the characteristic polynomials and $m,m_W$ be the minimal polynomials. It is easy to show "algebraically" that $m_W \mid m$ since $m$ annihilates $T_W$, so must be a multiple of the monic generator $m_W$. However, the only proofs I have seen that $p_W \mid p$ make use of basis expansions: • Let $\mathcal{B}=\{ v_1,\dots,v_n \}$ be a basis for $V$ such that $\mathcal{B}'=\{ v_1, \dots, v_r \}$ form a basis for $W$. • The matrix of $T$ with respect to $\mathcal{B}$ has the following block form, where $A \in F^{r \times r}$ is the matrix of $T_W$ with respect to $\mathcal{B'}$, $$[T]_{\mathcal{B}} = \begin{bmatrix} A & B \\ & C \end{bmatrix} \implies xI - [T]_{\mathcal{B}} = \begin{bmatrix} xI - A & B \\ & xI-C \end{bmatrix}$$ • Then $p = \det(xI - [T]_\mathcal{B}) = \det(xI-A)\det(xI-C)$ is a multiple of $p_W = \det(xI-A)$. The use of basis expansions and block matrices leaves something to be desired. Is there a "matrix-free" way to prove this? Assume we know about Cayley-Hamilton, if it helps. • I wonder if a matrix-free proof would generalize to infinite-dimensional vector spaces..? Jul 3 '20 at 15:39 The characteristic polynomial does not change if we extend the scalars. So we may assume that the basic field is algebraically closed. Fact: the exponent of $(x-\lambda)$ in $P_A(x)$ equals the dimension of the subspace $$V_{(\lambda)}\colon =\{v \in V \ | \ (A-\lambda I)^N v= 0 \text{ for some } N \}$$ (the generalized $\lambda$ eigenspace). Now, if $W\subset V$ is $A$-invariant then clearly $$W_{(\lambda)}\subset V_{(\lambda)}$$ That's enough to prove divisibility. In fact, if $0\to W \to V \to U\to 0$ is exact sequence of spaces with operator $A$, then $0\to W_{(\lambda)} \to V_{(\lambda)} \to U_{(\lambda)}\to 0$ is exact for all $\lambda$, so we get the product equality for characteristic polynomials in an extension. Fix $x$. Define $U=xI-T\in L(V,V)$. Since $W$ is a $T$-invariant subspace, we can define $U_W=xI-T_W\in L(W,W)$. Suppose $x$ is a root of the characteristic polynomial of $T_W$. Then $\det(xI-T_W)=0$, that is, $\det(U_W)=0$. This means that $U_W$ sends some nonzero subspace of $W$ to $0$. Since $U_W$ is the restriction of $U$ to $W$, we have that $U$ sends some nonzero subspace of $V$ to $0$. Therefore, $\det(U)=0$. Thus, $\det(xI-T)=0$, and $x$ is a root of the characteristic polynomial of $T$. Pass to $\Bbb C$ (or the algebraic closure of whatever field we're working in), if we weren't already in an algebraically closed field. A polynomial divides another polynomial iff all the roots of the first are roots of the second. Since every root of $p_W$ is a root of $p$, $~p_W$ divides $p$. • Oh, this doesn't quite work if there are repeated roots… May 27 '18 at 4:40
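A minimal concrete instance (an illustration of mine, not from the thread): let $V = \mathbb{R}^2$, let $T$ have matrix $\begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$ in the standard basis, and let $W = \operatorname{span}(e_1)$. Then $Te_1 = 2e_1$, so $W$ is $T$-invariant and $T_W$ is multiplication by $2$; indeed $p_W(x) = x - 2$ divides $p(x) = (x-2)(x-3)$.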
2021-09-19 02:56:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9845105409622192, "perplexity": 84.87493006501056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056656.6/warc/CC-MAIN-20210919005057-20210919035057-00088.warc.gz"}
https://www.bionicturtle.com/forum/tags/r-programming/
# r-programming 1. ### YouTube T5-03: Expected shortfall: approximating continuous, with code In our previous video, we showed how we retrieve expected shortfall under the simplest possible discrete case. That was a simple historical simulation, but that was discrete. In this video, we will review the expected shortfall when the distribution is continuous. Specifically, we will use the... 2. In this video, I'm excited to share one approach to importing historical stock price series into R. I think there are different ways to do this. My approach here is inspired by the approach that is illustrated by Jonathan Regenstein and his excellent book, Reproducible Finance with R, which you... 3. ### YouTube R Programming: Introduction: ggplot for capital market line (CML, R Intro-08) In this video, I'd like to show you the bare minimum of what we need to know to render a visualization in ggplot. The bare minimum is that we need to use the three essential layers (three out of seven possible). Those three essential layers are data, aesthetics, and geom (short for geometries... 4. ### DataCamp | Data Science, Statistics and Machine Learning DataCamp offers interactive R, Python, Sheets, SQL and shell courses. All on topics in data science, statistics and machine learning. Learn from a team of expert teachers in the comfort of your browser with video lessons and fun coding challenges and projects. DataCamp offers many data science... 5. ### YouTube R Programming Tidyverse: readr package to import data (csv, tab-separated, fixed-width) (tidy-02) David introduces the package that's called readr, which is part of the tidyverse, and this is the package that we would use to import external files into our R environment as usable R objects. In the tidyverse those would be called tibbles, but a tibble is just an enhanced user-friendly... 6. read.table() is the core function for loading an external file into an R data frame; it is part of the utils package which is automatically loaded when you start R. Aside from the header argument, the sep and quote arguments define the field separators. The read.csv() function is a wrapper... 7. ### YouTube R Programming Tidyverse: What is tidy data? Tidy data meets three conditions: 1. Each variable must have its own column; 2. Each observation must have its own row; and 3. Each value must have its own cell. 8. ### YouTube R Programming: Introduction: How to subset (R intro-06) R has three subset operators: [, [[, and $. Given a data frame df, both of these return a data frame: df["z"], df[3]. Given a data frame df, all three of these commands are identical and return a vector: df$z, df[["z"]], df[[3]]. David's script is here... 9. ### YouTube R Programming Introduction: Matrices (R intro-05) In R a matrix is an atomic vector with the dimension attribute. In this example, the correlation matrix is entered as a vector with sixteen elements: rho_v <- c(1.000, ...). Then the vector is translated into a matrix with rho <- matrix(rho_v, nrow = 4, ncol = 4). Now it is a matrix because it has... 10. ### YouTube R Programming: Introduction: Factors (R Intro-04) Factors are categorical vectors. Specifically, they are (integer) vectors that store categorical values, or ordinal values. Ordinal values are *ranked* categories (but they are not intervals). Factors can only contain predefined values. A classic example of a factor is male/female. An example... 11. ### YouTube R Programming: Introduction: Data Frames (R Intro-03) Data frames are the most common structure in R.
A data frame is a list of equal-length vectors; i.e., it's a rectangle. Create a data frame with data.frame(). Single-bracket subsetting, stocks[1], returns a data frame. Double-bracket subsetting, stocks[[1]], returns a vector and is equivalent to stocks$ticker. We can... 12. ### YouTube R Programming: Introduction: List Data Structure (R Intro - 02) Unlike atomic vectors, lists are flexible: each element can be a different type (char, integer, numeric, logical or even a sub-list!). list[i] returns the i-th element as a list, while list[[i]] returns the element as a vector. If the element is named, then list[["name"]] = list[[i]] =... 13. ### YouTube R Programming: Introduction (Atomic Vectors) (R Intro-01) Vectors are natural to R. David starts by creating a numeric vector with the command: three_dice <- c(2, 3, 6). He forgot to mention in the video that "c" here stands for "combine," as in we are "combining these three values as elements in a new vector called three_dice." We can think of this as...
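The single- versus double-bracket behaviour summarized in items 8 and 11 above is easy to verify at an R prompt; a minimal sketch (the stocks data frame here is invented for illustration):

# a small data frame with two named columns
stocks <- data.frame(ticker = c("AAPL", "MSFT"), price = c(150, 250))

stocks["ticker"]    # single brackets: returns a one-column data frame
stocks[["ticker"]]  # double brackets: returns the underlying character vector
stocks$ticker       # identical to the double-bracket form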
2020-07-02 18:16:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21420566737651825, "perplexity": 4253.2256282890585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655879738.16/warc/CC-MAIN-20200702174127-20200702204127-00064.warc.gz"}
https://tex.stackexchange.com/questions/287346/width-of-column-after-multicolumn-header
# Width of column after multicolumn header When multicolumn headers in tables use more horizontal space than the columns they are 'heading', then the additional width is entirely allocated to the rightmost column. The following MWE: \documentclass{article} \usepackage{booktabs} \begin{document} \begin{tabular}{lccc}\toprule & \multicolumn{3}{c}{Wide multicolumn cell}\\ & x & y & z \\ \midrule A & 1 & 2 & 3 \\ \bottomrule \end{tabular} \end{document} Produces: I would like to allocate the same amount of horizontal space to columns x, y, and z. I do realize that an easy fix would be to use fixed-width columns for x, y, and z, but I am looking for a dynamic solution. So my questions are the following: 1) Is there an easy way to allocate the horizontal space equally to columns x, y, and z? 2) Assuming that the answer to 1) is no: What would be an elegant way to increase the width of a single variable-width column by a percentage, e.g., increase the width of column x to 150% of the default? • If you are happy to specify the overall width of the table, you could use tabularx and {lXXX} which would ensure the columns are equal width. Or you could calculate the width of the header text and then divide it into 3 and create the columns that way. But you need to know something at the point the column specification is given, I think. – cfr Jan 12 '16 at 23:21 A few possibilities: \documentclass{article} \usepackage{booktabs,tabularx} \begin{document} \setlength\parskip{1cm} \begin{tabular}{lccc}\toprule & \multicolumn{3}{c}{Wide multicolumn cell}\\ & x & y & z \\ \midrule A & 1 & 2 & 3 \\ \bottomrule \end{tabular} \dotfill \begin{tabular*}{.5\textwidth} {l!{\extracolsep{\textwidth minus \textwidth}}lccc} \toprule & \multicolumn{3}{c}{Wide multicolumn cell}\\ & x & y & z \\ \midrule A & 1 & 2 & 3 \\ \bottomrule \end{tabular*} \begin{tabular}{lc@{\hspace{4em}}c@{\hspace{4em}}c}\toprule & \multicolumn{3}{c}{Wide multicolumn cell}\\ & x & y & z \\ \midrule A & 1 & 2 & 3 \\ \bottomrule \end{tabular} \begin{tabularx}{.5\textwidth}{l*{3}{>{\centering\arraybackslash}X}}\toprule & \multicolumn{3}{c}{Wide multicolumn cell}\\ & x & y & z \\ \midrule A & 1 & 2 & 3 \\ \bottomrule \end{tabularx} \begin{tabular}{lccc}\toprule & &\makebox[60pt]{Wide multicolumn cell}&\\ & x & y & z \\ \midrule A & 1 & 2 & 3 \\ \bottomrule \end{tabular} \end{document} Thank you for these suggestions. Preferring to stick to the tabular environment, I was attracted to the second suggestion with the @{\hspace{4em}}, but expanding the width of a column on both ends would require adding half of the space to the left and right of the column. The result was that the multicolumn cell also included the additional space, which is not desired. In addition it does not seem to work well with underlining the multicolumn cell (\cline{2-4}). For the moment I prefer the following solution (even if it is probably an ugly hack...): \documentclass{article} \usepackage{booktabs} \begin{document} \newcommand{\cspace}{\hspace*{2.5em}} \begin{tabular}{lccc} & \cspace & \cspace & \cspace \\ [-2.5ex] \toprule & \multicolumn{3}{c}{Wide multicolumn cell}\\ \cline{2-4} & x & y & z \\ \midrule A & 1 & 2 & 3 \\ \bottomrule \end{tabular} \end{document} Result:
2020-09-27 05:08:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999576807022095, "perplexity": 550.3846808508996}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400250241.72/warc/CC-MAIN-20200927023329-20200927053329-00133.warc.gz"}
https://perry.alexander.name/eecs742/project/2019/03/13/Project-2.html
Project 2 - Reaching analysis For this exercise you will generalize Project 1 by constructing a translator from arbitrary IMP programs to a Z3 reaching analysis model. Recall the factorial program from Project 1: Y := X; Z := 1; while 1<Y do Z := Z * Y; Y := Y - 1; Y := 0 You wrote a Z3 model by hand that allowed checking a number of properties related to variable assignment. For this project you will generate that model automatically. The input to your solution should be an abstract syntax representation of an IMP program. You are welcome to write or use an existing parser to generate your abstract syntax, but this is not a requirement. Your abstract syntax should represent the following IMP language: aexp ::= Nat | aexp + aexp | aexp - aexp | Var bexp ::= aexp <= aexp | isZero aexp | bexp or bexp | not bexp com ::= skip | while bexp do com | if bexp then com else com | Var := aexp | com ; com If you choose you may add parentheses as needed. Your first task is to identify IMP statements that impact variable values or sequence code. We know from class that assignment and while are examples of such statements, but there are others. Remember that only statements that impact variable values or flow of control matter in reaching analysis. Your second task is to identify templates for each statement in your model. In Project 1 you wrote custom Z3 definitions describing how each statement impacts reaching definitions. Generalize those and create templates for other statements identified as impacting reaching definitions. Your third task is to implement IMP as an AST and develop code that generates your Z3 model from it. I strongly encourage you to remember how you processed code in programming language and compilers courses. Specifically, define a recursive interpreter that processes each AST element as one case in a case or match statement. You are free to use any of the languages that support a Z3 interface. However, you do not need to do this. One solution is to walk your code and simply generate a text file containing Z3 syntax without making any calls to Z3 from your analysis tool. The output of your tool should be a valid Z3 program and a reaching definition model generated by Z3 using (get-model). While it should be easy to create a script that calls your tool and calls Z3 on the result, I am okay with calling Z3 by hand. Your project submission should include source code for your analysis tool, necessary documentation for running your tool, and a collection of test cases. Start with factorial since we know what the model looks like.
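As one possible starting point (a sketch of my own, not part of the assignment), the IMP grammar above maps naturally onto a small set of node classes that a recursive model generator can dispatch on, one case per statement form; here in Python:

from dataclasses import dataclass
from typing import Union

# aexp ::= Nat | aexp + aexp | aexp - aexp | Var
@dataclass
class Nat:
    value: int

@dataclass
class Var:
    name: str

@dataclass
class Add:
    left: "AExp"
    right: "AExp"

@dataclass
class Sub:
    left: "AExp"
    right: "AExp"

AExp = Union[Nat, Var, Add, Sub]

# bexp mirrors the same pattern (Leq, IsZero, Or, Not) and is omitted here.

# com ::= skip | while bexp do com | if bexp then com else com | Var := aexp | com ; com
@dataclass
class Skip:
    pass

@dataclass
class Assign:
    var: str
    expr: AExp

@dataclass
class Seq:
    first: "Com"
    second: "Com"

# While and If are analogous, each holding a condition and sub-command(s).
Com = Union[Skip, Assign, Seq]

def emit(c: Com) -> str:
    """Walk the AST and return Z3 (SMT-LIB) text; only the dispatch shape is shown."""
    match c:
        case Assign(var, expr):
            # your Project 1 assignment template, instantiated for var, goes here
            return f"; reaching-definition constraints for assignment to {var}\n"
        case Seq(first, second):
            return emit(first) + emit(second)
        case Skip():
            return ""
    raise NotImplementedError(c)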
2021-05-10 21:39:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18970657885074615, "perplexity": 2280.196612358801}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989749.3/warc/CC-MAIN-20210510204511-20210510234511-00131.warc.gz"}
http://grandirdanslintegrite.com/5wbfg90q/page.php?7ff8da=arrange-the-group-2-elements-in-terms-of-their-electronegativity
Electronegativity is not a uniquely defined property and may depend on the definition. This page explores the trends in some atomic and physical properties of the Group 2 elements: beryllium, magnesium, calcium, strontium and barium. Metals are electropositive and non-metals are electronegative in nature. The higher the electronegativity, the more strongly an atom attracts electrons towards itself. For the representative elements, electronegativity increases as you go across a period and decreases as you go down a group, so within Group 2 the order is barium < strontium < calcium < magnesium. If we ignore the inert gases and elements for which no stable isotopes are known, we see that fluorine ($\chi = 3.98$, often quoted as 4.0) is the most electronegative element and cesium is the least electronegative nonradioactive element ($\chi = 0.79$); the most electronegative elements overall are O, F and N. Please note that the elements do not show their natural relation towards each other as in the Periodic system. The electronegativity in increasing order is given below: Sr $<$ Ge ...
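For reference, the Pauling-scale values behind that ordering (standard tabulated values, not quoted on the original page) are Be 1.57, Mg 1.31, Ca 1.00, Sr 0.95 and Ba 0.89, so electronegativity decreases steadily down Group 2: Ba < Sr < Ca < Mg < Be.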
2021-09-17 15:24:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4617879390716553, "perplexity": 6385.538524980096}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055684.76/warc/CC-MAIN-20210917151054-20210917181054-00277.warc.gz"}
https://www.proofwiki.org/wiki/Binomial_Theorem
# Binomial Theorem ## Theorem ### Integral Index Let $X$ be one of the standard number systems $\N$, $\Z$, $\Q$, $\R$ or $\C$. Let $x, y \in X$. Then: $\ds \forall n \in \Z_{\ge 0}: \,$ $\ds \paren {x + y}^n$ $=$ $\ds \sum_{k \mathop = 0}^n \binom n k x^{n - k} y^k$ $\ds$ $=$ $\ds x^n + \binom n 1 x^{n - 1} y + \binom n 2 x^{n - 2} y^2 + \binom n 3 x^{n - 3} y^3 + \cdots$ $\ds$ $=$ $\ds x^n + n x^{n - 1} y + \frac {n \paren {n - 1} } {2!} x^{n - 2} y^2 + \frac {n \paren {n - 1} \paren {n - 2} } {3!} x^{n - 3} y^3 + \cdots$ where $\dbinom n k$ is $n$ choose $k$. ### Ring Theory Let $\struct {R, +, \odot}$ be a ringoid such that $\struct {R, \odot}$ is a commutative semigroup. Let $n \in \Z: n \ge 2$. Then: $\ds \forall x, y \in R: \odot^n \paren {x + y} = \odot^n x + \sum_{k \mathop = 1}^{n - 1} \binom n k \paren {\odot^{n - k} x} \odot \paren {\odot^k y} + \odot^n y$ where $\dbinom n k = \dfrac {n!} {k! \ \paren {n - k}!}$ (see Binomial Coefficient). If $\struct {R, \odot}$ has an identity element $e$, then: $\ds \forall x, y \in R: \odot^n \paren {x + y} = \sum_{k \mathop = 0}^n \binom n k \paren {\odot^{n - k} x} \odot \paren {\odot^k y}$ ### General Binomial Theorem Let $\alpha \in \R$ be a real number. Let $x \in \R$ be a real number such that $\size x < 1$. Then: $\ds \paren {1 + x}^\alpha$ $=$ $\ds \sum_{n \mathop = 0}^\infty \frac {\alpha^{\underline n} } {n!} x^n$ $\ds$ $=$ $\ds \sum_{n \mathop = 0}^\infty \dbinom \alpha n x^n$ $\ds$ $=$ $\ds \sum_{n \mathop = 0}^\infty \frac 1 {n!} \paren {\prod_{k \mathop = 0}^{n - 1} \paren {\alpha - k} } x^n$ $\ds$ $=$ $\ds 1 + \alpha x + \dfrac {\alpha \paren {\alpha - 1} } {2!} x^2 + \dfrac {\alpha \paren {\alpha - 1} \paren {\alpha - 2} } {3!} x^3 + \cdots$ where: $\alpha^{\underline n}$ denotes the falling factorial and $\dbinom \alpha n$ denotes a binomial coefficient. ### Multiindices Let $\alpha$ be a multiindex, indexed by $\set {1, \ldots, n}$ such that $\alpha_j \ge 0$ for $j = 1, \ldots, n$. Let $x = \tuple {x_1, \ldots, x_n}$ and $y = \tuple {y_1, \ldots, y_n}$ be ordered tuples of real numbers. Then: $\ds \paren {x + y}^\alpha = \sum_{0 \mathop \le \beta \mathop \le \alpha} \dbinom \alpha \beta x^\beta y^{\alpha - \beta}$ where $\dbinom \alpha \beta$ is a binomial coefficient. ### Extended Binomial Theorem Let $r, \alpha \in \C$ be complex numbers. Let $z \in \C$ be a complex number such that $\cmod z < 1$. Then: $\ds \paren {1 + z}^r = \sum_{k \mathop \in \Z} \dbinom r {\alpha + k} z^{\alpha + k}$ where $\dbinom r {\alpha + k}$ denotes a binomial coefficient. ### Abel's Generalisation $\ds \paren {x + y}^n = \sum_k \binom n k x \paren {x - k z}^{k - 1} \paren {y + k z}^{n - k}$ ### Hurwitz's Generalisation $\ds \paren {x + y}^n = \sum x \paren {x + \epsilon_1 z_1 + \cdots + \epsilon_n z_n}^{\epsilon_1 + \cdots + \epsilon_n - 1} \paren {y - \epsilon_1 z_1 - \cdots - \epsilon_n z_n}^{n - \epsilon_1 - \cdots - \epsilon_n}$ where the summation ranges over all $2^n$ choices of $\epsilon_1, \ldots, \epsilon_n = 0$ or $1$ independently. ### Approximations Consider the General Binomial Theorem: $\paren {1 + x}^\alpha = 1 + \alpha x + \dfrac {\alpha \paren {\alpha - 1} } {2!} x^2 + \dfrac {\alpha \paren {\alpha - 1} \paren {\alpha - 2} } {3!} x^3 + \cdots$ When $x$ is small it is often possible to neglect terms in $x$ higher than a certain power of $x$, and use what is left as an approximation to $\paren {1 + x}^\alpha$.
### First Order When $x$ is sufficiently small that $x^2$ can be neglected then: $\paren {1 + x}^\alpha \approx 1 + \alpha x$ and the error is of the order of $\dfrac {\alpha \paren {\alpha - 1} } 2 x^2$ ### Second Order When $x$ is sufficiently small that $x^3$ can be neglected then: $\paren {1 + x}^\alpha \approx 1 + \alpha x + \dfrac {\alpha \paren {\alpha - 1} } 2 x^2$ and the error is of the order of $\dfrac {\alpha \paren {\alpha - 1} \paren {\alpha - 2} } 6 x^3$ ## Examples ### Cube of Sum $\paren {x + y}^3 = x^3 + 3 x^2 y + 3 x y^2 + y^3$ ### Cube of Difference $\paren {x - y}^3 = x^3 - 3 x^2 y + 3 x y^2 - y^3$ ### Fourth Power of Sum $\paren {x + y}^4 = x^4 + 4 x^3 y + 6 x^2 y^2 + 4 x y^3 + y^4$ ### Fourth Power of Difference $\paren {x - y}^4 = x^4 - 4 x^3 y + 6 x^2 y^2 - 4 x y^3 + y^4$ ### Fifth Power of Sum $\paren {x + y}^5 = x^5 + 5 x^4 y + 10 x^3 y^2 + 10 x^2 y^3 + 5 x y^4 + y^5$ ### Fifth Power of Difference $\paren {x - y}^5 = x^5 - 5 x^4 y + 10 x^3 y^2 - 10 x^2 y^3 + 5 x y^4 - y^5$ ### Sixth Power of Sum $\paren {x + y}^6 = x^6 + 6 x^5 y + 15 x^4 y^2 + 20 x^3 y^3 + 15 x^2 y^4 + 6 x y^5 + y^6$ ### Sixth Power of Difference $\paren {x - y}^6 = x^6 - 6 x^5 y + 15 x^4 y^2 - 20 x^3 y^3 + 15 x^2 y^4 - 6 x y^5 + y^6$ ### Power of $11$: $11^4$ $11^4 = \left({10 + 1}\right)^4 = 14 \, 641$ ### Binomial Theorem: $\paren {1 + x}^7$ $\paren {1 + x}^7 = 1 + 7 x + 21 x^2 + 35 x^3 + 35 x^4 + 21 x^5 + 7 x^6 + x^7$ ### Square Root of 2 $\sqrt 2 = 2 \paren {1 - \dfrac 1 {2^2} - \dfrac 1 {2^5} - \dfrac 1 {2^7} - \dfrac 5 {2^{11} } - \cdots}$ ## Also known as This result is also known as the binomial formula.
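As a quick numerical illustration of the First Order case (an example of ours, not on the source page): taking $\alpha = \frac 1 2$ and $x = 0.02$ gives $\sqrt {1.02} \approx 1 + \frac 1 2 \times 0.02 = 1.01$, against a true value of $1.0099505\ldots$; the discrepancy is about $5 \times 10^{-5}$, matching the stated error estimate $\dfrac {\alpha \paren {\alpha - 1} } 2 x^2 = \dfrac {\paren {1/2} \paren {-1/2} } 2 \paren {0.02}^2 = -5 \times 10^{-5}$.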
2023-03-22 03:44:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9882233738899231, "perplexity": 583.8116818240261}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943749.68/warc/CC-MAIN-20230322020215-20230322050215-00287.warc.gz"}
https://codeahoy.com/learn/soap/ch2/
# The SOAP Message Format ## What is a SOAP message? A unit of communication in SOAP is a message. A SOAP message is an ordinary XML document containing the following elements. • A required Envelope element that identifies the XML document as a SOAP message • An optional Header element that contains the message header information; can include any number of header blocks (simply referred to as headers); used to pass additional processing or control information (e.g., authentication, information related to transaction control, quality of service, and service billing and accounting-related data) • A required Body element that contains the remote method call or response information; all immediate children of the Body element are body blocks (typically referred to simply as bodies) • An optional Fault element that provides information about errors that occurred while processing the message SOAP messages are encoded using XML and must not contain DTD references or XML processing instructions. If a header is present in the message, it must be the first immediate child of the Envelope element. The Body element either directly follows the Header element or must be the first immediate child of the Envelope element if no header is present. Because the root element Envelope is uniquely identified by its namespace, it allows processing tools to immediately determine whether a given XML document is a SOAP message. The main information the sender wants to transmit to the receiver should be in the body of the message. Any additional information needed for intermediate processing or added-value services (e.g., authentication, security, transaction control, or tracing and auditing) goes into the header. This is the common approach for communication protocols. The header contains information that can be used by intermediate nodes along the SOAP message path. The payload or body is the actual message being conveyed. This is the reason why the header is optional. Each of the SOAP elements Envelope, Header, or Body can include an arbitrary number of <any> elements. Recall that the <any> element enables us to extend the XML document with elements not specified by the schema. ### SOAP Message Example An example SOAP message containing a SOAP header block and a SOAP body is given as:

<soap-env:Envelope
 xmlns:soap-env="http://www.w3.org/2003/05/soap-envelope"
 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
 <soap-env:Header>
  <ac:alertcontrol
   soap-env:mustUnderstand="1">
   <ac:priority>high</ac:priority>
   <ac:expires>2006-22-00T14:00:00-05:00</ac:expires>
  </ac:alertcontrol>
 </soap-env:Header>
 <soap-env:Body>
  <a:notify>
   <a:note xsi:type="xsd:string">
    Reminder: meeting today at 11AM in Rm.601
   </a:note>
  </a:notify>
 </soap-env:Body>
</soap-env:Envelope>

Listing 2-1: Example of a SOAP message. The above SOAP message is a request for an alert to a Web service. The request contains a text note (in the Body) and is marked (in the Header) to indicate that the message is high priority, but will become obsolete after the given time. The details are as follows: • Lines 1–2: Prefix soap-env identifies SOAP-defined elements, namely Envelope, Header, and Body, as well as the attribute mustUnderstand (appears in Line 7). • Line 3: Prefix xsd refers to XML Schema elements, in particular the built-in type string (appears in Line 15). • Line 4: Prefix xsi refers to the XML Schema instance type attribute, asserting the type of the note as an XML Schema string (appears in Line 15).
• Line 7: The mustUnderstand attribute value "1" tells the Web service provider that it must understand the semantics of the header block and that it must process the header. The Web service requestor demands express service delivery. • Lines 12–18: The Body element encapsulates the service method invocation information, namely the method name notify, the method parameter note, its associated data type and its value. SOAP message body blocks carry the information needed for the end recipient of a message. The recipient must understand the semantics of all body blocks and must process them all. SOAP does not define the schema for body blocks since they are application specific. There is only one SOAP-defined body block—the Fault element. A SOAP message can pass through multiple nodes on its path. This includes the initial SOAP sender, zero or more SOAP intermediaries, and an ultimate SOAP receiver. SOAP intermediaries are applications that can process parts of a SOAP message as it travels from the sender to the receiver. ### SOAP Intermediaries and Use Cases Intermediaries can both accept and forward (or relay, or route) SOAP messages. Three key use cases define the need for SOAP intermediaries: 1. crossing trust domains, 2. ensuring scalability, and 3. providing value-added services along the SOAP message path. Crossing trust domains is a common issue faced when implementing security in distributed systems. Corporate firewalls and virtual private network (VPN) gateways let some requests cross the trust domain boundary and deny access to others. Similarly, ensuring scalability is an important requirement in distributed systems. We rarely have a simplistic scenario where the sender and receiver are directly connected by a dedicated link. In reality, there will be several network nodes on the communication path that will be crossed by many other concurrent communication flows. Due to the limited computing resources, the performance of these nodes may not scale well with the increasing traffic load. To ensure scalability, the intermediate nodes need to provide flexible buffering of messages and routing based not only on message parameters, such as origin, destination, and priority, but also on the state of the network measured by parameters such as the availability and load of its nodes as well as network traffic information. Lastly, we need intermediaries to provide value-added services in a distributed system. Example services include authentication and authorization, security encryption, transaction management, message tracing and auditing, as well as billing and payment processing. ## SOAP Message Global Attributes SOAP defines four global attributes that are intended to be usable via qualified attribute names on any complex type referencing them. The attributes are as follows: • The mustUnderstand attribute specifies whether it is mandatory or optional that a message receiver understands and processes the content of a SOAP header block. The message receiver to which this attribute refers is named by the role attribute. • The role attribute is exclusively related to header blocks. It names the application that should process the given header block. • The encodingStyle attribute indicates the encoding rules used to serialize parts of a SOAP message. Although the SOAP specification allows this attribute to appear on any element of the message (including header blocks), it mostly applies to body blocks.
• The relay attribute is used to indicate whether a SOAP header block targeted at a SOAP receiver must be relayed if not processed. The mustUnderstand attribute can have values ‘1’ or ‘0’ (or, ‘true’ or ‘false’). Value ‘1’ indicates that the target role of this SOAP message must understand the semantics of the header block and process it. If this attribute is missing, this is equivalent to having value ‘0’. This value indicates that the target role may, but does not have to, process the header block. The role attribute carries a URI value that names the recipient of a header block. This can be the ultimate receiver or an intermediary node that should provide a value-added service to this message. The SOAP specification defines three roles: none, next, and ultimateReceiver. An attribute value of http://www.w3.org/2003/05/soap-envelope/role/next identifies the next SOAP application on the message path as the role for the header block. A header without a role attribute is intended for the ultimate recipient of this message. The encodingStyle attribute declares the mapping from an application-specific data representation to the wire format. An encoding generally defines a data type and data mapping between two parties that have different data representations. The decoding converts the wire representation of the data back to the application-specific data format. The translation step from one data representation to another, and back to the original format, is called serialization and deserialization. The terms marshalling and unmarshalling may be used as alternatives. The scope of the encodingStyle attribute is that of its owner element and that element's descendants, excluding the scope of the encodingStyle attribute on a nested element. The relay attribute indicates whether a header block should be relayed in the forwarded message if the header block is targeted at a role played by the SOAP intermediary, but not otherwise processed by the intermediary. This attribute type is Boolean and, if omitted, it is treated as if included with a value of "false." ## Error Handling in SOAP: The Fault Body Block If a network node encounters problems while processing a SOAP message, it generates a fault message and sends it back to the message sender, i.e., in the direction opposite to the original message flow. The fault message contains a Fault element which identifies the source and cause of the error and allows error-diagnostic information to be exchanged between participants in an interaction. Fault is optional and can appear at most once within the Body element. The fault message originator can be an end host or an intermediary network node which was supposed to relay the original message. The content of the Fault element is slightly different in these two cases, as will be seen below. A Fault element consists of the following nested elements: • The Code element specifies the failure type. Fault codes are identified via namespace-qualified names. SOAP predefines several generic fault codes and allows custom-defined fault codes, as described below. • The Reason element carries a human-readable explanation of the message-processing failure. It is plain text of type string, along with an attribute specifying the language the text is written in. • The Node element names the SOAP node (end host or intermediary) on the SOAP message path that caused the fault to happen. This node is the originator of the fault message.
• The Role element identifies the role the originating node was operating in at the point the fault occurred. Similar to the role attribute (described above), but instead of identifying the role of the recipient of a header block, it gives the role of the fault originator. • The Detail element carries application-specific error information related to the Body element and its sub-elements. ### SOAP Generic Fault Codes As mentioned, SOAP predefines several generic fault codes. They must be namespace qualified and appear in a Code element. These are: • VersionMismatch: The SOAP node received a message whose version is not supported, which is determined by the Envelope namespace. For example, the node supports SOAP version 1.2, but the namespace qualification of the SOAP message Envelope element is not identical to http://www.w3.org/2003/05/soap-envelope. • DataEncodingUnknown: A SOAP node to which a SOAP header block or SOAP body child element information item was targeted does not support the data encoding indicated by the encodingStyle attribute. • MustUnderstand: A SOAP node to which a header block was targeted could not process the header block, and the block contained a mustUnderstand attribute value "true". • Sender: A SOAP message was not appropriately formed or did not contain all required information. For example, the message could lack the proper authentication or payment information. Resending this identical message will again cause a failure. • Receiver: A SOAP message could not be processed due to reasons not related to the message format or content. For example, processing could include communicating with an upstream SOAP node, which did not respond. Resending this identical message might succeed at some later point in time. SOAP allows custom extensions of fault codes through dot separators so that the right side of a dot separator refines the more general information given on the left side. For example, the Code element conveying a sender authentication error would contain Sender.Authentication. SOAP does not require any further structure within the content placed in header or body blocks. Nonetheless, there are two aspects that influence how the header and body of a SOAP message are constructed: encoding rules and communication styles. These are described in Chapters 3 and 4.
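For illustration, a fault message reporting a sender-side error might look as follows. This is a sketch only: it uses the SOAP 1.2 envelope namespace from the earlier example, the concrete SOAP 1.2 syntax in which the Code element wraps the fault code in a nested Value element, and an invented reason text:

<soap-env:Envelope xmlns:soap-env="http://www.w3.org/2003/05/soap-envelope">
 <soap-env:Body>
  <soap-env:Fault>
   <soap-env:Code>
    <soap-env:Value>soap-env:Sender</soap-env:Value>
   </soap-env:Code>
   <soap-env:Reason>
    <soap-env:Text xml:lang="en">Message lacked the required authentication information</soap-env:Text>
   </soap-env:Reason>
  </soap-env:Fault>
 </soap-env:Body>
</soap-env:Envelope>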
2022-12-02 03:34:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22195012867450714, "perplexity": 2607.9018526993746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710890.97/warc/CC-MAIN-20221202014312-20221202044312-00093.warc.gz"}
https://www.projectrhea.org/rhea/index.php/4th_Bonus_Point_-_Ryan_Miller
This is problem 10.21(d) in O+W. The ROC is |z| > 1/2, with 4 zeros at z = 0 and 1 pole at z = 1/2. Since the ROC |z| > 1/2 includes the unit circle, the Fourier transform exists.
2019-09-18 09:20:21
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9548331499099731, "perplexity": 2821.5385767954785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573264.27/warc/CC-MAIN-20190918085827-20190918111827-00273.warc.gz"}
https://www.maplesoft.com/support/help/maple/view.aspx?path=CodeTools%2FProfiling%2FGetProfileTable
CodeTools[Profiling] GetProfileTable get the raw profiling data associated with a procedure Calling Sequence GetProfileTable(p, opts) Parameters p - procedure whose profiling data is to be returned opts - equation(s) of the form output=value where value is one of default or table; specify the type of output Description • The GetProfileTable(p) command returns the rtable of profiling data associated with the procedure p. • You may not need to access the profiling data at this level.  It is usually more useful to use the table of data returned by a call to Build. • The format of the profiling data is as follows: If p has n statements, the data is an rtable of n+1 rows and 3 columns.  The first column is the count of the number of calls, the second is the time spent executing the statement (in milliseconds), and the third is the number of words allocated while executing the statement.  The first row of the rtable is the total calls, time spent, and words used for the entire function.  The rtable has datatype integer[4] and has order C_order. • If you specify the output = table option, GetProfileTable returns the rtable within a table.  This table is compatible with the tables used by other Profiling functions, for example, Merge and PrintProfiles. • It is possible for a procedure name to be assigned a new procedure after the rtable of profiling data has been obtained.  If this occurs, the rtable is no longer valid profiling data for that name.  However, if other names exist that reference the procedure, then the rtable may still be useful. Examples > a := proc(x)     if (x > 1) then         return 1;     else         return 0;     end if; end proc: > $\mathrm{with}\left(\mathrm{CodeTools}\left[\mathrm{Profiling}\right]\right):$ > $\mathrm{Profile}\left(a\right)$ > $a\left(0\right)$ ${0}$ (1) > $\mathrm{GetProfileTable}\left(a\right)$ $\left[\begin{array}{ccc}{1}& {0}& {3}\\ {1}& {0}& {3}\\ {0}& {0}& {0}\\ {1}& {0}& {0}\end{array}\right]$ (2) > $\mathrm{GetProfileTable}\left(a,'\mathrm{output}'='\mathrm{table}'\right)$ ${table}{}\left(\left[{\mathrm{_Inert_ASSIGNEDNAME}}{}\left({"a"}{,}{"PROC"}\right){=}\left[\begin{array}{ccc}{1}& {0}& {3}\\ {1}& {0}& {3}\\ {0}& {0}& {0}\\ {1}& {0}& {0}\end{array}\right]\right]\right)$ (3) > $a\left(2\right)$ ${1}$ (4) > $\mathrm{GetProfileTable}\left(a\right)$ $\left[\begin{array}{ccc}{2}& {0}& {6}\\ {2}& {0}& {6}\\ {1}& {0}& {0}\\ {1}& {0}& {0}\end{array}\right]$ (5)
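Given the layout described above (the first row holds the totals; the columns are call count, time in milliseconds, and words allocated), aggregate figures can be read directly off the returned rtable. A sketch continuing the session above:

> M := GetProfileTable(a):
> M[1, 1]; # total number of calls to a
> M[1, 2]; # total time spent, in milliseconds
> M[1, 3]; # total words allocated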
2020-03-28 22:03:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 12, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8558005690574646, "perplexity": 2817.2838089027928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493120.15/warc/CC-MAIN-20200328194743-20200328224743-00501.warc.gz"}
https://doc.kusakata.com/admin-guide/parport.html
# Parport The parport code provides parallel-port support under Linux. This includes the ability to share one port between multiple device drivers. You can pass parameters to the parport code to override its automatic detection of your hardware. This is particularly useful if you want to use IRQs, since in general these can't be autoprobed successfully. By default IRQs are not used even if they can be probed. This is because there are a lot of people using the same IRQ for their parallel port and a sound card or network card. The parport code is split into two parts: generic (which deals with port-sharing) and architecture-dependent (which deals with actually using the port). ## Parport as modules If you load the parport code as a module, say: # insmod parport to load the generic parport code. You then must load the architecture-dependent code with (for example): # insmod parport_pc io=0x3bc,0x378,0x278 irq=none,7,auto to tell the parport code that you want three PC-style ports, one at 0x3bc with no IRQ, one at 0x378 using IRQ 7, and one at 0x278 with an auto-detected IRQ. Currently, PC-style (parport_pc), Sun bpp, Amiga, Atari, and MFC3 hardware is supported. PCI parallel I/O card support comes from parport_pc. Base I/O addresses should not be specified for supported PCI cards since they are automatically detected. ### modprobe If you use modprobe, you will find it useful to add lines as below to a configuration file in the /etc/modprobe.d/ directory: alias parport_lowlevel parport_pc options parport_pc io=0x378,0x278 irq=7,auto modprobe will load parport_pc (with the options io=0x378,0x278 irq=7,auto) whenever a parallel port device driver (such as lp) is loaded. Note that these are example lines only! You shouldn't in general need to specify any options to parport_pc in order to be able to use a parallel port. ### Parport probe [optional] In 2.2 kernels there was a module called parport_probe, which was used for collecting IEEE 1284 device ID information. This has since been enhanced and now lives with the IEEE 1284 support. When a parallel port is detected, the devices that are connected to it are analysed, and information is logged like this: parport0: Printer, BJC-210 (Canon) The probe information is available from files in /proc/sys/dev/parport/. ## Parport linked into the kernel statically If you compile the parport code into the kernel, then you can use kernel boot parameters to get the same effect. Add something like the following to your LILO command line: parport=0x3bc parport=0x378,7 parport=0x278,auto,nofifo You can have many parport=... statements, one for each port you want to add. Adding parport=0 to the kernel command-line will disable parport support entirely. Adding parport=auto to the kernel command-line will make parport use any IRQ lines or DMA channels that it auto-detects. ## Files in /proc If you have configured the /proc filesystem into your kernel, you will see a new directory entry: /proc/sys/dev/parport. In there will be a directory entry for each parallel port for which parport is configured. In each of those directories are a collection of files describing that parallel port.
The /proc/sys/dev/parport directory tree looks like:

parport
|-- default
|   |-- spintime
|   `-- timeslice
|-- parport0
|   |-- autoprobe
|   |-- autoprobe0
|   |-- autoprobe1
|   |-- autoprobe2
|   |-- autoprobe3
|   |-- devices
|   |   |-- active
|   |   `-- lp
|   |       `-- timeslice
|   |-- irq
|   |-- dma
|   |-- modes
|   `-- spintime
`-- parport1
    |-- autoprobe
    |-- autoprobe0
    |-- autoprobe1
    |-- autoprobe2
    |-- autoprobe3
    |-- devices
    |   |-- active
    |   `-- ppa
    |       `-- timeslice
    |-- irq
    |-- dma
    |-- modes
    `-- spintime

The files have the following contents:

devices/active
    A list of the device drivers using that port. A "+" will appear by the name of the device currently using the port (it might not appear against any). The string "none" means that there are no device drivers using that port.

base-addr
    Parallel port's base address, or addresses if the port has more than one, in which case they are separated with tabs. These values might not have any sensible meaning for some ports.

irq
    Parallel port's IRQ, or -1 if none is being used.

dma
    Parallel port's DMA channel, or -1 if none is being used.

modes
    Parallel port's hardware modes, comma-separated, meaning:
    • PCSPP: PC-style SPP registers are available.
    • TRISTATE: Port is bidirectional.
    • COMPAT: Hardware acceleration for printers is available and will be used.
    • EPP: Hardware acceleration for EPP protocol is available and will be used.
    • ECP: Hardware acceleration for ECP protocol is available and will be used.
    • DMA: DMA is available and will be used.
    Note that the current implementation will only take advantage of COMPAT and ECP modes if it has an IRQ line to use.

autoprobe
    Any IEEE-1284 device ID information that has been acquired from the (non-IEEE 1284.3) device.

autoprobe[0-3]
    IEEE 1284 device ID information retrieved from daisy-chain devices that conform to IEEE 1284.3.

spintime
    The number of microseconds to busy-loop while waiting for the peripheral to respond. You might find that adjusting this improves performance, depending on your peripherals. This is a port-wide setting, i.e. it applies to all devices on a particular port.

timeslice
    The number of milliseconds that a device driver is allowed to keep a port claimed for. This is advisory, and a driver can ignore it if it must.

default/*
    The defaults for spintime and timeslice. When a new port is registered, it picks up the default spintime. When a new device is registered, it picks up the default timeslice.

## Device drivers Once the parport code is initialised, you can attach device drivers to specific ports. Normally this happens automatically; if the lp driver is loaded it will create one lp device for each port found. You can override this, though, by using parameters either when you load the lp driver: # insmod lp parport=0,2 or on the LILO command line: lp=parport0 lp=parport2 Both the above examples would inform lp that you want /dev/lp0 to be the first parallel port, and /dev/lp1 to be the third parallel port, with no lp device associated with the second port (parport1). Note that this is different to the way older kernels worked; there used to be a static association between the I/O port address and the device name, so /dev/lp0 was always the port at 0x3bc. This is no longer the case - if you only have one port, it will default to being /dev/lp0, regardless of base address. Also: • If you selected the IEEE 1284 support at compile time, you can say lp=auto on the kernel command line, and lp will create devices only for those ports that seem to have printers attached.
• If you give PLIP the timid parameter, either with plip=timid on the command line, or with insmod plip timid=1 when using modules, it will avoid any ports that seem to be in use by other devices. • IRQ autoprobing works only for a few port types at the moment. ## Reporting printer problems with parport If you are having problems printing, please go through these steps to try to narrow down where the problem area is. When reporting problems with parport, really you need to give all of the messages that parport_pc spits out when it initialises. There are several code paths: • polling • interrupt-driven, protocol in software • interrupt-driven, protocol in hardware using PIO • interrupt-driven, protocol in hardware using DMA The kernel messages that parport_pc logs give an indication of which code path is being used. (They could be a lot better actually..) For normal printer protocol, having IEEE 1284 modes enabled or not should not make a difference. To turn off the 'protocol in hardware' code paths, disable CONFIG_PARPORT_PC_FIFO. Note that when they are enabled they are not necessarily used; it depends on whether the hardware is available, enabled by the BIOS, and detected by the driver. So, to start with, disable CONFIG_PARPORT_PC_FIFO, and load parport_pc with irq=none. See if printing works then. It really should, because this is the simplest code path. If that works fine, try with io=0x378 irq=7 (adjust for your hardware), to make it use interrupt-driven in-software protocol. If that works fine, then one of the hardware modes isn't working right. Enable CONFIG_FIFO (no, it isn't a module option, and yes, it should be), set the port to ECP mode in the BIOS and note the DMA channel, and try with:

io=0x378 irq=7 dma=none (for PIO)
io=0x378 irq=7 dma=3 (for DMA)
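The per-port files described in the /proc section above can also be read with ordinary shell tools to confirm what the driver detected; for example (a sketch; the port number and the values reported depend on your hardware):

# cat /proc/sys/dev/parport/parport0/irq
# cat /proc/sys/dev/parport/parport0/modes
# cat /proc/sys/dev/parport/parport0/devices/active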
2018-12-11 02:33:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33965566754341125, "perplexity": 6974.353859050831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823550.42/warc/CC-MAIN-20181211015030-20181211040530-00451.warc.gz"}
https://itprospt.com/num/13050345/ba-ooesenean3eesled2340ee377
# BA #ooESENEan3eesled2340EE377

## Question

BA #ooESENEan 3eesled 2340 EE377

#### Similar Solved Questions

##### For this assignment, you will review two different situations that both involve comparing a sample to a population. Consider the information available for each situation, choose the correct test, and use the data to conduct it. A researcher wants to compare the email use of the employees at one company to the corresponding population. Data is collected on the number of emails received by the company's employees. It will be necessary to use this sample data as well as population data to determine...

##### The following data are the amounts that a sample of 15 customers spent for lunch ($) at a fast-food restaurant. Complete parts (a) through (d) below: 7.45 4.91 6.31 6.51 5.78 5.49 6.50 7.91 8.34 8.23 9.50 9.61 7.06 6.76 5.89

##### Mirele (RC Box A (Round experment consists contalns 1 Jnoa drawn from = Bubmleelona H j0 Uaed and decimal U h blackrandore Sror the frst Box M Box A. was four whlte given marble Is transferred = white marbles that the and W marble 'peiq was {henrblee 1

##### Reaction of $\left(\mathrm{CH}_{3}\right)_{3}\mathrm{CH}$ with $\mathrm{Cl}_{2}$ forms two products: $\left(\mathrm{CH}_{3}\right)_{2}\mathrm{CHCH}_{2}\mathrm{Cl}$ (63%) and $\left(\mathrm{CH}_{3}\right)_{3}\mathrm{CCl}$ (37%). Why is the major product formed by cleavage of the stronger $1^{\circ}$ C-H...

##### Let f(c) = (2' + 4r + 5)(120 − 2c); find f'(2). Tip: enter your answer as an expression, e.g. 3x^2+1, x/5, (a+b)/c. Be sure your variables match those in the question.

##### Find the cube roots of each complex number. Leave the answers in trigonometric form. Then graph each cube root as a vector in the complex plane. $$1+i \sqrt{3}$$

##### In Exercises $55-60$, find the component form of $\mathbf{v}$ and sketch the specified vector operations geometrically, where $\mathbf{u}=2 \mathbf{i}-\mathbf{j}$ and $\mathbf{w}=\mathbf{i}+2 \mathbf{j}.$ $$\mathbf{v}=\frac{3}{2} \mathbf{u}$$

##### Widows A recent study indicated that 40% of the 112 women over age 55 in the study were widows.
Round up your answers to the next whole number for the following questions. How large a sample must you take to be 99% confident that the estimate is within 0.02 of the true proportion of women over age 55 who are widows?

##### Write a balanced chemical equation for the neutralization reaction of (a) hydrochloric acid and (b) acetic acid with sodium hydroxide.

##### Compute the cosine of the angle between the plane through P = (9,0,0), Q = (0,5,0), and R = (0,0,7) and the yz-plane, defined as the angle between their normal vectors. (Use symbolic notation and fractions where needed.) cos(θ) =

##### Explain the hormone system that leads to the secretion of testosterone in biologic males OR estrogen in biologic females (choose one). Imelda's "condition" (since it doesn't have much ill effect on personal health, I'm loathe to label this anything other than a natural variant) involves a testosterone receptor that does not bind testosterone. Would this receptor (whether it's the common variant or not) be inside of cells or on the surface of cells? Explain. This is a revi...

##### The president of a university claimed that the entering class this year appeared to be larger than the entering classes from previous years, but their mean SAT score is lower than in previous years. He took a sample of 20 of this year's entering students and found that their mean SAT score is 1501 with a standard deviation of 53. The university's records indicate that the mean SAT score for entering students from previous years is 1520. He wants to find out if his claim is supported by the...
##### QuGai4n Jese Manz Chanc; the local Cll -nl mnchil pupplic- c MALIS Lcconlion: tor tcupcoming Wnct the high school Jcssc cuIchiscd Uce Sccts CNit pincr (our boxe Mancns Ind Ite Eue stcks bill; belone Laxe > was $24.40 ... $10.40 when she bought ... Chad's purchase was $13.40 when he bought three sheets of paper, two boxes of markers and a glue stick. Determine the unit cost of each item. ... a developer has produced a new handheld computer. He sold ...

##### Evaluate the following limits: (a) $\lim_{h\to 0}\dfrac{\cos(x+h)-\cos x}{h}$; (b) $\lim_{x\to 0}\sin(\sin x)$; (c) $\lim_{x\to 0^{+}} x\sin$...

##### Question 16
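For the widows question above, here is a sketch of the standard sample-size calculation (assuming the usual normal-approximation formula with $\hat{p}=0.40$, $\hat{q}=0.60$, and $z_{\alpha/2}=2.576$ for 99% confidence):

$$n = \hat{p}\,\hat{q}\left(\frac{z_{\alpha/2}}{E}\right)^{2} = (0.40)(0.60)\left(\frac{2.576}{0.02}\right)^{2} \approx 3981.5 \;\Rightarrow\; n = 3982$$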
2022-08-17 10:12:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5553123354911804, "perplexity": 5403.57238950344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572898.29/warc/CC-MAIN-20220817092402-20220817122402-00156.warc.gz"}
http://pagesetter.net/not-working/readfile-php.html
## Readfile Php

Never accept paths as input: it's very tempting to write something like

    readfile($_GET['file']);

but before you do, think about it: anyone could request any file on the server, even if it's outside the public web root.

Am on shared hosting, so not sure I can amend it.

Synchronization and File Position: If hFile is opened with FILE_FLAG_OVERLAPPED, it is an asynchronous file handle; otherwise it is synchronous.

I'm running two servers: one delivers the site, the other the podcast files. So what puzzles me is where it goes wrong, then.

It shows the correct file format, but still doesn't want to display the file.

May 29, 2007, #5, barbara1712: On the wamp server it works fine (file downloaded), but on the actual server the content of the file is echoed to the screen (without a specific instruction) but the file... Put the download files inside a folder (e.g. ...). Of course you need to set up the DB, table, and columns.

Jun 8, 2007, #11, natalalaa: Try to put fopen('your full file path');

Return value: If the function succeeds, the return value is nonzero (TRUE).

## Ob_clean

...bytes delivered like readfile() does. } return $status; } ?>

peavey at pixelpickers dot com: We are stuck with the distinct possibility of half of our visitors seeing either an annoying third blank window being opened or the script writing over their original window, depending on...

## Php Readfile Image Not Working

Then file 1 passes the information over to file 2 to do the download logic and write the header information.

If hFile was opened with FILE_FLAG_OVERLAPPED, the following conditions are in effect: the lpOverlapped parameter must point to a valid and unique OVERLAPPED structure, otherwise the function can incorrectly report that...

Regardless, my point stands: PHP makes it easy to hack together code that appears to be working, but developers should read and adhere to the official specifications. UPDATE: I released a free...

## Fread Php

I really don't know what I'm doing wrong. If an error occurs, FALSE is returned and, unless the function was called as @readfile(), an error message is printed.

If they have it set a specific way, honor their setting.

When a synchronous read operation reaches the end of a file, ReadFile returns TRUE and sets *lpNumberOfBytesRead to zero.

Basic first-day-at-school security principle, that.
daren schwenke: If you are lucky enough to not be on shared hosting and have apache, look...

I tried to display data in $s.

Pipes: If an anonymous pipe is being used and the write handle has been closed, when ReadFile attempts to read using the pipe's corresponding read handle, the function returns FALSE and...

What should I do about this security issue?

## Php Download File From Server

• redstrike: I want to force download files instead of letting browsers handle files automatically.
• Reads occur at the position specified by the file pointer, if supported by the device.
• "I spent 3h searching why the function doesn't work :) tnx!" – Miha Trtnik. "This is unnecessary."
• By default it will replace.
• ReadFile sets this value to zero before doing any work or error checking.
• open_basedir can be edited via php.ini.

I was using this line of PHP code to handle downloads. Any ideas?

## Readfile Php Download

    $filename = basename($url) . PHP_EOL;
    $filesize = getRemoteFileSize(...

To cancel all pending asynchronous I/O operations, use either: CancelIo - this function only cancels operations issued by the calling thread for the specified file handle.

lpOverlapped [in, out, optional]: A pointer to an OVERLAPPED structure is required if the hFile parameter was opened with FILE_FLAG_OVERLAPPED; otherwise it can be NULL.

## Php Fopen

Handling large file sizes: readfile() is a simple way to output files. For an hFile that does not support byte offsets, Offset and OffsetHigh are ignored. There's many files that exist, but can't be read. So this is the proper chunked readfile (which isn't really readfile at all, and should probably be crossposted to passthru(), fopen(), and popen() just so browsers can find this information). The "problem" with using readfile (which works fine) is forgetting to disable the output buffer (i.e. ...

    gmdate("D, d M Y H:i:s") . " GMT");
    header("Content-Disposition: attachment; filename={$new_name}");
    header("Content-Transfer-Encoding: binary");
    ?>

Cheers, Peavey

Brian: If you... Use headers correctly. This is a very widespread problem, and unfortunately even the PHP manual is plagued with errors.

Can somebody help me?

    $FileName = $File; // "TestFile.txt"
    print($FileName);
    $len = filesize($File); // calculate file size
    print($len);
    if (ini_get('zlib.output_compression'))
        ini_set('zlib.output_compression', 'Off');
    if (file_exists($File)) {
        header('Content-Description: File Transfer');
        header('Content-Type: text/plain');
        header('Content-Disposition: attachment; filename=' . basename($File));
        header('Transfer-Encoding: binary');
        header('Expires: 0');
        header('Cache-Control: must-revalidate');
        header('Pragma: public');
        header('Content-Length: ' . $len);
        ob_clean();
        flush();
        @readfile($File);
        exit;
    } else ...
2018-07-16 10:43:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24684812128543854, "perplexity": 8503.927954302455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589251.7/warc/CC-MAIN-20180716095945-20180716115945-00579.warc.gz"}
https://physics.stackexchange.com/questions/668062/if-we-compress-a-mass-m-to-create-a-black-hole-how-does-it-gravitational-fiel
# If we compress a mass $M$ to create a black hole, how does its gravitational field change?

If we create a black hole by compressing a mass $$M$$ to a radius smaller than the Schwarzschild radius, would the gravitational force of the mass $$M$$ (now a black hole) be different before and after it becomes a black hole?

• No, as long as the BH doesn't gain angular momentum or charge during the collapse. – KP99 Sep 25 at 15:36
• Are you counting the energy & pressure required to compress the matter? Sep 25 at 15:57
• Possible duplicates: physics.stackexchange.com/q/664342/2451 and links therein. Sep 25 at 16:05
• @safesphere So does that mean black holes are impossible objects in GR? – KP99 Sep 26 at 16:04
2021-11-27 08:18:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.526914656162262, "perplexity": 833.4408059700887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358153.33/warc/CC-MAIN-20211127073536-20211127103536-00447.warc.gz"}
https://yalmip.github.io/command/nchoosek/
# nchoosek

### Syntax

    y = nchoosek(x,k)

Note that only x can be a decision variable; k has to be a constant integer. The operator is implemented using a mixed-integer model.
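A hypothetical usage sketch (the bounds, constraint, and objective are illustrative; YALMIP and a mixed-integer solver are assumed to be installed):

    % Find the smallest integer x with nchoosek(x,3) >= 20 (answer: x = 6).
    x = intvar(1,1);
    y = nchoosek(x,3);   % x is a decision variable, k = 3 is a constant
    optimize([2 <= x <= 10, y >= 20], x);
    value(x)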
2020-09-19 09:20:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24603603780269623, "perplexity": 2009.9086290927053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191160.14/warc/CC-MAIN-20200919075646-20200919105646-00409.warc.gz"}
https://pos.sissa.it/289/023/
Volume 289 - VIII International Workshop On Charm Physics (CHARM2016) - CP Violation, Mixing and Nonleptonic Decays

CP Violation, Mixing and nonleptonic decays at BESIII

M.G. Zhao*, on behalf of the BESIII collaboration

Full text: pdf
Published on: February 28, 2017

Abstract

The BESIII detector at the Beijing Electron-Positron Collider has collected the world's largest charm threshold data sample (2.93 fb$^{-1}$), which provides a good laboratory for quantum correlation measurements as well as for testing QCD in charm meson decays. This work focuses on the recent measurements of $CP$ asymmetry, mixing parameters, strong phase differences and branching fractions of $D^0\to K^-\pi^+\pi^+\pi^-$, $D^0\to K^0_SK^+K^-$, $D^{0,\pm}\to PP^{\prime}$ ($P$ = pseudoscalar), $D^{0,\pm}\to\omega\pi^{0,\pm}$, $D^+_S\to\eta^{\prime}\rho$ and $D^+_S\to\eta^{\prime}+\text{anything}$ at the BESIII experiment.

DOI: https://doi.org/10.22323/1.289.0023

Open Access
2022-08-09 05:17:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5031394362449646, "perplexity": 8116.664890460835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570901.18/warc/CC-MAIN-20220809033952-20220809063952-00465.warc.gz"}
http://www.oalib.com/relative/3246489
Physics, 2000, DOI: 10.1088/1126-6708/2000/01/038 Abstract: We analyse D-branes on orbifolds with discrete torsion, extending earlier results. We analyze certain Abelian orbifolds of the type C^3/\Gamma, where \Gamma is given by Z_m x Z_n, for the most general choice of discrete torsion parameter. By comparing with the AdS/CFT correspondence, we can consider different geometries which give rise to the same physics. This identifies new mirror pairs and suggests new dualities at large N. As a by-product we also get a more geometric picture of discrete torsion.

Chien-Hao Liu, Mathematics, 1998, Abstract: The study of fibrations of the target manifolds of string/M/F-theories has provided many insights to the dualities among these theories or even as a tool to build up dualities since the work of Strominger, Yau, and Zaslow on the Calabi-Yau case. For M-theory compactified on a Joyce manifold $M^7$, the fact that $M^7$ is constructed via a generalized Kummer construction on a 7-torus ${\smallBbb T}^7$ with a torsion-free $G_2$-structure $\phi$ suggests that there are natural fibrations of $M^7$ by ${\smallBbb T}^3$, ${\smallBbb T}^4$, and K3 surfaces in a way governed by $\phi$. The local picture of some of these fibrations and their roles in dualities between string/M-theory have been studied intensively in the work of Acharya. In this present work, we explain how one can understand their global and topological details in terms of bundles over orbifolds. After the essential background is provided in Sec. 1, we give general discussions in Sec. 2 about these fibrations, their generic and exceptional fibers, their monodromy, and the base orbifolds. Based on these, one obtains a 5-step-routine to understand the fibrations, which we illustrate by examples in Sec. 3. In Sec. 4, we turn to another kind of fibrations for Joyce manifolds, namely the fibrations by the Calabi-Yau threefolds constructed by Borcea and Voisin. All these fibrations arise freely and naturally from the work of Joyce. Understanding how the global structure of these fibrations may play roles in string/M-theory duality is one of the major issues for further pursuit.

Mathematics, 2007, Abstract: Previously the two of the authors defined a notion of dual Calabi-Yau manifolds in a G_2 manifold, and described a process to obtain them. Here we apply this process to a compact G_2 manifold, constructed by Joyce, and as a result we obtain a pair of Borcea-Voisin Calabi-Yau manifolds, which are known to be mirror duals of each other.

Mithat Unsal, Physics, 2004, DOI: 10.1088/1126-6708/2005/12/033 Abstract: We construct a nonperturbative regularization for Euclidean noncommutative supersymmetric Yang-Mills theories with four (N=(2,2)), eight (N=(4,4)) and sixteen (N=(8,8)) supercharges in two dimensions. The construction relies on orbifolds with discrete torsion, which allows noncommuting space dimensions to be generated dynamically from a zero dimensional matrix model in the deconstruction limit. We also nonperturbatively prove that the twisted topological sectors of ordinary supersymmetric Yang-Mills theory are equivalent to a noncommutative field theory on the topologically trivial sector with reduced rank and quantized noncommutativity parameter.
The key point of the proof is to reinterpret 't Hooft's twisted boundary condition as an orbifold with discrete torsion by lifting the lattice theory to a zero dimensional matrix theory.

B. S. Acharya, Physics, 1996, Abstract: It is argued that $M$-theory compactified on {\it any} of Joyce's $Spin(7)$ holonomy 8-manifolds is dual to compactifications of heterotic string theory on Joyce 7-manifolds of $G_2$ holonomy.

Mathematics, 2007, Abstract: We study the Dirichlet problem for fully nonlinear, degenerate elliptic equations of the form f(Hess u)=0 on a smoothly bounded domain D in R^n. In our approach the equation is replaced by a subset F of the space of symmetric n x n matrices, with bdy(F) contained in the set {f=0}. We establish the existence and uniqueness of continuous solutions under an explicit geometric "F-convexity" assumption on the boundary bdy(F). The topological structure of F-convex domains is also studied and a theorem of Andreotti-Frankel type is proved for them. Two key ingredients in the analysis are the use of subaffine functions and Dirichlet duality, both introduced here. Associated to F is a Dirichlet dual set F* which gives a dual Dirichlet problem. This pairing is a true duality in that the dual of F* is F, and in the analysis the roles of F and F* are interchangeable. The duality also clarifies many features of the problem, including the appropriate conditions on the boundary. Many interesting examples are covered by these results, including: all branches of the homogeneous Monge-Ampere equation over R, C and H; equations appearing naturally in calibrated geometry, Lagrangian geometry and p-convex riemannian geometry; and all branches of the Special Lagrangian potential equation.

B. S. Acharya, Physics, 1995, Abstract: We construct the heterotic dual theory in four dimensions of eleven dimensional supergravity compactified on a particular Joyce manifold, $J$. In particular, $J$ is constructed from resolving fixed point singularities of orbifolds of the seven torus in such a way that one is forced to consider a generalised orbifold on the heterotic side. We conjecture that a heterotic dual exists for all the compact 7-manifolds of $G_{2}$ holonomy constructed by Joyce.

Eric R. Sharpe, Physics, 2000, DOI: 10.1103/PhysRevD.68.126003 Abstract: In this article we explain discrete torsion. Put simply, discrete torsion is the choice of orbifold group action on the B field. We derive the classification H^2(G, U(1)), we derive the twisted sector phases appearing in string loop partition functions, we derive M. Douglas's description of discrete torsion for D-branes in terms of a projective representation of the orbifold group, and we outline how the results of Vafa-Witten fit into this framework. In addition, we observe that additional degrees of freedom (known as shift orbifolds) appear in describing orbifold group actions on B fields, in addition to those classified by H^2(G, U(1)), and explain how these new degrees of freedom appear in terms of twisted sector contributions to partition functions and in terms of orbifold group actions on D-brane worldvolumes. This paper represents a technically simplified version of prior papers by the author on discrete torsion. We repeat here technically simplified versions of results from those papers, and have included some new material.

E. Sharpe, Physics, 2003, DOI: 10.1016/S0550-3213(03)00412-7 Abstract: In this paper we make two observations related to discrete torsion.
First, we observe that an old obscure degree of freedom (momentum/translation shifts) in (symmetric) string orbifolds is related to discrete torsion. We point out how our previous derivation of discrete torsion from orbifold group actions on B fields includes these momentum lattice shift phases, and discuss how they are realized in terms of orbifold group actions on D-branes. Second, we describe the M theory dual of IIA discrete torsion, a duality relation to our knowledge not previously understood. We show that IIA discrete torsion is encoded in analogues of the shift orbifolds above for the M theory C field.

B. S. Acharya, Physics, 1996, DOI: 10.1016/0550-3213(96)00326-4 Abstract: We present an ansatz which enables us to construct heterotic/M-theory dual pairs in four dimensions. It is checked that this ansatz reproduces previous results and that the massless spectra of the proposed dual pairs agree. The new dual pairs consist of M-theory compactifications on Joyce manifolds of $G_2$ holonomy and Calabi-Yau compactifications of heterotic strings. These results are further evidence that M-theory is consistent on orbifolds. Finally, we interpret these results in terms of M-theory geometries which are K3 fibrations and heterotic geometries which are conjectured to be $T^3$ fibrations. Even though the new dual pairs are constructed as non-freely acting orbifolds of existing dual pairs, the adiabatic argument is apparently not violated.
2019-12-10 19:03:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7613545060157776, "perplexity": 885.2116368609051}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540528490.48/warc/CC-MAIN-20191210180555-20191210204555-00146.warc.gz"}
https://socratic.org/questions/how-can-an-atomic-mass-not-be-a-whole-number
# How can an atomic mass not be a whole number?

E.g., iron has four naturally occurring isotopes with masses 53.940 $\mu$, 55.935 $\mu$, 56.935 $\mu$, and 57.933 $\mu$. If you take their abundances, multiply each mass by its abundance, and sum the products, you get the average atomic mass, which is 55.846 $\mu$.
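As a sketch using the commonly tabulated abundances (approximately 5.85%, 91.75%, 2.12%, and 0.28%; slight rounding explains the small difference from 55.846):

$$\bar{m} \approx (53.940)(0.0585) + (55.935)(0.9175) + (56.935)(0.0212) + (57.933)(0.0028) \approx 55.85\ \mu$$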
2020-01-20 14:56:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.871711015701294, "perplexity": 1868.130381616054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598800.30/warc/CC-MAIN-20200120135447-20200120164447-00337.warc.gz"}
https://zitaoshen.rbind.io/project/optimization/introduction-on-multi-stage-robust-optimization/
# 1 Introduction

Multistage robust optimization is focused on worst-case optimization problems affected by uncertainty in a dynamic setting. Unlike static robust optimization problems, in which all decisions are implemented simultaneously and before any uncertainty is realized, multistage robust optimization includes some adjustable decisions, which depend on the data revealed before the decisions are made. Therefore, multistage robust optimization is also called adaptive robust multistage optimization or adjustable robust multistage optimization.

Although robust optimization has been discussed for decades, multistage robust optimization is a relatively new topic. In 2002, A. Ben-Tal, A. Goryashko, and A. Nemirovski were the first to discuss robust multistage decision problems, opening the field to numerous other papers either dealing with theoretical concepts or applying the framework to practical problems, such as inventory management, energy generation and distribution, portfolio management, facility location and transportation, dynamic pricing, and so on.

# 2 Decision Types

Based on whether decisions are allowed to depend on the revealed information or not, there are two kinds of decisions.

Here-and-now decisions: decisions that should be made, as a result of solving the problem, before the actual data "reveals itself," and as such should be independent of the actual values of the data.

Wait-and-see decisions: decisions that can be made after the controlled system "starts to live," at time instants when part (or all) of the true data is revealed. It is fully legitimate to allow these decisions to depend on the part of the data that indeed "reveals itself" before the decision should be made.

# 3 General Formulation

For a better understanding of the formulation of adjustable robust multistage optimization, consider a general-type uncertain optimization problem: a collection
$p=\{\min_{x}\{f(x,\xi):F(x,\xi)\in K\}:\xi\in \Xi\}\quad (1)$
of instances, i.e., optimization problems of the form
$\min_{x}\{f(x,\xi):F(x,\xi)\in K\}$
where $$x \in R^n$$ is the decision vector, $$\xi\in R^L$$ represents the uncertain data, the real-valued function $$f(x,\xi)$$ is the objective, and the vector-valued function $$F(x,\xi)$$, taking values in $$R^m$$, along with a set $$K\subset R^m$$, specifies the constraints; finally, $$\Xi\subset R^L$$ is the uncertainty set.

The Robust Counterpart (RC) of uncertain problem (1) is defined as
$\min_{x,t}\{t:\forall \xi \in \Xi: f(x,\xi)\leq t,\; F(x,\xi)\in K\}\quad (2)$
Therefore, all the decision variables $$x$$ are here-and-now decisions. Now, we want to adjust the robust counterpart so that we can incorporate wait-and-see decisions into the decision-making process. First, for every $$j \leq n$$, define
$x_j=X_j(P_j \xi)\quad (3)$
where the $$P_j$$ are matrices, given in advance, specifying the "information base" of the decisions $$x_j$$. In other words, the value of $$P_j$$ decides how much of the true data $$x_j$$ can depend on. The $$X_j(\cdot)$$ are called decision rules; these rules can in principle be arbitrary functions on the corresponding vector spaces.

The Adjustable Robust Counterpart (ARC) of uncertain problem (1) is defined as
$\min_{t,\{X_j(\cdot)\}^n_{j=1}}\{t:\forall \xi \in \Xi: f(X(\xi),\xi)\leq t,\;F(X(\xi),\xi)\in K\}$
$X(\xi)=[X_j(P_j \xi)]_{j=1}^{n}\quad (4)$
obtained by replacing the decision variables $$x_j$$ in (2) with the functions (3).
Note that the ARC is an extension of the RC; the latter is the special case of the former corresponding to the trivial information base in which all matrices $$P_j$$ are zero, so that all $$x_j$$ are here-and-now decisions.

# 4 Choice of Policies

From the computational viewpoint, solving a robust multistage linear programming model is NP-hard [4]. The reason is that the ARC is typically severely computationally intractable. Problem (4) is an infinite-dimensional problem, where one wants to optimize over functions - decision rules - rather than vectors, and these functions, in general, depend on many real variables. Seemingly the only option here is sticking to a parametric family of decision rules chosen in advance, like piecewise constant/linear/quadratic functions of $$P_j\xi$$ with simple domains of the pieces. With this approach, a candidate decision rule is identified by the vector of values of the associated parameters, and the ARC becomes a finite-dimensional problem, the parameters being our new decision variables. This approach is indeed possible and will in fact be the focus of what follows.

## 4.1 Static Policies

The simplest type of policy to consider is a static one, whereby all future decisions are constant and independent of the intermediate observations. Such policies do not increase the underlying complexity of the decision problem, and often they result in tractable robust counterparts. For instance, this kind of policy can be optimal for LPs with row-wise uncertainty. Here is an example:
$P=\{\min_x\{c^T_\xi x+d_\xi:a^T_{i\xi}x\leq b_{i\xi},\; i=1,...,m\}:\xi\in \Xi\}$
where the uncertainty vector can be partitioned into $$J + 1$$ blocks, the objective depends solely on $$\xi_0\in \Xi_0$$, and the data in the $$j$$th constraint depend solely on $$\xi_j\in \Xi_j$$.

## 4.2 Affine Decision Rules

Consider an affinely perturbed uncertain conic problem:
$C=\{\min_{x\in R^n}\{c^T_{\xi}x+d_{\xi}:A_{\xi}x+b_{\xi}\in K\}: \xi \in \Xi\}\quad (5)$
where $$c_{\xi}, d_{\xi}, A_{\xi}, b_{\xi}$$ are affine in $$\xi$$, $$K$$ is a cone which is the direct product of nonnegative rays, and $$\Xi$$ is a convex compact uncertainty set defined as
$\Xi=\{\xi\in R^L: \exists u : \mathbf{P}(\xi,u)\succeq 0\}$
where $$\mathbf{P}$$ is affine in $$[\xi; u]$$. Assume that, along with the problem, we are given an information base $$\{P_j\}^n_{j=1}$$ for it; here the $$P_j$$ are $$m_j\times n$$ matrices. Then we use affine decision rules to approximate the ARC of the problem. Affine decision rules have the following structure:
$x_j=X_j(P_j\xi)=p_j+q^T_jP_j\xi,\; j=1,...,n\quad (6)$
Hence the Affinely Adjustable Robust Counterpart (AARC), which is the resulting restricted version of the ARC of (5), is
$\min_{t,\{p_j,q_j\}^n_{j=1}} \{\{ t: \sum_{j=1}^n c^j_\xi[ p_j+q_j^T P_j \xi ]+d_\xi-t \leq 0, \sum_{j=1}^n A^j_\xi[ p_j+q_j^T P_j\xi ]+b_\xi\in K \}\forall \xi\in \Xi\}$
where $$c^j_{\xi}$$ is the $$j$$th entry of $$c_\xi$$, and $$A^j_\xi$$ is the $$j$$th column of $$A_\xi$$. Note that the variables in this problem are $$t$$ and the coefficients $$p_j,q_j$$ of the affine decision rules (6). As such, these variables do not uniquely specify the actual decisions $$x_j$$; those decisions are uniquely defined by the coefficients together with the corresponding portions $$P_j\xi$$ of the true data, once revealed.

For illustration, take a look at Section 5.3.4 of the reference book Robust Optimization [4], which applies the AARC to a simple multi-product multi-period inventory model.
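To make the AARC idea concrete, here is a minimal Python sketch (all cost data, bounds, and the two-period structure are hypothetical). It solves a toy two-stage inventory problem with an affine second-stage rule $$x_2(d_1)=p+q\,d_1$$; since every constraint is convex in the box-constrained demand, the worst case is attained at a vertex of the box, so vertex enumeration stands in for a dual reformulation:

```python
import itertools
import cvxpy as cp

# Toy two-stage inventory AARC (all data hypothetical).
c1, c2, h, b = 1.0, 1.2, 0.1, 2.0    # ordering, holding, backlog costs
y1 = 0.0                             # initial stock
d_lo, d_hi = 5.0, 15.0               # box uncertainty for demands (d1, d2)

x1 = cp.Variable(nonneg=True)        # here-and-now first-stage order
p, q = cp.Variable(), cp.Variable()  # coefficients of the rule x2(d1) = p + q*d1
t = cp.Variable()                    # epigraph variable for the worst-case cost

cons = []
# Costs are convex in (d1, d2), so the maximum over the box is attained at
# one of its four vertices; enforce the constraints at each vertex.
for d1, d2 in itertools.product([d_lo, d_hi], repeat=2):
    x2 = p + q * d1                  # decision rule evaluated in this scenario
    s1 = y1 + x1 - d1                # inventory position after period 1
    s2 = s1 + x2 - d2                # inventory position after period 2
    cost = (c1 * x1 + c2 * x2
            + h * cp.pos(s1) + b * cp.pos(-s1)
            + h * cp.pos(s2) + b * cp.pos(-s2))
    cons += [cost <= t, x2 >= 0]

cp.Problem(cp.Minimize(t), cons).solve()
print("x1 =", x1.value, "rule: p =", p.value, "q =", q.value, "worst cost =", t.value)
```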
## 4.3 Piecewise Affine Decision Rules

It is to be expected that, in the general case, applying affine decision rules generates suboptimal policies compared to fully adjustable ones. Thus, one might be tempted to investigate whether tighter conservative approximations can be obtained by employing more sophisticated (yet tractable) decision rules, in particular nonlinear ones. Below are some examples of how this can be achieved by taking piecewise affine decision rules.

### K-Adaptability

Consider now a multistage optimization problem where the future-stage decisions are subject to integer constraints. The framework introduced above cannot address such a setup, since the later-stage policies, $$X_j(P_j\xi)$$, are necessarily continuous functions of the uncertainty. The K-adaptability framework deals with this situation; it is introduced in: Finite Adaptability in Multistage Linear Optimization.

Here is an example. Let's start with a simple case, i.e., two-stage robust optimization. Suppose the second-stage variables, $$x_2=X_2(P_2\xi)$$, are piecewise constant functions of the uncertainty, with $$k$$ pieces. Due to the inherent finiteness of the framework, the resulting formulation can accommodate discrete variables. In addition, the level of adaptability can be adjusted by changing the number of pieces in the piecewise constant second-stage variables.

Consider a two-stage problem of the form
$\min\; c^T x_1+d^T x_2$
$s.t.\; A_1(\xi)x_1+A_2(\xi)x_2\geq b, \;\forall \xi\in \Xi$
$x_1\in \chi_1,\; x_2\in \chi_2$
where $$\chi_2$$ may contain integrality constraints. In the k-adaptability framework, with k-piecewise constant second-stage variables, this becomes
$Adapt_k(\Xi)= \min_{\Xi=\Xi_1\cup...\cup \Xi_k} \left \{ \begin{array}{ll} \min & c^T x_1+\max\{d^T x_2^{(1)},...,d^T x_2^{(k)}\}\\ s.t. & A_1(\xi)x_1+A_2(\xi)x_2^{(1)}\geq b,\;\forall \xi\in \Xi_1\\ & \vdots\\ & A_1(\xi)x_1+A_2(\xi)x_2^{(k)}\geq b,\;\forall \xi\in \Xi_k\\ & x_1\in \chi_1,\; x_2^{(j)}\in \chi_2 \end{array} \right \}$

If the partition of the uncertainty set, $$\Xi=\Xi_1\cup...\cup \Xi_k$$, is fixed, then the resulting problem retains the structure of the original nominal problem, and the number of second-stage variables grows by a factor of $$k$$. Furthermore, the static problem (i.e., with no adaptability) corresponds to the case $$k = 1$$; hence if the static problem is feasible, then the k-adaptable problem is feasible for any $$k$$. This allows the decision maker to choose the appropriate level of adaptability. This flexibility may be particularly crucial for large-scale problems, where the nominal formulation is already on the border of what is currently tractable.

The difficulty of K-adaptability lies in finding a suitable partition of the uncertainty set. In general, computing the optimal partition, even into two regions, is NP-hard. However, we also have the following positive complexity result: if any one of the three quantities - (a) the dimension of the uncertainty, (b) the dimension of the decision space, or (c) the number of uncertain constraints - is small, then computing the optimal 2-piecewise constant second-stage policy can be done efficiently.

## 4.4 Other Piecewise Decision Rule Designs

Decision rules have been established as the preferred solution method for addressing multistage adaptive optimization problems. However, the previously mentioned methods cannot efficiently design binary decision rules for multistage problems with a large number of stages, and most of those decision rules are constrained by their a priori design.
In 2014, Dimitris Bertsimas and Angelos Georghiou proposed a new design approach to address those problems. In their papers, they derive the structure of optimal decision rules involving continuous and binary variables as piecewise linear and piecewise constant functions, respectively. They then propose a methodology for the optimal design of such decision rules with finitely many pieces and solve the problem using mixed-integer optimization. For more details, you can find the paper here.

# 5 An Example on Inventory Management

This example shows how to apply the ARC framework to formulate a problem. Consider a simple inventory problem in which a retailer needs to order goods to satisfy demand from his customers while incurring the lowest total cost, which is the sum of the costs for ordering ($$c_t$$), holding ($$h_t$$), and backlogging ($$b_t$$) over a finite time horizon.

First, we can formulate a deterministic model in which implementable and auxiliary decisions can be identified:
$\min_{x_t,s_t^+,s_t^-} \sum^T_{t=1}(c_t x_t+h_t s_t^+ +b_t s_t^-)$
$s.t.\; s^+_t\geq 0,\; s^-_t \geq 0, \;\forall t$
$s^+_t \geq y_1+\sum^t_{t'=1} (x_{t'}-d_{t'}),\;\forall t$
$s^-_t \geq -y_1+ \sum^t_{t'=1}(d_{t'}-x_{t'}),\;\forall t$
$0\leq x_t\leq M_t,\;\forall t$

Here,
* $$x_t$$ is the number of goods ordered at time t and received at t+1;
* $$y_t$$ is the number of goods in stock at the beginning of time t;
* $$d_t$$ is the demand between time t and t+1;
* $$M_t$$ is the maximum order size;
* $$s_t^+$$ is the amount of goods held in storage during stage t;
* $$s_t^-$$ is the amount of backlogged customer demand during stage t.

Then, one might consider that at each point in time, the inventory manager can observe all of the prior demand before placing an order for the next stage. This could apply, for instance, to the case when the entire unmet demand from customers is backlogged, so censoring of observations never arises. The sequence of decision variables and observations is then as depicted in the figure of [5]. From that figure, it is clear that $$s^+_t$$ and $$s^-_t$$ are auxiliary variables that are adjustable with respect to the full uncertainty vector $$d_{[T]} =d$$. Once $$d_{[T]}$$ is revealed, the uncertainty can be considered reduced to zero. Hence, we can reformulate the previous problem using a multistage ARC model:
$\min_{x_1,\{x_t(\cdot)\}^T_{t=2},\{s_t^+(\cdot),s_t^-(\cdot)\}^T_{t=1}} \max_{d\in U}\sum^T_{t=1}(c_t x_t(d_{[t-1]})+h_t s_t^+(d) +b_t s_t^-(d))$
$s.t.\; s^+_t(d)\geq 0,\; s^-_t (d)\geq 0, \;\forall t,\;\forall d\in U$
$s^+_t(d) \geq y_1+\sum^t_{t'=1} (x_{t'}(d_{[t'-1]})-d_{t'}),\;\forall t,\;\forall d\in U$
$s^-_t (d)\geq -y_1+ \sum^t_{t'=1}(d_{t'}-x_{t'}(d_{[t'-1]})),\;\forall t,\;\forall d\in U$
$0\leq x_t(d_{[t-1]})\leq M_t,\;\forall t,\;\forall d\in U$
where $$U \subseteq R^T$$. For more about stochastic optimization, see here.

# 6 References

[1] Bertsimas, Dimitris; Brown, David; Caramanis, Constantine, "Theory and Applications of Robust Optimization," Oct 2010. [Online]. Available: https://ui.adsabs.harvard.edu/abs/2010arXiv1010.5445B.

[2] W. S. Choe, "Adaptive robust optimization," 2015. [Online]. Available: https://optimization.mccormick.northwestern.edu/index.php/Adaptive_robust_optimization. [Accessed 13 April 2019].

[3] C. Gounaris, "Modern Robust Optimization: Opportunities for Enterprise-Wide Optimization," Dept. of Chemical Engineering and Center for Advanced Process Decision-making, Carnegie Mellon University, 7 September 2017. [Online]. Available: http://egon.cheme.cmu.edu/ewo/docs/EWO_Seminar_09_07_2017.pdf. [Accessed 13 April 2019].

[4] A. Ben-Tal, L.
El Ghaoui, and A. Nemirovski, Robust Optimization, Princeton University Press, 2009.

[5] Erick Delage and Dan A. Iancu, "Robust Multistage Decision Making," in INFORMS TutORials in Operations Research. Published online: 26 Oct 2015; 20-46. https://doi.org/10.1287/educ.2015.0139
2022-01-18 04:45:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7678454518318176, "perplexity": 735.6067109847263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300722.91/warc/CC-MAIN-20220118032342-20220118062342-00691.warc.gz"}
https://deepai.org/publication/online-stochastic-optimization-with-wasserstein-based-non-stationarity
# Online Stochastic Optimization with Wasserstein Based Non-stationarity

We consider a general online stochastic optimization problem with multiple budget constraints over a horizon of finite time periods. At each time period, a reward function and multiple cost functions, where each cost function is involved in the consumption of one corresponding budget, are drawn from an unknown distribution, which is assumed to be non-stationary across time. Then, a decision maker needs to specify an action from a convex and compact action set to collect the reward, and the consumption of each budget is determined jointly by the cost functions and the taken action. The objective of the decision maker is to maximize the cumulative reward subject to the budget constraints. Our model captures a wide range of applications including online linear programming and network revenue management, among others. In this paper, we design near-optimal policies for the decision maker under the following two specific settings: a data-driven setting, where the decision maker is given prior estimates of the distributions beforehand, and a no-information setting, where the distributions are completely unknown to the decision maker. Under each setting, we propose a new Wasserstein-distance-based measure to quantify the non-stationarity of the distributions at different time periods and show that this measure leads to a necessary and sufficient condition for the attainability of a sublinear regret. For the first setting, we propose a new algorithm which blends gradient descent steps with the prior estimates. We then adapt our algorithm for the second setting and propose another gradient-descent-based algorithm. We show that under both settings, our policies achieve a regret upper bound of optimal order. Moreover, our policies can be naturally incorporated with a re-solving procedure which further boosts the empirical performance in numerical experiments.
## 1 Introduction

In this paper, we study a general online stochastic optimization problem with budgets, each with an initial capacity, over a horizon of finite discrete time periods. At each time period, a reward function and a cost function are drawn independently from a distribution. Then the decision maker should specify a decision from a set that is assumed to be convex and compact. Accordingly, a reward is generated, and each budget is consumed by an amount determined jointly by the corresponding cost function and the decision. The decision maker's objective is to maximize the total generated reward subject to the budget capacity constraints.

Our formulation generalizes several existing problems studied in the literature. When the reward and cost functions are linear at every time period, our formulation reduces to the online linear programming (OLP) problem (Buchbinder and Naor, 2009). Our formulation can also be applied to the network revenue management (NRM) problem (Talluri and Van Ryzin, 2006), including the quantity-based model, the price-based model, and the choice-based model (Talluri and Van Ryzin, 2004) (see detailed discussions in Section ...). Note that in the OLP problem, the reward function and cost functions are assumed to be drawn from an unknown distribution which is stationary across time (Li and Ye, 2019), while in the NRM problem, the distribution is usually assumed to be known to the decision maker, though it can be non-stationary across time (Talluri and Van Ryzin, 2006). In this paper, we assume an unknown non-stationary input, i.e., the reward and cost functions are drawn from an unknown distribution which is non-stationary across time. More specifically, we consider the following two settings: a data-driven setting, where there exists an available prior estimate to approximate the true distribution at each time period, and a no-information setting, where the distribution at each time period is completely unknown to the decision maker. Note that the first setting reduces to the known non-stationary setting of the NRM problem when the prior estimates are identical to the true distributions, while the second setting reduces to the unknown stationary setting of the OLP problem when the distributions at all periods are identical to each other.

Though we consider an unknown non-stationary input, it may be too pessimistic to consider an adversarial setting where the distribution at each time period could be arbitrarily chosen. Moreover, in our first setting, where prior estimates of the distributions exist, the estimates are usually "close" to the true distributions. Thus, we assume that for each setting, the true distributions fall into an uncertainty set, which controls the non-stationarity or estimate ambiguity of the distributions. Our goal is to derive near-optimal policies for both settings which perform well over the uncertainty set. We compare the performance of our policies to the so-called "offline" optimization problem, i.e., maximizing the objective function with full knowledge of all the reward and cost functions. Moreover, we use the worst-case regret to measure the performance of our policies over the uncertainty set, defined as the maximal difference between the expected value of the "offline" problem and the expected reward collected by the policy, over all distributions in the uncertainty set. The formal definitions will be provided in the next section after introducing the notations and formulations.
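In symbols (the notation here is illustrative, since the inline math was lost in extraction): writing $x_t$ for the decision at period $t$, $f_t$ and $g_t=(g_{t,1},\dots,g_{t,m})$ for the reward and cost functions, and $B_j$ for the capacity of budget $j$, the offline benchmark described above can be sketched as

$$\max_{x_1,\dots,x_T\in\mathcal{X}}\ \sum_{t=1}^{T} f_t(x_t)\quad \text{s.t.}\quad \sum_{t=1}^{T} g_{t,j}(x_t)\le B_j,\ \ j=1,\dots,m.$$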
### 1.1 Main Results and Contributions

For our first, data-driven setting, we assume the availability of a prior estimate of the distribution of the arrival input at each time period. We propose a new non-stationarity measure, defined as the cumulative Wasserstein distance between the prior estimates and the true distributions over the horizon, and we name this new measure the Wasserstein-based non-stationarity budget with prior estimate (WBNB-P). Then, we introduce an uncertainty set based on WBNB-P, driven by a parameter called the variation budget: the set covers all the arrival inputs whose WBNB-P is no greater than the variation budget. We illustrate the sharpness of our WBNB-P by showing that if the variation budget is linear in the horizon length, a sublinear regret cannot be achieved by any policy. Note that the Wasserstein distance has been widely used as a measure of ambiguity in the distributionally robust optimization literature (e.g., Esfahani and Kuhn (2018)) for its power to represent confidence sets and its great performance, both theoretically and empirically. To the best of our knowledge, we are the first to propose its use in online optimization to measure estimate ambiguity (or non-stationarity, in the second setting).

We develop a new gradient-based algorithm that adjusts the gradient descent direction according to the prior estimates. Our algorithm is motivated by the traditional online gradient descent (OGD) algorithm (Hazan, 2016), which applies a linear update rule according to the gradient of the functions at the current time period. Note that the OGD algorithm uses only historical information in every step, and it has been shown to work well in a stationary setting, even when the distribution is unknown (Lu et al., 2020; Sun et al., 2020; Li et al., 2020). In a non-stationary setting, we have to make use of the prior estimates of the future time periods to guide the budget consumption. For that purpose, we develop a new gradient descent algorithm which combines the linear update at each time period with the offline convex relaxation obtained over the prior estimates. We show that our algorithm achieves a regret bound of optimal order. Note that even for the special case where the prior estimate is identical to the true distribution at each time period, i.e., a known non-stationary setting, our regret bound turns out to be new. A similar result for this setting is known only from Devanur et al. (2019), in the form of a competitive ratio that depends on the minimal capacity of the budget constraints, under the assumption that the reward and cost functions are all linear. However, their result on the competitive ratio does not translate to a result on regret. It is also not clear how to generalize their method to the setting where the true distributions are unknown and there is estimate ambiguity. Our algorithm and analysis are totally different from theirs. Specifically, their method is based on the concentration properties of the arrival process and applies Chernoff-type inequalities to derive high-probability bounds. In contrast, our approach is based on applying an adjusted gradient descent step to balance the budget consumption. We show that the budget consumption on every sample path can be represented by certain dual variables, and our update rule ensures that these dual variables are bounded almost surely. In this way, we provide a new methodology for analyzing online optimization problems in a non-stationary environment.
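To convey the flavor of such gradient-based budget control, here is a schematic Python sketch of a plain dual-descent loop for the budget-constrained problem (the names, the grid-search primal step, and the update rule are illustrative simplifications, not the paper's exact algorithm, which additionally blends in the prior estimates):

```python
import numpy as np

def online_dual_descent(rewards, costs, B, eta, grid=None):
    """rewards[t], costs[t]: callables mapping an action x to f_t(x) and
    g_t(x) (a length-m numpy array); B: budget capacities; eta: step size."""
    T, m = len(rewards), len(B)
    mu = np.zeros(m)                       # dual prices on the budgets
    rho = np.asarray(B, dtype=float) / T   # target per-period consumption
    remaining = np.asarray(B, dtype=float)
    total = 0.0
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)  # stand-in for the action set
    for t in range(T):
        # Primal step: best response against the current dual prices.
        x = max(grid, key=lambda a: rewards[t](a) - mu @ costs[t](a))
        g = costs[t](x)
        if np.all(g <= remaining):         # act only if the budgets allow it
            total += rewards[t](x)
            remaining -= g
        # Dual step: raise the price of budgets consumed above their target.
        mu = np.maximum(mu + eta * (g - rho), 0.0)
    return total
```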
For the second setting, where no prior estimates of the distributions are available, we modify the WBNB-P by replacing the prior estimate of each distribution with the uniform mixture distribution $\bar{\mathcal{P}}_T = \frac{1}{T}\sum_{t=1}^T \mathcal{P}_t$ in the Wasserstein distance. The resulting Wasserstein-distance measure can then be regarded as a measure of the non-stationarity of the distributions, and we formulate the uncertainty set accordingly with a variation budget. In this case, the offline convex relaxation admits a trivial solution (capacity should be allocated evenly across time), and our adjusted gradient descent algorithm reduces to the classical gradient descent algorithm. We prove that our algorithm achieves a regret bound of optimal order, even when the distributions are chosen adversarially over the uncertainty set.

Note that there is a stream of literature studying non-stationary online optimization without budget constraints (Besbes et al., 2015; Cheung et al., 2019), which also constructs the uncertainty set via a variation budget. However, their non-stationarity measure concerns the temporal change of the distributions over time. We provide an example in Section 4.1 showing that such a measure fails in a budget-constrained setting. This motivates us to propose a measure based on the global change of the distributions, i.e., comparing each $\mathcal{P}_t$ with the uniform mixture distribution. An independent work (Balseiro et al., 2020) also considers using the global change of the distributions to derive a measure of non-stationarity. However, their measure is based on the total variation metric between distributions. By illustrating the advantage of the Wasserstein distance over the total variation distance or the KL-divergence through a simple example in Section 4.2, we show that our measure is sharper, and we establish the suitability of the Wasserstein-based non-stationarity measure.

Finally, to the best of our knowledge, our model is new compared to the existing literature. Our measures in both settings can be applied universally to various online linear programming and network revenue management formulations, and they thus fill the gap between the studies of these problems in the stochastic setting and the adversarial setting. Specifically, for the first setting, the prior knowledge can be obtained from historical data, and its presence is aligned with the settings in the network revenue management literature (Talluri and Van Ryzin, 2004; Gallego et al., 2019). However, the network revenue management literature always assumes precise knowledge of the true input, while our paper allows the prior estimate to deviate from the true distribution. The deviation can be interpreted as an estimation or model misspecification error. Thus our results in the first setting generalize this line of literature. For the second setting, the assumption of no available prior knowledge is consistent with the setting of the online linear programming problem (Molinaro and Ravi, 2013; Agrawal et al., 2014; Gupta and Molinaro, 2014) and the setting of blind network revenue management (Besbes and Zeevi, 2012; Jasin, 2015). For the online linear programming problem, the literature studies either the stochastic setting or the random permutation setting, and for blind network revenue management, the literature focuses only on the stochastic setting. Compared to these two streams of literature, our results in the second setting relax the stochastic assumption in a non-stationary (more adversarial, but not fully adversarial) manner.
From a modeling perspective, our work contributes to the study of non-stationary environments in online learning/optimization. This line of literature has mainly been concerned with unconstrained settings, such as the unconstrained online optimization problem (Besbes et al., 2015), the bandits problem (Garivier and Moulines, 2008; Besbes et al., 2014), and the reinforcement learning problem (Cheung et al., 2019; Lecarpentier and Rachelson, 2019). Our notion of Wasserstein-based non-stationarity adds to the current dictionary of non-stationarity definitions, and it specializes in characterizing the constrained setting.

### 1.2 Literature review

The formulation of online stochastic optimization studied in this paper is rooted in two major applications: the online linear programming (LP) problem and the network revenue management problem. We briefly review these two streams of literature as follows.

The online LP problem (Molinaro and Ravi, 2013; Agrawal et al., 2014; Gupta and Molinaro, 2014) covers a wide range of applications through different ways of specifying the underlying LP, including the secretary problem (Ferguson, 1989), the knapsack problem (Kellerer et al., 2003), the resource allocation problem (Vanderbei, 2015), the quantity-based network revenue management (NRM) problem (Jasin, 2015), the generalized assignment problem (Conforti et al., 2014), the network routing problem (Buchbinder and Naor, 2009), the matching problem (Mehta et al., 2005), etc. Notably, the problem has been studied under either the stochastic input model, where the coefficient in the objective function, together with the corresponding column in the constraint matrix, is drawn from an unknown distribution, or the random permutation model, where they arrive in a random order. As noted in Li et al. (2020), the random permutation model exhibits concentration behavior similar to the stochastic input model. The non-stationary setting of our paper relaxes the i.i.d. structure, and it can be viewed as a third paradigm for analyzing the online LP problem.

The network revenue management (NRM) problem has been extensively studied in the literature, and a main focus is to propose near-optimal policies with strong theoretical guarantees. One popular approach is to construct a linear program that upper-bounds the optimal revenue and to use its optimal solution to derive heuristic policies. Specifically, Gallego and Van Ryzin (1994) propose a static bid-price policy based on the dual variables of the linear programming upper bound and prove that the revenue loss is $O(\sqrt{k})$ when each period is repeated $k$ times and the capacities are scaled by $k$. Subsequently, Reiman and Wang (2008) show that by re-solving the linear programming upper bound once, one can obtain an $o(\sqrt{k})$ upper bound on the revenue loss. A follow-up work (Jasin and Kumar, 2012) shows that under a so-called "non-degeneracy" assumption, a policy that re-solves the linear programming upper bound at each time period leads to an $O(1)$ revenue loss, independent of the scaling factor $k$. The relationship between the performance of the control policies and the number of times the linear programming upper bound is re-solved is further discussed in their later paper (Jasin and Kumar, 2013). Recently, Bumpensanti and Wang (2020) propose an infrequent re-solving policy and show that their policy achieves an $O(1)$ upper bound on the revenue loss even without the "non-degeneracy" assumption.
With a different approach, Vera and Banerjee (2020) prove the same $O(1)$ upper bound for the NRM problem, and their approach generalizes their previous work (Vera et al., 2019) on other online decision-making problems, including online stochastic knapsack, online probing, and dynamic pricing. Note that all the approaches mentioned above are mainly developed for the stochastic/stationary setting. When the arrival process of customers is non-stationary over time, Adelman (2007) develops a strong heuristic based on a novel approximate dynamic programming (DP) approach. This approach is further investigated under various settings in the literature (for example, Zhang and Adelman (2009) and Kunnumkal and Talluri (2016)). Remarkably, although the approximate DP heuristic is one of the strongest heuristics in practice, it does not come with a theoretical bound. Finally, by using non-linear basis functions to approximate the value function of the DP, Ma et al. (2020) develop a novel approximate DP policy and derive a constant competitive ratio for their policy, which depends on the problem parameters.

## 2 Problem Formulation

Consider the following convex optimization problem

$$\begin{aligned} \max \quad & \sum_{t=1}^T f_t(x_t) & \text{(CP)}\\ \text{s.t.} \quad & \sum_{t=1}^T g_{it}(x_t) \le c_i, \quad i=1,\dots,m,\\ & x_t \in \mathcal{X}, \quad t=1,\dots,T, \end{aligned}$$

where the decision variables are $x_t$ for $t=1,\dots,T$. Here $\mathcal{X}$ is a compact convex set in $\mathbb{R}^n$. The functions $f_t$ belong to the space of concave continuous functions and the functions $g_{it}$ belong to the space of convex continuous functions, both supported on $\mathcal{X}$. We define the vector-valued function $g_t = (g_{1t},\dots,g_{mt})^\top$. Throughout the paper, we use $i$ to index the constraints and $t$ (or sometimes $s$) to index the decision variables, and we use bold symbols to denote vectors/matrices and normal symbols for scalars.

In this paper, we study the online stochastic optimization problem where the functions in (CP) are revealed in an online fashion and one needs to determine the values of the decision variables sequentially. Specifically, at each time $t$, the functions $(f_t, g_t)$ are revealed, and we need to decide the value of $x_t$ instantly. Different from the offline setting, at time $t$ we do not have information about the future part of the optimization problem. Given the history $\mathcal{H}_{t-1} = \{f_s, g_s, x_s\}_{s=1}^{t-1}$, the decision $x_t$ can be expressed as a policy function of the history and the observation at the current time period. That is,

$$x_t = \pi_t(f_t, g_t, \mathcal{H}_{t-1}). \qquad (1)$$

The policy function can be time-dependent, and we denote the policy by $\pi = (\pi_1,\dots,\pi_T)$. The decision variables must conform to the constraints in (CP) throughout the procedure, and the objective is aligned with the maximization objective of the offline problem (CP).

### 2.1 Parameterized Form, Probability Space, and Assumptions

Consider a parametric form of the underlying problem (CP) where the functions are parameterized by a parameter $\theta_t \in \Theta$. Specifically,

$$f_t(x_t) \coloneqq f(x_t;\theta_t), \qquad g_{it}(x_t) \coloneqq g_i(x_t;\theta_t)$$

for each $i=1,\dots,m$ and $t=1,\dots,T$. The function $f$ is concave in its first argument, while the functions $g_i$ are convex in their first argument. We define the vector-valued function $g = (g_1,\dots,g_m)^\top$. Then the problem (CP) can be rewritten as the following parameterized convex program

$$\begin{aligned} \max \quad & \sum_{t=1}^T f(x_t;\theta_t) & \text{(PCP)}\\ \text{s.t.} \quad & \sum_{t=1}^T g_i(x_t;\theta_t) \le c_i, \quad i=1,\dots,m,\\ & x_t \in \mathcal{X}, \quad t=1,\dots,T, \end{aligned}$$

where the decision variables are $x_1,\dots,x_T$. We note that this parametric form (PCP) is introduced mainly for presentation purposes, since it avoids the complication of defining a probability measure on a function space, and it does not change the nature of the problem. We assume knowledge of $f$ and $g$ a priori. Here and hereafter, we will use (PCP) as the underlying form of the online stochastic optimization problem.
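To fix ideas, the following Python sketch simulates this first-observe-then-decide protocol for a toy parameterization of (PCP); the distributions, the linear choices of $f$ and $g$, and the placeholder threshold policy are all our own illustrative assumptions, not the paper's specification.

```python
import numpy as np

# Minimal sketch of the online protocol behind (PCP); all model
# ingredients below are illustrative assumptions.
T, m = 100, 2                          # horizon and number of budgets
c = np.array([30.0, 40.0])             # initial capacities c_i
rng = np.random.default_rng(0)

def f(x, theta):                       # reward f(x; theta); linear, hence concave, in x
    return theta[0] * x

def g(x, theta):                       # cost vector g(x; theta); linear, hence convex, in x
    return theta[1:] * x

def policy(theta, remaining, periods_left):
    # Placeholder policy: accept fully when the reward coefficient is high
    # and the remaining budget still covers twice the average pace.
    ok = np.all(theta[1:] <= 2.0 * remaining / max(periods_left, 1))
    return 1.0 if (theta[0] > 0.5 and ok) else 0.0

remaining, total_reward = c.copy(), 0.0
for t in range(T):
    theta_t = rng.uniform(0.0, 1.0, size=1 + m)   # theta_t ~ P_t (toy choice)
    x_t = policy(theta_t, remaining, T - t)       # observe theta_t first, then decide
    cost = g(x_t, theta_t)
    if np.all(cost <= remaining):                 # keep the constraints of (CP) feasible
        remaining -= cost
        total_reward += f(x_t, theta_t)

print(total_reward, remaining)
```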
The problem of online stochastic optimization, as its name suggests, involves stochasticity in the functions of the underlying optimization problem. The parametric form (PCP) reduces the randomness from the functions to the parameters $\theta_t$'s, and therefore the probability measure can be defined on the parameter space $\Theta$. First, we consider the following distance function between two parameters $\theta, \theta' \in \Theta$,

$$\rho(\theta,\theta') \coloneqq \sup_{x\in\mathcal{X}} \big\|\big(f(x,\theta), g(x,\theta)\big) - \big(f(x,\theta'), g(x,\theta')\big)\big\|_\infty \qquad (2)$$

where $\|\cdot\|_\infty$ is the $L_\infty$ norm in $\mathbb{R}^{m+1}$. Without loss of generality, let $\Theta$ be a set of class representatives, that is, $\rho(\theta,\theta') > 0$ for any $\theta \neq \theta'$. In this way, the parameter space $\Theta$ can be viewed as a metric space equipped with the metric $\rho$. Also, note that we define the metric based on the vector-valued function $(f,g)$, instead of using a metric in the parameter space itself (such as the Euclidean distance). This is because the main focus is on the effect of different parameters on the function values rather than on the original Euclidean difference in the parameter space. Let $\mathcal{F}$ be the smallest $\sigma$-algebra in $\Theta$ that contains all open subsets (under the metric $\rho$) of $\Theta$. We denote the distribution of $\theta_t$ as $\mathcal{P}_t$, and $\mathcal{P}_t$ can thus be viewed as a probability measure on $(\Theta,\mathcal{F})$.

Throughout the paper, we make the following assumptions. Assumptions 1(a) and 1(b) impose boundedness on the functions $f$ and $g_i$'s. Assumption 1(c) states that the ratio between the reward and the total resource consumption is uniformly bounded by $\Lambda$ for all $x$ and $\theta$. Intuitively, it says that for each unit consumption of resource, the maximum amount of revenue earned is upper bounded by $\Lambda$. In Assumption 1(d), we assume the $\theta_t$'s are independent of each other, but we do not assume exact knowledge of their distributions. However, there can be dependence between the components of the vector-valued function $(f(\cdot;\theta_t), g(\cdot;\theta_t))$.

###### Assumption 1 (Boundedness and Independence)

We assume

- $f(x;\theta) \in [0,\bar{f}]$ for all $x\in\mathcal{X}$ and $\theta\in\Theta$;
- $g_i(x;\theta) \in [0,\bar{g}]$ for all $x\in\mathcal{X}$, $\theta\in\Theta$, and $i=1,\dots,m$; in particular, $0\in\mathcal{X}$ and $g(0;\theta) = \mathbf{0}$ for all $\theta\in\Theta$;
- there exists a positive constant $\Lambda$ such that for any $\theta\in\Theta$ and each $x\in\mathcal{X}$, $f(x;\theta) \le \Lambda\cdot\sum_{i=1}^m g_i(x;\theta)$ holds as long as $\sum_{i=1}^m g_i(x;\theta) > 0$;
- $\theta_1,\dots,\theta_T$ are independent of each other. We do not assume knowledge of the distributions $\mathcal{P}_t$'s.

In the following, we illustrate the online formulation through two application contexts: online linear programming and online network revenue management. We choose the more general convex formulation (PCP) with the aim of uncovering the key mathematical structures of this online optimization problem, but we will occasionally return to these two examples to build intuition throughout the paper.

### 2.2 Examples

Online linear programming (LP): The online LP problem (Molinaro and Ravi, 2013; Agrawal et al., 2014; Gupta and Molinaro, 2014) can be viewed as an example of the online stochastic optimization formulation (CP). Specifically, the decision variable $x_t$ is a scalar, the functions $f_t$ and $g_{it}$ are linear, and the parameter $\theta_t = (r_t, a_t)$ where $a_t = (a_{1t},\dots,a_{mt})^\top$. Specifically, $f(x_t;\theta_t) = r_t x_t$ and $g_i(x_t;\theta_t) = a_{it} x_t$. At each time $t$, the coefficient $r_t$ in the objective, together with the corresponding column $a_t$ of the constraint matrix, is revealed, and one needs to determine the value of $x_t$ immediately.

Price-based network revenue management (NRM): In the price-based NRM problem (Gallego and Van Ryzin, 1994), a firm sells a given stock of products over a finite time horizon by posting a price at each time. The demand is price-sensitive, and the firm's objective is to maximize the total collected revenue. This problem can be cast in the formulation (PCP). Specifically, the parameter $\theta_t$ refers to the type of the $t$-th arriving customer, and the decision variable $x_t$ represents the price posted by the decision maker at time $t$. Accordingly, $g(x_t;\theta_t)$ denotes the resource consumption under the price $x_t$, and $f(x_t;\theta_t)$ denotes the collected revenue.
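As a concrete (and purely illustrative) instance of this price-based mapping, the snippet below encodes a linear-demand customer type; the demand model and all numbers are our assumptions rather than the paper's.

```python
import numpy as np

A = np.array([[1.0], [2.0]])       # per-unit resource consumption (m = 2 resources)

def demand(price, theta):
    # A customer of type theta buys max(theta - price, 0) units (toy model).
    return max(theta - price, 0.0)

def f(price, theta):               # collected revenue at the posted price
    return price * demand(price, theta)

def g(price, theta):               # resource consumption vector at that price
    return (A * demand(price, theta)).ravel()

# A type-0.9 customer facing price 0.4 buys 0.5 units:
print(f(0.4, 0.9), g(0.4, 0.9))    # -> 0.2 [0.5 1. ]
```

For prices in $[0,\theta]$ this toy revenue is concave in the price, matching the concavity required by (PCP).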
Choice-based network revenue management: In the choice-based NRM problem (Talluri and Van Ryzin, 2004), the seller offers an assortment of products to the customer arriving in each time period, and the customer chooses a product from the assortment to purchase according to a given choice model. The formulation (PCP) can model the choice-based NRM problem as a special case by allowing $f(x;\theta)$ and $g(x;\theta)$ to be random variables for each given $x$ and $\theta$. Specifically, for each $t$, $x_t$ refers to the assortment offered at time $t$ and $\theta_t$ denotes the customer type. Then $f(x_t;\theta_t)$ denotes the revenue collected by offering assortment $x_t$, and $g(x_t;\theta_t)$ denotes the corresponding resource consumption, where $f$ and $g$ are both stochastic and their distributions follow the choice model of a customer of type $\theta_t$. Note that although in the following sections we only analyze the case where $f(x;\theta)$ and $g(x;\theta)$ are deterministic for each given $x$ and $\theta$, our analysis and results generalize directly to the case where $f$ and $g$ are random and follow known distributions.

### 2.3 Performance Measure

We denote the offline optimal solution of the optimization problem (CP) as $x^* = (x_1^*,\dots,x_T^*)$, and the offline (online) objective value as $R_T^*$ ($R_T(\pi)$). Specifically,

$$R_T^* \coloneqq \sum_{t=1}^T f_t(x_t^*), \qquad R_T(\pi) \coloneqq \sum_{t=1}^T f_t(x_t),$$

in which the online objective value depends on the policy $\pi$. Aligned with the general online learning/optimization literature, we focus on minimizing the gap between the online and offline objective values. Specifically, the optimality gap is defined as follows:

$$\text{Reg}_T(\mathcal{H}, \pi) \coloneqq R_T^* - R_T(\pi)$$

where the problem profile $\mathcal{H}$ encapsulates a random realization of the parameters, i.e., $\mathcal{H} = \{\theta_1,\dots,\theta_T\}$. Note that $R_T^*$, $R_T(\pi)$, and $x^*$ all depend on the problem profile as well, but we omit $\mathcal{H}$ in these terms for notational simplicity, without ambiguity. We define the performance measure of the online stochastic optimization problem formally as the regret

$$\text{Reg}_T(\pi) \coloneqq \max_{\mathcal{P}\in\Xi}\; \mathbb{E}_{\mathcal{H}\sim\mathcal{P}}\big[\text{Reg}_T(\mathcal{H},\pi)\big] \qquad (3)$$

where $\mathcal{P} = \mathcal{P}_1\otimes\cdots\otimes\mathcal{P}_T$ denotes the probability measure over all time periods and the expectation is taken with respect to the parameters $\theta_t \sim \mathcal{P}_t$; compactly, the problem profile $\mathcal{H} \sim \mathcal{P}$. We consider the worst-case regret over all distributions in a certain set $\Xi$, where the set will be specified in later sections.

We conclude this section with a few comments on our formulation of the online stochastic optimization problem. Generally speaking, the problem of online learning/optimization with constraints falls into two categories: (i) first-observe-then-decide and (ii) first-decide-then-observe. Our formulation belongs to the first category in that at each time $t$, the decision maker first observes the parameter $\theta_t$, and hence the functions $f(\cdot;\theta_t)$ and $g(\cdot;\theta_t)$, and then determines the value of $x_t$. In many application contexts of operations research and operations management, the observations represent customers/orders arriving sequentially to the system, and the decision variables capture the corresponding acceptance/rejection/pricing decisions. The problems discussed earlier, such as matching, resource allocation, and network revenue management, all fall into this category. For the second category, the representative problems are bandits with knapsacks (Badanidiyuru et al., 2013) and online convex optimization (Hazan, 2016), where the decision is made first and the observation arrives after the decision. For example, in the classic bandits problem, the decision of which arm to play affects the observation, and in online convex optimization (or, more generally, the two-player game setting (Cesa-Bianchi and Lugosi, 2006)), the "nature" may even choose the function against our decision in an adversarial manner.
There is a line of literature on online convex optimization with constraints, namely the OCOwC problem (Mahdavi et al., 2012; Yu et al., 2017; Yuan and Lamperski, 2018). While the same underlying optimization problem (CP) is used in our formulation and in the OCOwC problem, a key distinction is which of the decision or the observation comes first. Our formulation allows observing $\theta_t$ before making the decision, and it thus enables us to adopt a stronger benchmark (as in the definition of $R_T^*$), that is, a dynamic oracle that permits a different value of $x_t$ over different time periods. In contrast, the OCOwC problem requires making the decision before observing the functions, and thus it considers a weaker benchmark in which the decision variables take the same value over all time periods.

We have not yet discussed conditions on the distributions $\mathcal{P}_t$ except for independence. Importantly, this is one of the main themes of our paper. The canonical setting of the online stochastic learning problem refers to the case when all the distributions are the same, i.e., $\mathcal{P}_t = \mathcal{P}$ for $t=1,\dots,T$. On the other extreme, the adversarial setting of the online learning problem refers to the case when the $\mathcal{P}_t$'s are adversarially chosen. Our work aims to bridge these two ends of the spectrum with a novel notion of non-stationarity, and we aim to relate the regret of the problem to a structural property of $(\mathcal{P}_1,\dots,\mathcal{P}_T)$. In the same spirit, the work on non-stationary stochastic optimization (Besbes et al., 2015) proposes an elegant notion of non-stationarity called the variation budget. Subsequent works consider similar notions in the settings of bandits (Besbes et al., 2014; Russac et al., 2019) and reinforcement learning (Cheung et al., 2019). To the best of our knowledge, all previous works along this line consider unconstrained settings, and thus our work contributes to this line of research by illustrating how the constraints interact with the non-stationarity. We will return to this point later in the paper.

## 3 Algorithm and Motivation

### 3.1 Benchmark Upper Bounds and Main Algorithm

In this section, we motivate and present the prototype of the main algorithm. To begin with, we first establish two useful upper bounds for the expected optimal reward $\mathbb{E}[R_T^*]$. The derivation of the first upper bound is standard in online decision-making problems, and it is also known as the deterministic upper bound or the prophet benchmark (for example, see Jasin and Kumar (2012)). The motivation for such an upper bound is that the offline optimum obtained by solving (PCP) often has a complex structure and is thus very hard to analyze. Comparatively, the proposed upper bound features better tractability and provides a good starting point for algorithm design and analysis. For a function $u(x;\theta)$ and a probability measure $\mathcal{P}$ on the parameter space $(\Theta,\mathcal{F})$, we introduce the following notation

$$\mathcal{P}u(x(\theta)) \coloneqq \int_{\theta\in\Theta} u(x(\theta);\theta)\, d\mathcal{P}(\theta)$$

where $x(\theta):\Theta\to\mathcal{X}$ is a measurable function. Thus $\mathcal{P}u$ can be viewed as a deterministic functional that maps the function $x(\theta)$ to a real value, obtained by taking the expectation with respect to the parameter $\theta$. Consider the following optimization problem

$$\begin{aligned} R_T^{\mathrm{UB}} = \max \quad & \sum_{t=1}^T \mathcal{P}_t f(x_t(\theta)) \qquad (4)\\ \text{s.t.} \quad & \sum_{t=1}^T \mathcal{P}_t g_i(x_t(\theta)) \le c_i, \quad i=1,\dots,m,\\ & x_t(\theta):\Theta\to\mathcal{X} \text{ is a measurable function for } t=1,\dots,T. \end{aligned}$$

The optimization problem (4) can be viewed as a deterministic relaxation of (PCP), in which the objective and the constraints are replaced with their expected counterparts and the constraints are only required to be satisfied in expectation. In the following, Lemma 1 shows that the optimal objective value $R_T^{\mathrm{UB}}$ is an upper bound for $\mathbb{E}[R_T^*]$, formally establishing $R_T^{\mathrm{UB}}$ as a surrogate benchmark when analyzing the regret.
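When the parameter support is finite and $f,g$ are linear (the online LP example), problem (4) is an ordinary finite-dimensional LP. The following sketch computes $R_T^{\mathrm{UB}}$ for a toy instance; the instance itself and the use of scipy are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

T, K, m = 50, 3, 2                          # horizon, support size, budgets
r = np.array([1.0, 2.0, 3.0])               # reward of each type theta_j
a = np.array([[1.0, 1.0, 2.0],              # a[i, j]: resource-i cost of type j
              [2.0, 1.0, 1.0]])
c = np.array([40.0, 40.0])
P = np.full((T, K), 1.0 / K)                # P_t(theta_j); uniform here

# Variables x[t, j] = x_t(theta_j) in [0, 1], flattened to length T*K.
obj = -(P * r).ravel()                      # linprog minimizes, so negate
A_ub = np.vstack([(P * a[i]).ravel() for i in range(m)])
res = linprog(obj, A_ub=A_ub, b_ub=c, bounds=[(0, 1)] * (T * K))
print("R_T^UB =", -res.fun)                 # upper-bounds E[R_T^*]
```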
###### Lemma 1

It holds that $\mathbb{E}[R_T^*] \le R_T^{\mathrm{UB}}$.

Now we seek a second upper bound by considering the Lagrangian function of (4),

$$L(p, x_{1:T}(\theta)) = \sum_{i=1}^m c_i p_i + \sum_{t=1}^T \mathcal{P}_t\Big(f(x_t(\theta)) - \sum_{i=1}^m p_i\cdot g_i(x_t(\theta))\Big) \qquad (5)$$

where $x_{1:T}(\theta)$ encapsulates all the primal decision variables. The primal variables are expressed in function form because for each different value of $\theta$, we allow a different choice of the primal variables. At time $t$, the parameter $\theta_t$ follows the distribution $\mathcal{P}_t$. The vector $p = (p_1,\dots,p_m)^\top$ conveys the meaning of a dual price for each type of resource, where $p_i$ is the multiplier/dual variable associated with the $i$-th constraint. It follows from weak duality that

$$R_T^{\mathrm{UB}} \le \min_{p\ge 0}\, \max_{x_{1:T}(\theta)} L(p, x_{1:T}(\theta)) \qquad (6)$$

where the maximum is taken with respect to all measurable functions that map $\Theta$ to $\mathcal{X}$. In fact, the inner maximization with respect to $x_{1:T}(\theta)$ can be carried out in a point-wise manner by defining the following function for each $\theta$,

$$h(p;\theta) \coloneqq \max_{x\in\mathcal{X}}\Big\{f(x;\theta) - \sum_{i=1}^m p_i\cdot g_i(x;\theta)\Big\}$$

where $h$ is a function of the dual variable $p$, and it is also parameterized by $\theta$. This also echoes the "first-observe-then-decide" setting, where at each time $t$ the decision maker first observes the parameter $\theta_t$ and then decides the value of $x_t$. Moreover, let

$$L(p) \coloneqq c^\top p + \sum_{t=1}^T \mathcal{P}_t h(p,\theta) \qquad (7)$$

and it holds that $L(p) = \max_{x_{1:T}(\theta)} L(p, x_{1:T}(\theta))$. Thus,

$$R_T^{\mathrm{UB}} \le \min_{p\ge 0} L(p),$$

where the right-hand side serves as the second upper bound for the problem. The above discussion is summarized in Proposition 1. The advantage of the function $L(p)$ is that it involves only the dual variable $p$, and the dual variable is not time-dependent.

###### Proposition 1

It holds that

$$\min_{p\ge 0}\, \max_{x_{1:T}(\theta)} L(p, x_{1:T}(\theta)) = \min_{p\ge 0} L(p). \qquad (8)$$

Consequently, we have the following upper bound on $\mathbb{E}[R_T^*]$,

$$\mathbb{E}[R_T^*] \le \min_{p\ge 0} L(p). \qquad (9)$$

Algorithm 1 describes a simple primal-dual gradient descent algorithm for solving the online stochastic optimization problem. Essentially, it performs online/stochastic gradient descent for minimizing $L(p)$. To see this, the expected dual gradient update (12) is in fact the gradient with respect to the $p$-component of the $t$-th term of $L$:

$$\mathbb{E}\Big[g(\tilde{x}_t;\theta_t) - \frac{c}{T}\Big] = -\frac{c}{T} + \mathcal{P}_t g(\tilde{x}_t;\theta) = -\frac{\partial}{\partial p}\Big(\frac{1}{T} c^\top p + \mathcal{P}_t h(p,\theta)\Big).$$

The first equality comes from taking the expectation with respect to $\theta_t \sim \mathcal{P}_t$, and the second comes from the definition of $\tilde{x}_t$ in Algorithm 1. Also, the right-hand side of the second equality is the negative of the gradient of the $t$-th term in $L(p)$ (by absorbing $c^\top p$ into the summation in (7)). In the algorithm, the value of the primal decision variable $x_t$ is then decided based on the value of $p_t$ and the observation $\theta_t$, as in the definition of the function $h$. Throughout the paper, we assume the optimization problem defining $h(p;\theta)$ can be solved efficiently. This implicit assumption is further discussed in Section A4.

###### Proposition 2

Under Assumption 1, if we consider the stationary set $\Xi = \{\mathcal{P}: \mathcal{P}_1 = \cdots = \mathcal{P}_T\}$, then the regret of Algorithm 1 has the following upper bound

$$\text{Reg}_T(\pi_1) \le O(\sqrt{T})$$

where $\pi_1$ stands for the policy specified by Algorithm 1.

Proposition 2 states that the regret of Algorithm 1 is $O(\sqrt{T})$ in a stationary (i.i.d.) setting where the distribution remains the same over time. We present this result mainly for benchmarking purposes, to better interpret the results in later sections. In fact, Algorithm 1 and Proposition 2 can be directly implied by several recent results on the application of gradient-based algorithms to different online stochastic optimization problems. Lu et al. (2020) propose and analyze a dual mirror descent algorithm for the online resource allocation problem under the stationary (i.i.d.) setting. Li et al. (2020) analyze a special case of Algorithm 1 for the online linear programming problem under both the stationary (i.i.d.) setting and the random permutation setting. While both works achieve an $O(\sqrt{T})$ regret in the setting where the underlying distribution is unknown, a recent work (Sun et al., 2020) considers the network revenue management problem and achieves an improved regret bound by exploiting the knowledge of the underlying distribution and the structure of the problem. Our paper generalizes the formulations in these three papers (Lu et al., 2020; Li et al., 2020; Sun et al., 2020), and the contribution of the result in this section lies mainly in illuminating the idea from the general formulation; the derivation of Algorithm 1 and the proof of Proposition 2 are not novel, and they follow a roadmap similar to the analyses therein.
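The dual update just described can be made concrete in a few lines. The sketch below instantiates one plausible reading of it for the online LP example (where $h(p;\theta)$ has a closed-form maximizer over $\mathcal{X} = [0,1]$); the toy i.i.d. input, the instance sizes, and the step size $\eta = 1/\sqrt{T}$ are our illustrative assumptions rather than the paper's prescriptions.

```python
import numpy as np

T, m = 10_000, 2
c = np.array([3000.0, 4000.0])
eta = 1.0 / np.sqrt(T)                       # typical OGD step size (assumption)
rng = np.random.default_rng(1)

p = np.zeros(m)                              # dual prices, one per budget
remaining = c.copy()
reward = 0.0
for t in range(T):
    r_t = rng.uniform(0, 1)                  # theta_t = (r_t, a_t) ~ P_t (toy)
    a_t = rng.uniform(0, 1, size=m)
    # x_tilde solves max_{x in [0,1]} (r_t - p^T a_t) x, i.e. attains h(p; theta_t):
    x_tilde = 1.0 if r_t > p @ a_t else 0.0
    x_t = x_tilde if np.all(a_t * x_tilde <= remaining) else 0.0
    remaining -= a_t * x_t
    reward += r_t * x_t
    # Dual update: gradient step on the t-th term of L, projected onto p >= 0.
    p = np.maximum(p + eta * (a_t * x_tilde - c / T), 0.0)

print(reward, remaining)
```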
## 4 Non-stationary Environment: Wasserstein-Based Distance and Analysis

In this section, we present the definition of Wasserstein-based non-stationarity and an analytical result on the performance of Algorithm 1 in a non-stationary environment. The aim of such a non-stationarity measure is to relate the best achievable algorithmic performance to the intensity of non-stationarity of the environment (the distributions $\mathcal{P}_t$'s). We will show that our notion of non-stationarity is necessitated by the presence of constraints, and it thus differs from the prevalent notion of variation budget in the unconstrained setting for online learning problems.

### 4.1 Wasserstein-Based Non-stationarity

The Wasserstein distance, also known as the Kantorovich-Rubinstein metric or the optimal transport distance (Villani, 2008; Galichon, 2018), is a distance function defined between probability distributions on a metric space. Its notion has a long history dating back a century, and it has gained increasing popularity in recent years with a wide range of applications, including generative modeling (Arjovsky et al., 2017), robust optimization (Esfahani and Kuhn, 2018), and statistical estimation (Blanchet et al., 2019). In our context, the Wasserstein distance between two probability measures $\mathcal{Q}_1$ and $\mathcal{Q}_2$ on the metric parameter space $(\Theta,\rho)$ is defined as follows,

$$W(\mathcal{Q}_1,\mathcal{Q}_2) \coloneqq \inf_{\mathcal{Q}_{1,2}\in\mathcal{J}(\mathcal{Q}_1,\mathcal{Q}_2)} \int \rho(\theta_1,\theta_2)\, d\mathcal{Q}_{1,2}(\theta_1,\theta_2) \qquad (13)$$

where $\mathcal{J}(\mathcal{Q}_1,\mathcal{Q}_2)$ denotes the set of all joint distributions for $(\theta_1,\theta_2)$ that have marginals $\mathcal{Q}_1$ and $\mathcal{Q}_2$. The distance function $\rho$ is defined earlier in (2). Now, we define the Wasserstein-based non-stationarity budget (WBNB) as

$$W_T(\mathcal{P}) \coloneqq \sum_{t=1}^T W(\mathcal{P}_t, \bar{\mathcal{P}}_T) \qquad (14)$$

where $\mathcal{P} = \mathcal{P}_1\otimes\cdots\otimes\mathcal{P}_T$ and $\bar{\mathcal{P}}_T$ is defined to be the uniform mixture distribution of $\mathcal{P}_1,\dots,\mathcal{P}_T$, i.e.,

$$\bar{\mathcal{P}}_T \coloneqq \frac{1}{T}\sum_{t=1}^T \mathcal{P}_t.$$

The WBNB measures the total deviation of the $\mathcal{P}_t$'s from the "centric" distribution $\bar{\mathcal{P}}_T$. Next, we illustrate the difference between the WBNB and the prevalent notion of variation budget (Besbes et al., 2014, 2015; Cheung et al., 2019). Specifically, Besbes et al. (2015) define the variation budget for the stochastic optimization problem in a non-stationary setting, which can be viewed as an unconstrained version of our online stochastic optimization problem (there is no function $g$ in (PCP)). The variation budget can thus be defined as follows (in the language of our paper),

$$V_T \coloneqq \sum_{t=1}^{T-1} \mathrm{TV}(\mathcal{P}_t, \mathcal{P}_{t+1})$$

where $\mathrm{TV}(\cdot,\cdot)$ denotes the total variation distance between two distributions.
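Before contrasting the two notions, here is a small numerical illustration of (14) for scalar point-mass inputs, using the ordinary distance on the real line as a stand-in for the metric $\rho$ in (2); this simplification and the instance are our own.

```python
import numpy as np
from scipy.stats import wasserstein_distance

T, kappa = 100, 0.3
# Change-point instance in the spirit of (15): point mass at 1 for the first
# half of the horizon and at 1 + kappa for the second half.
thetas = np.where(np.arange(T) < T // 2, 1.0, 1.0 + kappa)

support = thetas                   # bar P_T puts mass 1/T on every theta_t
wbnb = sum(wasserstein_distance([th], support) for th in thetas)
print(wbnb)                        # T * kappa / 2 = 15.0, i.e. Theta(T)
# By contrast, the variation budget sum_t TV(P_t, P_{t+1}) equals 1 here:
# a single change point, no matter how large kappa * T is.
```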
If we temporarily put aside the different distance functions used (total variation versus Wasserstein), the variation budget measures the total amount of change throughout the evolution of the environment, and it concerns the local change between two consecutive distributions $\mathcal{P}_t$ and $\mathcal{P}_{t+1}$. Comparatively, the WBNB is more of a "global" quantity that measures the distance between all the $\mathcal{P}_t$'s and the centric distribution $\bar{\mathcal{P}}_T$. This global property is in fact necessitated by the shift from an unconstrained setting to a constrained setting, and it can be illustrated through the following example, adapted from Golrezaei et al. (2014). Consider the following two linear programs as the underlying problem (PCP) for the online stochastic optimization problem,

$$\begin{aligned} \max \quad & x_1 + \dots + x_c + (1+\kappa)x_{c+1} + \dots + (1+\kappa)x_T \qquad (15)\\ \text{s.t.} \quad & x_1 + \dots + x_c + x_{c+1} + \dots + x_T \le c\\ & 0 \le x_t \le 1 \quad \text{for } t=1,\dots,T, \end{aligned}$$

$$\begin{aligned} \max \quad & x_1 + \dots + x_c + (1-\kappa)x_{c+1} + \dots + (1-\kappa)x_T \qquad (16)\\ \text{s.t.} \quad & x_1 + \dots + x_c + x_{c+1} + \dots + x_T \le c\\ & 0 \le x_t \le 1 \quad \text{for } t=1,\dots,T, \end{aligned}$$

where $\kappa\in(0,1)$ and $c = T/2$, and here we assume $T$ is an even number. In the first scenario (15), the optimal solution is to wait and accept the later half of the orders, while in the second scenario (16), the optimal solution is to accept the first half of the orders and deplete the resource at the halfway point. In both scenarios, the structural difference between the first half and the second half of the orders can be captured by the non-stationarity with which the orders are generated. The contrast between the two scenarios (whether the first half or the second half is more profitable) creates difficulty for online decision making. Without knowledge of the future orders, there is no way to obtain a sub-linear regret in both scenarios, i.e., we will inevitably incur a loss that is a fixed proportion of the optimal value in at least one of these two scenarios. For if we exhaust too much of the resource in the first half of the horizon, then in the first scenario (15), we do not have enough capacity to accept all the relatively profitable orders in the second half. On the contrary, if we have too much remaining resource at the halfway point, then in the second scenario (16), the orders we missed in the first half are irrevocable. This intuition is summarized in Proposition 3. Golrezaei et al. (2014) use the example to illustrate the importance of balancing resource usage in an online context; here we revisit the example from a non-stationarity perspective. For these two examples, we can let the distribution $\mathcal{P}_t$ in the general formulation (PCP) be a point mass distribution. Then there is only one change point throughout the whole procedure, so the variation budget for these two examples is $O(1)$, while the WBNB for these two examples is $\Theta(T)$. In the hope of using the non-stationarity measure to characterize the problem difficulty, the WBNB is more suitable, because the variation budget is $O(1)$ but a sublinear regret is still unachievable. Intuitively, the presence of the constraint(s) limits our ability to rectify the decision in a non-stationary environment: for example, in (15), even if we learn that the second half of the orders are more profitable, we may not be able to accept them because of the shortage of the resource. Thus the global (indeed, more restrictive) notion of non-stationarity, the WBNB, is necessary for the online stochastic optimization problem in the presence of constraints.
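A quick arithmetic check of this intuition (toy numbers, with the policy abstracted into the amount $q$ of capacity it spends during the identical-looking first half):

```python
import numpy as np

T, kappa = 1000, 0.2
c = T // 2
q = np.linspace(0, c, 101)            # capacity spent in the first half

regret_15 = q * kappa                 # (15): second-half orders are worth 1 + kappa
regret_16 = (c - q) * kappa           # (16): second-half orders are worth 1 - kappa
worst = np.maximum(regret_15, regret_16)
print(worst.min(), kappa * T / 4)     # best worst-case loss = kappa * T / 4 = 50.0
```

Since the first halves of (15) and (16) are indistinguishable online, $q$ must be the same in both scenarios, so every policy suffers at least $\kappa T/4$ regret in one of them.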
We will see in the rest of the paper that while the variation budget captures only the learnability of the non-stationary environment, the WBNB aims to characterize whether the non-stationary environment is learnable under the permission of the resource constraints.

###### Proposition 3

The worst-case regret of constrained online stochastic optimization in the adversarial setting is $\Omega(T)$.

### 4.2 Lower Bound: Why Wasserstein Distance

Based on the notion of WBNB, we define a set of distributions

$$\Xi(W_T) \coloneqq \big\{\mathcal{P} = \mathcal{P}_1\otimes\cdots\otimes\mathcal{P}_T : W_T(\mathcal{P}) \le W_T\big\}. \qquad (17)$$

###### Theorem 1

Under Assumption 1, if we consider the set $\Xi(W_T)$ as in (17), there is no algorithm that can achieve a regret better than $\Omega(\max\{\sqrt{T}, W_T\})$.

Theorem 1 states that the lower bound on the best achievable regret is $\Omega(\max\{\sqrt{T}, W_T\})$. The $\sqrt{T}$ part is due to Lemma 1 in Arlotto and Gurvich (2019). The $W_T$ part can be established from (15) and (16): for these two examples, we can view the coefficients in the objective function and the constraint as point mass distributions $\mathcal{P}_t$. With $\kappa = \Theta(W_T/T)$, we can verify that both examples belong to the set $\Xi(W_T)$. Then we can follow an argument similar to that of Proposition 3 to show that any algorithm will incur at least an $\Omega(W_T)$ optimality gap in one of the two scenarios.

The way the lower bound involving $W_T$ is established also explains why the Wasserstein distance, instead of the total variation distance or the KL-divergence, is used for our non-stationarity measure. If we revisit the examples (15) and (16), a smaller value of $\kappa$ should indicate a smaller variation/non-stationarity between the first half and the second half of the observations in both examples. However, the total variation distance fails to capture this point: for any non-zero value of $\kappa$, the total variation distance between the first-half and second-half point masses is always $1$ (since they have different supports). In other words, if we replace the Wasserstein distance with the total variation distance in our definition of the WBNB (14), then the quantity will always be $\Theta(T)$ regardless of how small $\kappa$ is. The KL-divergence may even be ill-defined when the two distributions have different supports. In this light, the Wasserstein distance is a smoother representation of the distance between two distributions than the total variation distance or the KL-divergence. Interestingly, this coincides with the intuition in the literature on generative adversarial networks (GANs), where Arjovsky et al. (2017) replace the KL-divergence with the Wasserstein distance in training GANs.

Simultaneously and independently, Balseiro et al. (2020) analyze the dual mirror descent algorithm under a setting similar to our results in this section. The key difference is that Balseiro et al. (2020) consider the total variation distance, which inherits the definition of the variation functional from Besbes et al. (2015). As argued above, the Wasserstein distance is smoother in measuring the difference between distributions, and thus it provides sharper regret upper bounds, as we will see in the rest of the paper.

### 4.3 Algorithm Analysis and Regret Upper Bound

Now we connect Algorithm 1 with our notion of WBNB and establish a regret upper bound for the algorithm under the WBNB. For a probability measure $\mathcal{Q}$ over the metric parameter space $(\Theta,\rho)$, we define

$$L_{\mathcal{Q}}(p) \coloneqq \frac{1}{T}\, c^\top p + \mathcal{Q}h(p;\theta).$$

Then the dual function can be expressed as

$$L(p) = \sum_{t=1}^T L_{\mathcal{P}_t}(p).$$

Recall that at each time $t$, Algorithm 1 utilizes a stochastic gradient of the $t$-th term in $L(p)$, i.e., the function $L_{\mathcal{P}_t}(p)$. Intuitively, when all the $\mathcal{P}_t$'s are close to each other, the functions $L_{\mathcal{P}_t}$ should be close to each other.
Consequently, though the stochastic gradient is taken with respect to a different function $L_{\mathcal{P}_t}$ at each time $t$, as long as the differences are small, the stochastic gradient descent should remain effective in minimizing $L(p)$. This intuition is aligned with the analysis in Besbes et al. (2015), given that $L(p)$ takes an unconstrained form (if we ignore the non-negativity constraint). Lemma 2 states that the function $L_{\mathcal{Q}}$ enjoys a certain "Lipschitz continuity" with respect to the underlying distribution $\mathcal{Q}$. Specifically, the supremum norm between two functions $L_{\mathcal{Q}_1}$ and $L_{\mathcal{Q}_2}$ is bounded by the Wasserstein distance between the two distributions $\mathcal{Q}_1$ and $\mathcal{Q}_2$, up to a constant.

###### Lemma 2

For two probability measures $\mathcal{Q}_1$ and $\mathcal{Q}_2$ over the metric parameter space $(\Theta,\rho)$, we have that

$$\sup_{p\in\Omega_{\bar{p}}} \big| L_{\mathcal{Q}_1}(p) - L_{\mathcal{Q}_2}(p) \big| \le \max\{1,\bar{p}\}\cdot(m+1)\cdot W(\mathcal{Q}_1,\mathcal{Q}_2)$$

where $\Omega_{\bar{p}} \coloneqq \{p\in\mathbb{R}^m : p\ge 0,\ \|p\|_\infty \le \bar{p}\}$ and $\bar{p}$ is an arbitrary positive constant.

Note that the Lipschitz constant in Lemma 2 involves an upper bound $\bar{p}$ on the function argument $p$. The following lemma provides such an upper bound for the dual prices $p_t$'s in Algorithm 1. Its proof relies largely on part (c) of Assumption 1, and conversely, the key role of part (c) of Assumption 1 throughout our analysis is to ensure an upper bound for the dual price.

###### Lemma 3

Under Assumption 1, for each $t=1,\dots,T$, the dual price vector $p_t$ specified by (12) in Algorithm 1 satisfies $\|p_t\|_\infty \le \bar{p}$ for a constant $\bar{p}$ that depends only on the constant $\Lambda$ defined in Assumption 1(c).

The following theorem builds upon Lemma 2 and Lemma 3, and it states that the regret of Algorithm 1 is upper bounded by $O(\max\{\sqrt{T}, W_T\})$. Its proof mimics the standard analysis of online/stochastic gradient descent (Hazan, 2016) and integrates the notion of non-stationarity in a manner similar to Besbes et al. (2015).

###### Theorem 2

Under Assumption 1, if we consider the set $\Xi(W_T)$ as in (17), then the regret of Algorithm 1 has the following upper bound

$$\text{Reg}_T(\pi_1) \le O(\max\{\sqrt{T}, W_T\})$$

where $\pi_1$ stands for the policy specified by Algorithm 1.

Remarkably, the factors of $\sqrt{T}$ and $W_T$ are additive in the regret upper bound of Algorithm 1. In comparison, the factors of $T$ and the variation budget are usually multiplicative in the regret upper bounds of the line of works that adopt the variation budget as the non-stationarity measure (e.g., regret of order $V_T^{1/3}T^{2/3}$ in Besbes et al. (2015); see also Besbes et al. (2014) and Cheung et al. (2019)). The price of this advantage of the WBNB is that the WBNB is a more restrictive notion than the variation budget; for example, recall that in (15) and (16), the variation budget is $O(1)$, but the WBNB is $\Theta(T)$. Another important feature of our result is that Algorithm 1 does not depend on or utilize knowledge of the quantity $W_T$. On the upside, this avoids assuming prior knowledge of the variation budget (Besbes et al., 2015). On the downside, there is nothing the algorithm can do even when it knows that $W_T$ is large. Technically, this means that for Algorithm 1, the WBNB contributes nothing in the dimension of algorithm design; it only influences the algorithm analysis. Specifically, it quantifies the extent to which the non-stationary environment deteriorates the performance of Algorithm 1, and the quantification is the additional $O(W_T)$ regret compared to the $O(\sqrt{T})$ regret in the stationary environment (Proposition 2). Theorem 1 and Theorem 2 seemingly conclude our discussion of the problem by validating the optimality of Algorithm 1. In terms of worst-case performance (regret), no algorithm can do better than this simple gradient-based algorithm. However, we emphasize that the optimality is contingent on the specific choice of the distribution set $\Xi(W_T)$.
In the next section, we present a more general and realistic setting, and we develop a new algorithm and further analysis under the WBNB.

## 5 Non-stationary Environment with Prior Estimate: Blend of Gradient Update and Offline Solution

In this section, we generalize our previous notion of WBNB and present a second algorithm in a more general context. The motivation for the generalization is two-fold:

- Availability of future information: In our previous setting, we consider a "blind" setting where no knowledge about the future distributions is assumed. However, the non-stationarity in practical applications may exhibit predictable patterns such as demand seasonality, the day-of-week effect, and demand surges due to pre-scheduled promotions or shopping festivals. The questions are then (i) how to revise the definition of non-stationarity for such a predictable environment, and (ii) how to utilize the predictability of future information for better algorithm design.
- Restrictiveness of our previous WBNB: In Section 4.1, we noted that the WBNB is a global and more restrictive measure than the classic notion of variation budget. In particular, the examples (15) and (16) show that a single change point in the sequence of distributions may cause the WBNB to scale linearly with $T$. Thus the regret upper bound in Theorem 2 can be quite loose in this type of change-point setting.

### 5.1 Wasserstein-Based Non-stationarity with Prior Estimate

Suppose the decision maker has a prior estimate/prediction $\hat{\mathcal{P}}_t$ for each distribution $\mathcal{P}_t$, and all the predictions are made available at the very beginning of the procedure. We consider the following Wasserstein-based non-stationarity budget with prior estimate (WBNB-P):

$$W^P_T(\mathcal{P}, \hat{\mathcal{P}}) = \sum_{t=1}^T W(\mathcal{P}_t, \hat{\mathcal{P}}_t) \qquad (18)$$

where $\mathcal{P} = (\mathcal{P}_1,\dots,\mathcal{P}_T)$ denotes the true distributions and $\hat{\mathcal{P}} = (\hat{\mathcal{P}}_1,\dots,\hat{\mathcal{P}}_T)$ denotes the prior estimates. By its definition, the new WBNB-P measures the total deviation of the true distributions $\mathcal{P}_t$ from their prior estimates $\hat{\mathcal{P}}_t$, whereas our previous WBNB (14) considers the deviation from the centric distribution $\bar{\mathcal{P}}_T$. Besides, we can also view the WBNB-P as a measure of the total estimation error.

Now we present an algorithm that utilizes the prior estimates. The starting point is the same as the derivation of Algorithm 1. Specifically, we define

$$\hat{L}(p) = c^\top p + \sum_{t=1}^T \hat{\mathcal{P}}_t h(p,\theta) \qquad (19)$$

where the true distribution $\mathcal{P}_t$ is replaced by its estimate $\hat{\mathcal{P}}_t$, for each $t$, in the function $L(p)$ defined in (7). Thus $\hat{L}(p)$ can be viewed as an approximation of the true dual function based on the prior estimates. Let $\hat{p}^*$ denote an optimal solution,

$$\hat{p}^* \in \arg\min_{p\ge 0} \hat{L}(p), \qquad (20)$$

and for each $t$, define

$$\gamma_t \coloneqq \hat{\mathcal{P}}_t\, g(\hat{x}(\theta);\theta) \quad \text{where } \hat{x}(\theta) = \arg\max_{x\in\mathcal{X}} \big\{ f(x;\theta) - (\hat{p}^*)^\top\! g(x;\theta) \big\}. \qquad (21)$$

Here, $\gamma_t$ denotes the expected resource consumption in the $t$-th time period under the dual optimal solution $\hat{p}^*$. Accordingly, for each $t$, we define the following function $\hat{L}_t$,

$$\hat{L}_t(p) \coloneqq \gamma_t^\top p + \hat{\mathcal{P}}_t h(p;\theta), \qquad (22)$$

and then we have the following relation between $\hat{L}$ and $\hat{L}_t$.

###### Lemma 4

For each $t=1,\dots,T$, it holds that

$$\hat{p}^* \in \arg\min_{p\ge 0} \hat{L}_t(p) \qquad (23)$$

where $\hat{p}^*$ is defined in (20) as the minimizer of the function $\hat{L}(p)$. Moreover, it holds that

$$\hat{L}(\hat{p}^*) = \sum_{t=1}^T \hat{L}_t(\hat{p}^*). \qquad (24)$$

The definition of $\hat{L}_t$ and Lemma 4 construct a way to decompose the function $\hat{L}$ into a summation of $T$ functions. The new scheme absorbs the term $c^\top p$ in the function $\hat{L}$ into the summation in a different way than the scheme of the previous two sections, where $L(p) = \sum_{t=1}^T L_{\mathcal{P}_t}(p)$ with $L_{\mathcal{P}_t}(p) = \frac{1}{T}c^\top p + \mathcal{P}_t h(p;\theta)$. Inspired by this new scheme, Algorithm 2 replaces the term $\frac{c}{T}$ in Algorithm 1 with $\gamma_t$ in the update of the dual prices $p_t$.
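A sketch of this offline step (19)-(21) for the online LP example with a finite type support is given below; the instance, the randomly generated prior estimates, and the use of a generic numerical solver for the (piecewise-linear) minimization in (20) are all our illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

T, m = 1000, 2
c = np.array([300.0, 400.0])
r = np.array([1.0, 2.0, 3.0])                    # type rewards
a = np.array([[1.0, 1.0, 2.0],                   # a[:, j]: cost vector of type j
              [2.0, 1.0, 1.0]])
P_hat = np.random.default_rng(2).dirichlet(np.ones(3), size=T)   # prior estimates

def h(p):                                        # h(p; theta_j) for every type j
    return np.maximum(r - p @ a, 0.0)

def L_hat(p):                                    # hat L(p) in (19)
    return c @ p + (P_hat * h(p)).sum()

p_hat_star = minimize(L_hat, x0=np.zeros(m),     # (20); a generic solver stands in
                      bounds=[(0, None)] * m).x  # for the exact minimization

x_hat = (r - p_hat_star @ a > 0).astype(float)   # maximizer in (21) for each type
gamma = P_hat @ (a * x_hat).T                    # gamma_t in (21), shape (T, m)
# Algorithm 2 then runs Algorithm 1's dual update with gamma[t] in place
# of c / T at period t.
print(p_hat_star, gamma[0])
```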
From an algorithmic perspective, Algorithm 2 adjusts the gradient descent direction of Algorithm 1 based on the $\gamma_t$'s computed from the offline problem (19) specified by the prior estimates. The new update rule with the $\gamma_t$'s thus coincides with a stochastic gradient step with respect to the function $\hat{L}_t$. From Lemma 4, we note that each function $\hat{L}_t$ shares the same optimal solution $\hat{p}^*$ with their aggregate function $\hat{L}$. Intuitively, at each time $t$, though the stochastic gradient is computed from a different function $\hat{L}_t$, all the gradient descent directions point to the same optimal solution $\hat{p}^*$. This special property makes the gradient update in Algorithm 2 more effective; in essence, the algorithm performs one iteration of online stochastic gradient descent with respect to the function $\hat{L}_t$ at each time $t$. A natural question is why we do not simply use the optimal solution $\hat{p}^*$ of $\hat{L}$ as a fixed dual price throughout the procedure, as in the well-known bid-price policy (Talluri and Van Ryzin, 1998) for the network revenue management problem. We defer a detailed discussion of this question to Section B11.

Another way to interpret Algorithm 2 is from the resource consumption perspective. The sequence $\gamma_1,\dots,\gamma_T$ represents the optimal way to allocate the resources over time according to the prior estimates. In Algorithm 1, from the update rule of the dual price, we know that if at time period $t$ the consumption of resource $i$ is larger (resp. smaller) than $c_i/T$, then the $i$-th dual price increases (resp. decreases or stays at zero). In this sense, the dual price balances the process of resource consumption. However, when the prior estimates $\hat{\mathcal{P}}_t$'s are available, it may no longer be desirable to allocate the resources evenly over all time periods. Thus $\gamma_t$ reflects the adjustment of the resource consumption suggested by the prior estimates. A larger (resp. smaller) value of the $i$-th component of $\gamma_t$ indicates that more (resp. less) of resource $i$ should be allocated to time period $t$.

### 5.2 Regret Analysis

Based on the notion of WBNB-P, we define a set of distributions

$$\Xi^P(\hat{\mathcal{P}}) \coloneqq \big\{\mathcal{P} : W^P_T(\mathcal{P}, \hat{\mathcal{P}}) \le W_T,\ \mathcal{P} = (\mathcal{P}_1,\dots,\mathcal{P}_T)\big\}.$$
https://lists.gnu.org/archive/html/help-gnu-emacs/2007-10/msg00698.html
help-gnu-emacs

## Re: What is the best html to latex program on the market or the internet?

From: Edd Barrett
Subject: Re: What is the best html to latex program on the market or the internet?
Date: Tue, 23 Oct 2007 08:33:32 -0000
User-agent: G2/1.0

On Oct 23, 2:26 am, address@hidden wrote:
> maybe I should post in european tex groups also
>
> On Oct 22, 2:57 pm, address@hidden wrote:
> > Basically, it should do all that any of the tools below do, and in addition:
> >
> > 1/ human-readable output that maintains the text lines of the source, i.e.,
> > does not scramble the text lines, insert newlines unnecessarily, or remove
> > them; inserts minimal LaTeX elements.
> >
> > 2/ maintains cross-links, i.e., converts <href to \ref and <name= to \label;
> > but if the set of HTML files is incomplete, proceed with the assumption that
> > the reference is there, i.e., don't delete the links or try to modify them
> > or their addresses. One of the tools I tested is too smart in this
> > respect and actually ruins the result.
> >
> > 3/ proper conversion of images, tables, etc. No math mode involved in HTML.
> >
> > 4/ Even an emacs lisp function could be written by a guru that can do the job.
> >
> > 5/ Is there any commercial wysiwyg tool?
> >
> > LaTeX etc
> >
> > * html2latex is a program based on the NCSA html parser. Contact:
> > * Another html2latex can combine several HTML files into a single
> >   LaTeX file, converting links between the files to references. External
> >   URLs can be converted into footnotes or into a bibliography sorted on
> >   URL. Contact: address@hidden (Frans J. Faase)
> > * Another html2latex implemented on Linux by yacc+lex+C. Also
> >   available from the TSX-11 Linux FTP site as nc-html2latex-0.97.tar.gz.
> >   Contact: address@hidden (Naoya Tozuka)
> > * htmlatex.pl is a perl script to do the conversion (may be moving
> >   soon). Contact: address@hidden (Jake Kesinger)
> > * There is also a sed script to convert HTML into LaTeX.

Hi,

I don't know if this can be of help:

http://openwetware.org/wiki/User:Austin_J._Che/Extensions/LatexDoc

This is something that we are looking into to allow researchers to
distribute documents in both PDF and web-based form (we hope).

Thanks

Edd
https://search.r-project.org/CRAN/refmans/dynsurv/html/bayesCoxMcmc.html
bayesCoxMcmc {dynsurv} R Documentation

## Get the MCMC Samples from bayesCox

### Description

Returns the MCMC samples produced by bayesCox as data frames.

### Usage

bayesCoxMcmc(object, parts = c("h0", "coef"), ...)

### Arguments

object: A bayesCox object.

parts: A character vector specifying the parts to be extracted from the MCMC output text file produced by bayesCox. One or more of the following options can be specified: "h0" for the baseline hazard function, "coef" for covariate coefficients, "nu" for the sampled latent variance of coefficients, "jump" for indicators of jumps, and "all" for all of the above. The default value is c("h0", "coef").

...: Other arguments that are not used for now.

[Package dynsurv version 0.4-3 Index]
https://kops.uni-konstanz.de/handle/123456789/21245
KOPS - The Institutional Repository of the University of Konstanz

# Real Closed Exponential Fields

D'AQUINO, Paola, Julia F. KNIGHT, Salma KUHLMANN, Karen LANGE, 2011. Real Closed Exponential Fields.

@unpublished{DAquino2011Close-21245,
title={Real Closed Exponential Fields},
year={2011},
author={D'Aquino, Paola and Knight, Julia F. and Kuhlmann, Salma and Lange, Karen},
note={Also publ. in: Fundamenta Mathematicae ; 219 (2012), 2. - S. 163-190}
}

In an extended abstract, Ressayre considered real closed exponential fields and integer parts that respect the exponential function. He outlined a proof that every real closed exponential field has an exponential integer part. In the present paper, we give a detailed account of Ressayre's construction, which becomes canonical once we fix the real closed exponential field, a residue field section, and a well ordering of the field. The procedure is constructible over these objects; each step looks effective, but may require many steps. We produce an example of an exponential field $R$ with a residue field $k$ and a well ordering $<$ such that $D^c(R)$ is low and $k$ and $<$ are $\Delta^0_3$, and Ressayre's construction cannot be completed in $L_{\omega_1^{CK}}$.
http://www.pgafarmers.com/kindle/a-primer-on-pd-es-models-methods-simulations-unitext-volume-65
By Sandro Salsa, Federico M. G. Vegni, Anna Zaretti, Paolo Zunino

ISBN-10: 8847028620
ISBN-13: 9788847028623

This book is designed as an advanced undergraduate or first-year graduate course for students from various disciplines such as applied mathematics, physics, and engineering. It has evolved through the teaching of courses on partial differential equations over the last decade at the Politecnico di Milano. The main purpose of these courses was twofold: on the one hand, to train students to appreciate the interplay between theory and modelling in problems arising in the applied sciences, and on the other, to give them a solid background in numerical methods, such as finite differences and finite elements.

Similar differential equations books

Mathematical Problems in Image Processing: Partial Differential Equations, by Gilles Aubert and Pierre Kornprobst: The updated 2nd edition of this book presents a variety of image analysis applications, reviews their precise mathematics, and shows how to discretize them. For the mathematical community, the book shows the contribution of mathematics to this domain and highlights unsolved theoretical questions. For the computer vision community, it offers a clear, self-contained, and global overview of the mathematics involved in image processing problems.

Differential Equations: Theory and Applications, by David Betounes: The book provides a comprehensive introduction to the theory of ordinary differential equations at the graduate level and includes applications to Newtonian and Hamiltonian mechanics. It not only has a large number of examples and computer graphics, but also a complete collection of proofs for the major theorems, ranging from the usual existence and uniqueness results to the Hartman-Grobman linearization theorem and the Jordan canonical form theorem.

Over the last four decades there has been considerable development in the theory of dynamical systems. This book starts from the phenomenological point of view, reviewing examples. Accordingly, the authors discuss oscillators, like the pendulum in many variations including damping and periodic forcing, the Van der Pol system, the Henon and logistic families, and the Newton algorithm seen as a dynamical system; the Lorenz and Rossler systems are also discussed.

Parabolic systems with polynomial growth and regularity: The authors establish a series of optimal regularity results for solutions to general non-linear parabolic systems $u_t - \mathrm{div}\, a(x,t,u,Du) + H = 0$, under the main assumption of polynomial growth at rate $p$, i.e. $|a(x,t,u,Du)| \le L(1+|Du|^{p-1})$, $p \ge 2$. They give a unified treatment of various interconnected aspects of the regularity theory: optimal partial regularity results for the spatial gradient of solutions, the first estimates on the (parabolic) Hausdorff dimension of the related singular set, and the first Calderon-Zygmund estimates for non-homogeneous problems are achieved here.

Extra info for A Primer on PDEs: Models, Methods, Simulations (UNITEXT, Volume 65)

Sample text

We call the number $\kappa = \xi_2 - \xi_1$ the thickness of the transition layer. Separating variables in the travelling-wave equation and integrating over $(\xi_1, \xi_2)$ yields

$$\xi_2 - \xi_1 = \varepsilon \int_{U(\xi_1)}^{U(\xi_2)} \frac{ds}{q(s) - vs + \bar{A}}.$$

Thus, the thickness of the transition layer is proportional to $\varepsilon$.
As $\varepsilon \to 0$, the transition region becomes narrower and narrower, and eventually a shock wave that satisfies the entropy inequality is obtained. This phenomenon is clearly seen in the important case of the viscous Burgers equation, which we examine in more detail in the next subsection.

In the sector $-v_m t < x < v_m t$ $(t > 0)$, (35) is equivalent to

$$\rho(x,t) = r\!\left(\frac{x}{t}\right)$$

where $r = (q')^{-1}$ is the inverse function of $q'$. Indeed this is the general form of a rarefaction wave (centered at the origin) for a conservation law. We have constructed a continuous solution $\rho$ of the green light problem, connecting the two constant states $\rho_m$ and $0$ by a rarefaction wave. However, it is not clear in which sense $\rho$ is a solution across the lines $x = \pm v_m t$, since, there, its derivatives undergo a jump discontinuity. [...] (34) is the only solution.

Study the problem (Burgers equation)

$$u_t + u u_x = 0, \quad x \in \mathbb{R},\ t > 0, \qquad u(x,0) = g(x), \quad x \in \mathbb{R},$$

when the initial data $g(x)$, respectively, is:

a) $g(x) = \begin{cases} 1 & \text{if } x < 0 \\ 2 & \text{if } 0 < x < 1 \\ 0 & \text{if } x > 1 \end{cases}$ \quad b) $g(x) = \begin{cases} 0 & \text{if } x < 0 \\ 1 & \text{if } 0 < x < 1 \\ 0 & \text{if } x > 1 \end{cases}$ \quad c) $g(x) = \begin{cases} 1 & \text{if } x \le 0 \\ 1 - x & \text{if } 0 < x < 1 \\ 0 & \text{if } x \ge 1 \end{cases}$

4. The conservation law $u_t + u^3 u_x = 0$, $x \in \mathbb{R}$, $t > 0$. We refer the reader to Quarteroni [43] and LeVeque [40] for a detailed treatment of this matter.
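For concreteness, here is the standard entropy solution of case b) for small times, obtained by combining a rarefaction fan at $x=0$ with a Rankine-Hugoniot shock issuing from $x=1$ (speed $\sigma = (1+0)/2 = 1/2$); this worked instance is our addition and is valid until the fan overtakes the shock at $t=2$:

$$u(x,t) = \begin{cases} 0, & x \le 0,\\ x/t, & 0 < x < t,\\ 1, & t \le x < 1 + \tfrac{t}{2},\\ 0, & x > 1 + \tfrac{t}{2}, \end{cases} \qquad 0 < t < 2.$$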
2018-12-10 10:27:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.584479570388794, "perplexity": 1646.0689140733934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823322.49/warc/CC-MAIN-20181210101954-20181210123454-00125.warc.gz"}
https://tex.stackexchange.com/questions/201433/bib-file-is-not-recognized
bib file is not recognized

I am using a template from this journal. In this template the bibliography part is written inside the tex file. I would like to use my own bib file. I tried the following lines at the end of the tex file before the \end{document}, but it doesn't recognize the bib file:

\bibliographystyle{dgruyter_author}
\bibliography{Mybib}

I have added some citations in the middle of the text with \cite. The warning I get is:

Package natbib warning: Citation "Brown1997High" on page 1 undefined.

• The bib file is in the main folder of the template files, beside the main tex file.
• I have used the bib file with other styles, for instance IEEE, ... before.
• I am using WinEdt 8.1 on Windows.

What should I do?

Update: if I change the code to:

\bibliographystyle{plain}
\bibliography{Mybib}

it works. But now the question is: how can I use the style from the template? A copy of the style file can be found here.

\documentclass[USenglish,twocolumn]{article}
\usepackage[utf8]{inputenc}%(only for the pdftex engine)
%\RequirePackage[no-math]{fontspec}%(only for the luatex or the xetex engine)
\usepackage[big]{dgruyter_author}
\begin{document}
\articletype{Research Article{\hfill}Open Access}
\author*[1]{Corresponding Author}
\affil[1]{Affil, E-mail: email@email.edu}
\title{\huge Article title}
\runningtitle{Article title}
\maketitle
\section{Introduction}
\paragraph{Reference to a standard} Elements to cite: Standard symbol and number, Title \cite{standard-1}.
% >>>>> I replaced the following lines >>>>>
%\begin{thebibliography}{99}
%\bibitem{standard-2} ISO/TR 9544:1988, Information processing --- Computer-assisted publishing --- Vocabulary
%\end{thebibliography}
% >>>>> with these lines >>>>>>>
\bibliographystyle{plain}
\bibliography{Mybib}
\end{document}

• I added the sty file, maybe it is useful. The problem is that it works with \bibliographystyle{plain} but the style is different from the other parts of the tex. – NKN Sep 16 '14 at 13:53

If I rename your style file dgruyter.sty to dgruyter_author.sty, add the missing logo dg-degruyter.png (it is called in the style file!) into the same directory, and add a bib file with the package filecontents to your MWE, it compiles for me with two warnings coming from the style file. The new MWE is:

\RequirePackage{filecontents}
\begin{filecontents*}{\jobname.bib}
@book{billingsley,
  title     = {Convergence of Probability Measures},
  author    = {P. Billingsley},
  year      = {1968},
  publisher = {Wiley, New York},
}
\end{filecontents*}
\documentclass[USenglish,twocolumn]{article}
\usepackage[utf8]{inputenc}%(only for the pdftex engine)
%\RequirePackage[no-math]{fontspec}%(only for the luatex or the xetex engine)
%\usepackage[big]{dgruyter} % throws two warnings
\usepackage[big]{dgruyter_author} % file dgruyter.sty -> dgruyter_author.sty
\begin{document}
\articletype{Research Article{\hfill}Open Access}
\author*[1]{Corresponding Author}
\affil[1]{Affil, E-mail: email@email.edu}
\title{\huge Article title}
\runningtitle{Article title}
\maketitle
\section{Introduction}
\paragraph{Reference to a standard} Elements to cite: Standard symbol and number, Title \cite{billingsley}.
% >>>>> I replaced the following lines >>>>>
%\begin{thebibliography}{99}
%\bibitem{standard-2} ISO/TR 9544:1988, Information processing --- Computer-assisted publishing --- Vocabulary
%\end{thebibliography}
% >>>>> with these lines >>>>>>>
\bibliographystyle{plain}
\bibliography{\jobname}
\end{document}

Please try this new MWE and tell us if it shows the result you want.
2019-10-17 22:51:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8934271335601807, "perplexity": 7903.141023953541}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677230.18/warc/CC-MAIN-20191017222820-20191018010320-00328.warc.gz"}
https://datascience.stackexchange.com/questions/44354/ordinal-classification-with-xgboost
# Ordinal classification with xgboost

I am working on a problem where the dependent variables are ordered classes, such as bad, good, very good. How could I declare this problem in xgboost instead of normal classification or regression? Thanks

## 2 Answers

You can run 2 xgboost binary classifiers:

• the 1st classifier classifies whether a sample is (good or very good)
• the 2nd classifier classifies whether a sample is very good
• if both are true on unseen data, classify as very good
• if only the 1st one is true and the second false, classify as good; both false => classify as bad

• What to do if the first is false but the second true? – Ben Reiniger Oct 29 '19 at 15:50
• If both classifiers are trained well, this should happen only rarely and should be classified as bad. If more tuning is needed, you can output probabilities and compare probabilities instead of labels – alexprice Oct 30 '19 at 12:45
• Indeed, this is probably a better situation than the regression setup in the other answer in the case of conflicting uncertainty. You could just output "I don't know," or if a decision is required, make sure the classifiers are probabilistic and well-calibrated. – Ben Reiniger Oct 30 '19 at 15:05
• You can also use the prediction/probabilities of earlier labels as features for the higher labels. For example, classifier 2 can be given the probability that classifier 1 already indicated it was at least 'good' as a feature – DrewH Feb 14 '20 at 21:42

I think you can use a regression setup, e.g. bad=0, good=0.5, very good = 1 for labels, and then postprocess the output of XGBoost, such as pred_value < 0.25 => prediction_label=bad, pred_value >= 0.25 and pred_value < 0.75 => prediction_label=good, and so on.

• +1, but the two-classifier ordinal approach seems more flexible. – Ben Reiniger Oct 30 '19 at 15:06
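A minimal sketch of the two-classifier approach described in the first answer, using xgboost's scikit-learn wrapper; the toy data, the label encoding (0 = bad, 1 = good, 2 = very good) and the clipped probability combination (essentially the Frank and Hall ordinal scheme) are assumptions of mine, not part of the answer.

```python
# Two binary classifiers for three ordered classes, following the
# answer above. Labels: 0 = bad, 1 = good, 2 = very good.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))            # toy feature matrix
y = rng.integers(0, 3, size=300)         # toy ordinal labels

clf_ge_good = XGBClassifier().fit(X, (y >= 1).astype(int))     # good or better?
clf_very_good = XGBClassifier().fit(X, (y >= 2).astype(int))   # very good?

def predict_ordinal(X_new):
    p_ge_good = clf_ge_good.predict_proba(X_new)[:, 1]      # P(label >= good)
    p_very_good = clf_very_good.predict_proba(X_new)[:, 1]  # P(label >= very good)
    p_bad = 1.0 - p_ge_good
    p_good = np.clip(p_ge_good - p_very_good, 0.0, 1.0)     # handles the conflicting case
    probs = np.stack([p_bad, p_good, p_very_good], axis=1)
    return probs.argmax(axis=1)

print(predict_ordinal(X[:5]))
```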
2021-04-22 13:42:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5698409080505371, "perplexity": 3025.4473712122426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039610090.97/warc/CC-MAIN-20210422130245-20210422160245-00100.warc.gz"}
https://math.stackexchange.com/questions/2529197/zolotarevs-lemma-and-quadratic-reciprocity
# Zolotarev's Lemma and Quadratic Reciprocity

The law of quadratic reciprocity is unquestionably one of the most famous results of mathematics. Carl Gauss, often called the "Prince of Mathematicians", referred to it as "The Golden Theorem". He published six proofs of it in his lifetime. To date over 200 proofs of this result have been found. The single most frustrating thing about this theorem is that there are no easy proofs of it, at least when measured relative to the simplicity of the statement and the mathematics it involves. For someone like myself, who prides themselves on being able to find very slick proofs, it can drive you insane. As an undergraduate, when confronted with the lattice-point proof in an introductory number theory class, I refused to learn it. I thought to myself there's no way I need to go through all that just to prove something so simple. There must be an easier way.

Zolotarev's proof of the law has been to date the simplest, and quite frankly most elegant, proof that I can find. The crucial step involves equating the value of the Legendre symbol with the signature of the permutation on $$\mathbb{Z}_q$$ induced by left multiplication. It can take a little bit longer going through it the first time, but winds up being one of those proofs you can just remember without needing to re-reference it. I had a difficult time finding the result from a single source, at least in a satisfactory form, and had to compile different results from different sources. I thought others might similarly struggle, and so I've typed it below as a resource for them.

Zolotarev's Lemma relates the value of the Legendre symbol to the signature of a permutation of $$\mathbb{Z}_p$$. It is stated and proved below, along with its use in what is considered to be a very elegant proof of the quadratic reciprocity law.

Zolotarev's Lemma: Let $$p$$ be an odd prime, $$a \in \mathbb{Z}_p^\times$$, and $$\tau_a : \mathbb{Z}_p \to \mathbb{Z}_p$$ be the permutation of $$\mathbb{Z}_p$$ given by $$\tau_a(x) := ax$$; then $$\binom{a}{p} = \text{sgn}(\tau_a)$$

Proof: We determine the signature based on the parity of the cycle structure. Note that the signature of a $$k$$-cycle is $$(-1)^{k-1}$$. Let $$m = |a|$$ in $$\mathbb{Z}_p^\times$$. Since $$0$$ is a singleton orbit (i.e. fixed point) of $$\tau_a$$, and therefore has no effect on its signature, it suffices to prove this for the restriction of $$\tau_a$$ to $$\mathbb{Z}_p^\times$$, as the signature will be the same for both. Each cycle has the form $$(x,ax,a^2x,a^3x,\dots ,a^{m-1}x)$$. Thus the cycle structure consists of $$(p-1)/m$$ $$m$$-cycles, and its signature is therefore given by: $$\text{sgn}(\tau_a)= \left((-1)^{m-1}\right)^{p-1 \over m} = \begin{cases} (-1)^{\frac{p-1}{m}} & \text{if } m \text{ is even} \\ 1 & \text{if } m \text{ is odd} \end{cases}$$

Recall that $$x^2 = 1 \; (\text{mod }p) \implies x = \pm 1 \; (\text{mod }p)$$. If $$m$$ is even, then $$\left(a^{m/2}\right)^2 = 1$$ while $$a^{m/2} \neq 1$$ (as $$\frac{m}{2} < m = |a|$$), so $$a^{m/2} = -1$$, and we have $$a^{\frac{p-1}{2}} = \left(a^{\frac{m}{2}}\right)^{\frac{p-1}{m}} = (-1)^{\frac{p-1}{m}} = \text{sgn}(\tau_a)$$

If $$m$$ is odd, then $$(2,m) = 1 \text{ and } 2,m \,\big| \, p\!-\!1 \implies 2m \, \big| \, p\!-\!1$$ and we have: $$a^{\frac{p-1}{2}} = \left(a^m \right)^\frac{p-1}{2m} = 1^{\frac{p-1}{2m}} = 1 = \text{sgn}(\tau_a)$$

Euler's criterion then finishes the argument.
Corollary: If $$p$$ and $$q$$ are odd primes, $$a \in \mathbb{Z}_q$$, and $$b \in \mathbb{Z}_p$$, then $$\binom{p}{q}$$ and $$\binom{q}{p}$$ are equal to the signatures of the permutations $$x \mapsto a + px$$ and $$x \mapsto qx + b$$ respectively.

Proof: The argument is symmetric. We shall prove it for $$\binom{p}{q}$$. Let $$a \in \mathbb{Z}_q$$ and define the permutation $$\sigma: \mathbb{Z}_q \to \mathbb{Z}_q$$ by $$\sigma(x):= a + x$$. If $$a = 0$$, then $$\sigma = Id$$ and $$\text{sgn}(\sigma) = 1$$. Otherwise, the permutation consists of a single $$q$$-cycle, $$(x,a+x,2a+x,\dots,(q-1)a+x)$$, and thus, $$q$$ being odd, $$\text{sgn}(\sigma) = 1$$ also. Letting $$\tau_p$$ be as defined above, the permutation $$x \mapsto a+px$$ is equal to the composition $$\sigma \tau_p$$ and thus by Zolotarev's Lemma its signature is $$\text{sgn}(\sigma \tau_p) = \text{sgn}(\sigma)\text{sgn}(\tau_p) = \text{sgn}(\tau_p) = \binom{p}{q}$$.

The Law of Quadratic Reciprocity: If $$p$$ and $$q$$ are odd primes then $$\binom{p}{q} \binom{q}{p} = (-1)^{\frac{p-1}{2} \frac{q-1}{2}}$$

Proof: Let $$\tau: \mathbb{Z}_{pq} \to \mathbb{Z}_p \times \mathbb{Z}_q \ \text{ and } \ \lambda,\alpha : \mathbb{Z}_p \times \mathbb{Z}_q \to \mathbb{Z}_p \times \mathbb{Z}_q$$ be permutations defined as follows: \begin{align} \tau(x):=& \ (x,x) \\ \lambda(a,b):=& \ \left(a,a\!+\!p{}b\right) \\ \alpha(a,b):=& \ \left(q{}a\!+\!b,b\right) \end{align}

Now define the permutation $$\varphi: \mathbb{Z}_{pq} \to \mathbb{Z}_{pq}$$ by the rule $$\varphi(a+pb):= qa+b$$. This function is well-defined by the Division Algorithm, provided we view it as being defined only on the residues. It is routine to extend this argument to account for the congruence classes in general. Note that $$\varphi = \tau^{-1} \circ \alpha \lambda^{-1}\! \circ \tau$$ and thus $$\text{sgn}(\varphi) = \text{sgn}(\alpha)\text{sgn}(\lambda)$$

We count the signature of $$\varphi$$ in two ways: by its cycle parity and then by its inversions. Looking at $$\lambda$$'s cycle structure, we note that for each $$a \in \mathbb{Z}_p$$ the restriction of $$\lambda$$ to $$\{a\} \times \mathbb{Z}_q$$ is still a permutation, and its cycle structure is identical to the cycle structure of the permutation $$b \mapsto a+pb$$ it induces in its second coordinate. In particular the restriction of $$\lambda$$ to $$\{a\} \times \mathbb{Z}_q$$ has a signature equal to $$\binom{p}{q}$$. We can then extend this function to the rest of $$\mathbb{Z}_p \times \mathbb{Z}_q$$ by making it the identity, and we can then view $$\lambda$$ as the $$p$$-fold composition of these permutations, and thus $$\text{sgn}(\lambda) = \binom{p}{q}^p = \binom{p}{q}$$. It is best to see this via an example. Similarly, $$\text{sgn}(\alpha) = \binom{q}{p}$$ and thus $$\text{sgn}(\varphi) = \binom{p}{q}\binom{q}{p}$$

We now count the inversions. Note that $$a_1 + p{}b_1 < a_2 +p{}b_2 \text{ and }q{}a_2+b_2 < q{}a_1+b_1 \implies a_1 - a_2 < p(b_2 - b_1) < p{}q(a_1 - a_2)$$ Since $$a_1 - a_2$$ gets larger when multiplied by the positive integer $$pq$$, we must have $$a_1 - a_2 > 0$$, which then forces $$b_2 - b_1 > 0$$. It can also be seen that this is a sufficient condition for an inversion as well. Thus the pair $$\left(a_1 + pb_1,a_2+pb_2 \right)$$ represents an inversion under $$\varphi$$ if and only if $$a_1 > a_2 \text{ and } b_2 > b_1$$.
Since given any pair of distinct integers one is necessarily larger than the other, any pair of doubles $$(a_1,a_2),(b_1,b_2)$$ corresponds to a unique inversion, provided we don't distinguish between $$(a_1,a_2) \text{ and } (a_2,a_1)$$ (and similarly for the $$b_i$$). The number of inversions is therefore $$\binom{p}{2}\binom{q}{2} \equiv \frac{p-1}{2}\frac{q-1}{2} \,(\text{mod }2)$$ Equating the two values for $$\text{sgn}(\varphi)$$ gives us our result.

• very nice, thank you. – Jack D'Aurizio Nov 20 '17 at 16:47
• I fail to see why $\phi = \tau^{-1} \circ \alpha\lambda^{-1} \circ \tau$. Could you please explain. I tried calculating $\tau^{-1} \circ \alpha\lambda^{-1} \circ \tau (a+pb)$ to see if it equals $qa+b$, but I am getting nowhere. – crskhr Dec 4 '18 at 8:35
• @crskhr Sure. Note that $a + pb = a \mod p$. So we have $\tau^{-1} \circ \alpha \lambda^{-1} \circ \tau(a+pb) = \tau^{-1} \circ \alpha \lambda^{-1}(a+pb,a+pb) = \tau^{-1} \circ \alpha \lambda^{-1}(a,a+pb) = \tau^{-1} \circ \alpha(a,b) = \tau^{-1}(qa+b,b) = \tau^{-1}(qa+b,qa+b) = qa+b$ – David Reed Dec 4 '18 at 23:57
• @DavidReed Thanks, I failed to notice $a+pb = a \pmod{p}$, hence couldn't arrive at the desired conclusion. Thanks once again. Magical proof! Where did you get this from? – crskhr Dec 5 '18 at 1:54
• @DavidReed Also there is a slight mistake. $\text{sgn}(\tau_{a})$ will be 1 if $m$ is odd and if it's even it will be $(-1)^{\frac{p-1}{m}}$ – crskhr Dec 5 '18 at 5:25

I would like to add another proof of Zolotarev's lemma. Consider $$a\in\mathbb{Z}_p^{\times}$$ and the permutation $$\tau_a\colon x\mapsto ax$$. Note that for any permutation $$\pi\in S(\mathbb{Z}_p)$$ we have (here, we represent $$\mathbb{Z}_p$$ as the set $$\{0,1,\ldots,p-1\}$$) $$\text{sgn}~\pi=(-1)^{\text{inv}~\pi}=\prod_{0\leq i<j\leq p-1}\frac{\pi(j)-\pi(i)}{j-i},$$ where $$\text{inv}~\pi$$ is the number of inversions. Hence, $$\text{sgn}~\tau_a=\prod_{0\leq i<j\leq p-1}\frac{\tau_a(j)-\tau_a(i)}{j-i}.$$ Now, note that $$\tau_a(i)\equiv a\cdot i\pmod p$$, so $$\text{sgn}~\tau_a\equiv\prod_{0\leq i<j\leq p-1}\frac{a(j-i)}{j-i}=a^{\binom{p}{2}}=a^{\frac{p(p-1)}{2}}\equiv a^{\frac{p-1}{2}}\pmod p$$ due to Fermat's Little Theorem. Finally, by Euler's criterion we have $$\left(\dfrac{a}{p}\right)\equiv a^{\frac{p-1}{2}}\pmod p$$. Thus, $$\text{sgn}~\tau_a\equiv\left(\dfrac{a}{p}\right)\pmod p,$$ which reduces to (because $$\text{sgn}~\pi\in\{-1,1\}$$ and $$p>2$$) $$\text{sgn}~\tau_a=\left(\dfrac{a}{p}\right),$$ as desired.
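For readers who like to check such statements numerically, here is a small sketch (mine, not from the thread) that verifies Zolotarev's Lemma by brute force for a single prime; the choice p = 11 is arbitrary.

```python
# Brute-force check of Zolotarev's lemma: the sign of x -> a*x on Z_p
# equals the Legendre symbol (a/p) given by Euler's criterion.
from itertools import combinations

def sgn_mul(a, p):
    perm = [a * x % p for x in range(p)]
    inv = sum(1 for i, j in combinations(range(p), 2) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def legendre(a, p):
    r = pow(a, (p - 1) // 2, p)       # Euler's criterion
    return -1 if r == p - 1 else r

p = 11
assert all(sgn_mul(a, p) == legendre(a, p) for a in range(1, p))
print("Zolotarev's lemma verified for p =", p)
```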
2020-11-26 13:06:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 103, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9751944541931152, "perplexity": 190.38949679179882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188146.22/warc/CC-MAIN-20201126113736-20201126143736-00634.warc.gz"}
https://testbook.com/question-answer/for-paramagnetic-materials-magnetic-susceptibilit--5f3be7d78eb3f323a61b19b6
# For paramagnetic materials, magnetic susceptibility (χ) is ________.

This question was previously asked in SSC Scientific Assistant Physics Official Paper (Held On: 25 November 2017 Shift 2)

1. positive and small
2. negative and small
3. negative and large
4. positive and large

Option 1 : positive and small

## Detailed Solution

CONCEPT:

• Magnetic susceptibility (χm): It is the property of a substance which shows how easily the substance can be magnetized.
• It is defined as the ratio of the intensity of magnetization (I) in a substance to the magnetic intensity (H) applied to the substance, i.e. $$\chi = \frac{I}{H}$$
• It is a scalar quantity with no units or dimensions.

EXPLANATION:

• Paramagnetic substances are those which develop feeble magnetization in the direction of the magnetizing field.
• Such substances are feebly attracted by magnets and tend to move from weaker to stronger parts of a magnetic field.
• Magnetic susceptibility is small and positive, i.e. 0 < χ. Therefore option 1 is correct.
• Example: Manganese, aluminum, chromium, platinum, etc.

Diamagnetic substances:

• Diamagnetic substances are those which develop feeble magnetization in the opposite direction of the magnetizing field.
• Such substances are feebly repelled by magnets and tend to move from stronger to weaker parts of a magnetic field.
• Magnetic susceptibility is small and negative, i.e. -1 ≤ χ ≤ 0.
• Examples: Bismuth, copper, lead, zinc, etc.

Ferromagnetic substances:

• Ferromagnetic substances are those which develop strong magnetization in the direction of the magnetizing field.
• They are strongly attracted by a magnet and tend to move from the weaker to the stronger part of a magnetic field.
• Magnetic susceptibility is very large and positive, i.e. χ > 1000.
• Example: Iron, cobalt, nickel, gadolinium, and alloys like alnico.
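As a toy illustration (not part of the original solution), the quoted susceptibility ranges can be turned into a tiny classifier; the value 2.2 × 10⁻⁵ used below is a commonly quoted volume susceptibility for aluminum.

```python
# Toy classifier based on the rough susceptibility ranges quoted above.
def classify(chi):
    if chi < 0:
        return "diamagnetic"       # small negative, e.g. bismuth
    if chi > 1000:
        return "ferromagnetic"     # very large positive, e.g. iron
    return "paramagnetic"          # small positive, e.g. aluminum

print(classify(2.2e-5))   # aluminum (volume susceptibility) -> paramagnetic
```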
2023-03-29 04:10:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5349167585372925, "perplexity": 8425.495575045676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948932.75/warc/CC-MAIN-20230329023546-20230329053546-00339.warc.gz"}
https://www.physicsforums.com/threads/taylor-expansions-and-integration.913956/
# I Taylor expansions and integration.

1. May 7, 2017

### JamesHG

I have a short question: let f(x) be a function that can't be integrated analytically. Is anything wrong if I expand it in a Taylor series around a point and use this expansion to get the value of the definite integral of the function around that point? Suppose that the interval between the integral limits is short, so that the expansion is a good approximation to the function on that interval. Thanks!

2. May 7, 2017

### eys_physics

If the function is analytic around the point (i.e. all the derivatives exist and are finite at the point), I cannot see any problem. Obviously, unless you use an infinite number of terms it will usually only be an approximation to your integral over f(x).
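A small sketch (mine, not from the thread) of the idea for f(x) = exp(-x²), which has no elementary antiderivative; the closed-form comparison via the error function is exact, so the printed values show how good the truncated series is on a short interval.

```python
# Term-by-term integration of the Taylor series of f(x) = exp(-x^2)
# over [0, t], compared with the exact value via the error function.
import math

def series_integral(t, n_terms=10):
    # exp(-x^2) = sum_k (-1)^k x^(2k) / k!, integrated term by term:
    return sum((-1) ** k * t ** (2 * k + 1) / (math.factorial(k) * (2 * k + 1))
               for k in range(n_terms))

t = 0.5
approx = series_integral(t)
exact = math.sqrt(math.pi) / 2 * math.erf(t)   # closed form
print(approx, exact)   # the truncated series is accurate on this short interval
```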
2017-08-23 06:28:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9448598027229309, "perplexity": 331.68110475946526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117874.26/warc/CC-MAIN-20170823055231-20170823075231-00625.warc.gz"}
http://iemsjl.org/journal/article.php?code=63292
ISSN : 1598-7248 (Print)
ISSN : 2234-6473 (Online)
Industrial Engineering & Management Systems Vol.17 No.3 pp.454-463
DOI : https://doi.org/10.7232/iems.2018.17.3.454

# Commercialization of Public Sector Technology: The Case of a Respirator for Disaster

Cheolhan Kim, Janghyeok Yoon*

Department of Computer Engineering, Daejeon University, Daejeon, Republic of Korea
Department of Industrial Engineering, Konkuk University, Seoul, Republic of Korea
*Corresponding Author, E-mail: janghyoon@konkuk.ac.kr

August 18, 2017 / October 27, 2017 / December 27, 2017

## ABSTRACT

A technology conceived for a product is the most necessary component of the product's development, but there are many other considerations throughout product development for the market, particularly in the case of public sector technology. Although technology commercialization models exist, they may not work well for public sector technologies at a low level of technology readiness. In this paper, we present a practical technology commercialization process for a respirator for disaster. The idea for the target product introduced in this paper comes from a public research institute's patented technology, which generates oxygen from a chemical reaction without releasing heat or toxic substances. Despite the hardships of technology commercialization, this product was successfully launched with the help of various R&D programs funded by the government or research institutes. In this paper, we describe the process of technology commercialization of a respirator for disaster and the issues and considerations raised in that process. This paper provides a practical case of technology commercialization of a public sector technology and should be beneficial to firms which attempt to commercialize public sector technologies.

## 1. INTRODUCTION

As globalization increases competition among firms, new product development through technology innovation is becoming more important for firms' survival (Schilling and Hill, 1998). However, success in new product development is neither guaranteed nor common (Cummings and Teng, 2003), due to the difficulties of designing a viable business model for a product and finding a technology with a proper technology readiness level (TRL) and manufacturing readiness level (MRL) for the product (Cooper and Edgett, 2006; Cooper, 2013). In addition, an overall roadmap of technology commercialization and the detailed plans required at each phase of the roadmap must be prepared for new product development (Park et al., 2011). TRLs are a measurement system with 9 levels to assess the maturity level of a particular technology; for example, TRL 4 is "component and/or breadboard validation in laboratory environment" and TRL 6 is "system/subsystem model or prototype demonstration in a relevant environment (ground or space)" (US DoD, 2011). Even though TRL is not clearly defined and understood among stakeholders in Korea, the research results of the public sector, including government institutes, can generally be treated as belonging to the "research to prove feasibility" phase and the "technology development" phase, which are classified as the exploratory development level (Park et al., 2011; Kassicieh and Radosevich, 2013). From the viewpoint of commercialization, public sector technologies at the exploratory development level face many difficulties in being brought to market, in particular when they are transferred to the private sector for commercialization (Park et al., 2011).
This is because companies must have a technology of at least TRL 6 to launch new products the market wants, but the public sector mostly develops and provides technologies of TRL 4 (Belz, 2010). There are well-known models for commercialization (Cooper, 1990; Goldsmith, 1995; Jolly, 1997; Trezona, 2007), but startup companies which are interested in public sector technologies struggle to implement and put into practice the technologies transferred to them. This is because existing technology commercialization models describe what kinds of functionalities are provided but do not provide sufficient detail to address technologies at a low TRL, such as public sector technologies. In general, a technology is one of the product components that implement functions to meet customer requirements. For this reason, even when a new technology is developed and a new product based on this technology is designed, many other tasks, concerning related parts, resources, competitors and business models, should be considered and prepared for successful technology commercialization. In this paper, we present a case study of new product commercialization based on a technology transferred from the public sector and discuss the various issues and considerations raised in the process. We believe that this case study clarifies what detailed activities and considerations are required in technology commercialization, and it should therefore be a good aid to companies which try to commercialize a new product with public sector technology. The rest of this paper is organized as follows: we first review the literature on commercialization models, then present the technology commercialization flow which we arrived at through trial and error, and finally conclude the paper with discussions derived from our case study.

## 2. LITERATURE REVIEW

Most technology commercialization models have several stages or phases, which include operational procedures, decision-making points, or stakeholders' views. This section reviews prior work on technology commercialization, followed by several concepts, such as technology readiness and business model development, that frame our case study.

Cooper (1990) proposed the Stage-Gate model as an idea-to-launch system to increase the efficiency of product development and its probability of success. The Stage-Gate model includes a conceptual and operational map from idea screening to product launch (Figure 1) (Cooper and Edgett, 2006). Procedures for new product development are assigned to the stages, each of which consists of a set of concurrent, cross-functional, proven and prescribed activities. Because each gate contains inputs, criteria and outputs, it determines Go/Kill decisions and prioritization from the input information: each gate decides whether or not projects or alternatives are suitable for the new product under consideration (Cooper et al., 2002). Decision making that requires a 'Go' or 'Kill' decision on alternatives is processed by gatekeepers, which requires cross-functional team staffing from the marketing, financial, development and production departments.

Jolly's model includes five major segments: Imagining, Incubating, Demonstrating, Promoting and Sustaining (Figure 2) (Jolly, 1997). These segments are the main elements of the product innovation process, and four bridges are needed to mobilize stakeholders' endorsement.
Jolly's model addresses commercialization processes well, but its complexity in moving from a prototype to a product can cause unclear understanding among the stakeholders. Trezona (2007) specified a technology innovation model composed of four journeys (technology, company, market and regulation journeys). Among the four journeys, the technology journey can be considered a general commercialization model from the viewpoint of technology development (Figure 4). The study suggested that firms should overcome the barriers addressed in each journey for successful technology commercialization, and it also stressed the interactions between the journeys for value creation.

TRLs were developed to support decisions concerning technology development and transfer and to provide a common understanding of technology maturity (US DoD, 2011). TRLs are a set of management metrics that enable the maturity assessment of a particular technology and a consistent maturity comparison between different types of technologies in the context of a specific system, application and operational environment (Mankins, 2009). The United States Department of Defense (US DoD) and the Automotive Council described manufacturability and producibility to communicate accomplished or expected stages of technology development and readiness for manufacture (MRL-Working-Group, 2011). Manufacturability refers to the characteristics considered in the design cycle, focusing on process capabilities, machine or facility flexibility and the overall ability to consistently produce at the required level of cost and quality. Producibility refers to the relative ease of producing an item that meets engineering, quality and affordability requirements.

A business model is designed at the initial stage of product design and development. Although the components of business models vary according to researchers (Timmers, 1998; Osterwalder and Pigneur, 2010; Amit and Zott, 2012), the customer value proposition is a common component. Thus, product development must start with finding customers' needs or proposing a new value to target customers (Osterwalder et al., 2014). The procedure of technology commercialization must consider business models as well as product development activities, and it then addresses the value positioning between customers' requirements and a product that meets those requirements. Based on this value positioning, the product can finally be developed at a proper cost.

## 3. CASE STUDY OF TECHNOLOGY COMMERCIALIZATION

In this chapter, we discuss the technology commercialization process we performed and customized during the project of developing a portable respirator. At first, we tried to follow the traditional technology commercialization processes introduced in the previous section, but we found many obstacles that we had to overcome to produce a commercial product with our technology transferred from the public sector. During the four-year journey to commercialize the transferred technology, we faced various unexpected issues concerning the transferred technology's stability, manufacturability, funding, market situation and customers, and the definition of distribution channels. In addition, we concluded that even if a new product is perfectly developed, its commercialization will fail if a business model, which defines a value chain from the proposition of customer value to the capture of that value, is not well established.
Therefore, we arrived at the following conceptual definition of technology commercialization success:

$$\text{Technology commercialization} \approx \text{new product development} \times \text{business model} \qquad (1)$$

Figure 5 shows the technology commercialization flow that we went through by trial and error. Our product development was supported by various government organizations, so our commercialization case includes the stakeholders in the product development process and defines decision-making criteria in a decision table for moving forward to the next step; the decision table is similar to the gates of the Stage-Gate model suggested by Cooper (2013). In addition, the stakeholder table shows the supporting agencies or funding sources which were involved in the product development (Figure 5 and Table 1). Each funding agency had its own decision points in its project support; for example, KIMM focused on field tests and prototype development, and KISTI was interested in product simulation and feasibility testing using a supercomputer. In order to increase production manufacturability and to reduce the number of assembly parts through recursive improvement, the parts were redesigned and tested through simulation and field tests. This flow for product commercialization was very iterative, and it reminded us of the importance of manufacturability, which had not been considered during the design phase.

### 3.1. Technology Search and Selection

One of the programs of the INNOPOLIS Foundation in South Korea is to transfer the research results of the public sector, such as the national research institutes and universities of DAEDUK Science Park, to the private sector for the purpose of technology commercialization. To select the technology to be transferred from the public sector, the following criteria were considered:

• Future promise of the technology: how much potential does the technology have to be developed further as a new or emerging technology?
• Maturity of the technology: how feasible is it to realize a target product with the technology?
• Compliance of the technology: how compatible is the technology with other existing technologies?
• Applicability of the technology: how widely can the technology be used to develop diverse products?

Our transferred technology was a portable chemical oxygen generator that releases no heat or toxic substances; it was developed by KRIBB (Korea Research Institute of Bioscience and Biotechnology), a public research institute. However, this technology was found to be at only TRL 3, and its oxygen generation ratio was relatively lower than that of other existing methods. Thus, we had to find an alternative technology with a higher TRL that was available for our product, and finally we selected a technology that uses a compressed oxygen bombe as the source of oxygen. These decision-making points affected our product specification and design in the later steps.

### 3.2. Product Definition, Product Portfolio Definition and Market Analysis

These three steps (product definition, product portfolio definition and market analysis) are iterative until a new product and its market are finally decided upon. Although Quality Function Deployment (QFD) could be used to define the concept of the product (Cohen, 1995), QFD has limitations unless all requirements of the product are clearly defined. Thus we developed several situation ideas in which our product could be used, according to time, place and occasion (Figure 6).
We extracted the primary and secondary functions of the product from the various situation ideas, and the specifications of a Minimum Viable Product (MVP) were then determined. The primary function of the MVP is to provide pure oxygen to protect people from toxic gas, and product portfolios could then be defined based on combinations of the secondary functions. The important requirements of the target customers (factory workers) were that the product be "wearable" or "portable" and that it "provide 10 minutes of duration time to escape from disaster sites". Building on these requirements, the configuration and performance of the product could be defined.

For the analysis of the target market, we tried two approaches: the first was to search prior business cases and the other was to define a customer value proposition. Products using oxygen were classified into gas masks and respirators. In the case of gas masks, filters are the key technology, and oxygen tanks are generally used to secure escape time. Respirators are used in the medical industry to maintain and assist patients' breathing using an electric oxygen generator. We found that 111 existing products were positioned in the market (Figure 7). Based on the result of the market analysis, a new product could be defined within the market as a personal portable respirator for emergencies such as fire or gas leakage. Because this niche market for our product had not yet been defined, there were no products similar to what we tried to release. Therefore, we needed to propose a new customer value proposition to the market. For this reason, the business situations and scenarios used for product definition were reused and evaluated. Table 2 shows the general characteristics of gas masks and oxygen respirators. The ideal solution among gas masks and respirators is to provide pure oxygen in any situation. A gas mask provides oxygen through gas filtering, while a respirator provides oxygen generated by an electric or chemical reaction. Thus, the customer value proposition of our new product was defined as providing pure oxygen, without heat or toxic substances, to prevent suffocation by gas or smog generated by a conflagration in a plant. Therefore, the requirements of the new product were defined as the following:

• Should provide pure oxygen during the golden time in any place
• Should be easy to carry and use
• Should have a simple structure
• Should not be expensive
• Should be integrated with other information and communication technologies

### 3.3. Business Model Design

The nine building blocks of the business model were discussed before designing our new product. The business model canvas, as shown in Figure 8, is a tool used to give stakeholders business insights (Osterwalder and Pigneur, 2010). Once the target customers and the customer value proposition of a product are derived through market analysis, an initial business model canvas can be designed. This canvas can then be revised and updated whenever the relationship between channels and target customers changes during product development. Finally, the customer value proposition was redefined according to a target customer segment, and the channel to approach the target customers was also redesigned. Figure 8 shows the details of the key factors for business model generation. For example, our target customers were set to workers in dangerous environments, and purchase managers and safety managers in multi-use facilities. The key channels for product sales were considered to be safety-related exhibitions, web sites and sales agents.
In addition, we defined the value proposition to customers as the functions that provide pure oxygen and help safe escape from disaster areas.

### 3.4. Product Design and Simulation

To develop a physical product, geometric design and functional interaction design should be considered simultaneously. Functional interaction design shows function structures and the interactions between functions, identifying useful functions and harmful functions based on the TRIZ (the Russian acronym for the theory of inventive problem solving) method (Savransky, 2000), while geometric design shows product configurations and geometric features that appeal to customers. First, based on the TRIZ functional interaction diagram analysis, we found that the elimination of exhaust gas caused by breathing was necessary to remove the harmful functions. Next, as the geometric design and the functional interaction design are interrelated, we introduced supercomputer simulation for the feasibility analysis of various product designs and their fluid volume fractions, with the help of the KISTI Supercomputing Center (Figure 9). Because a single technology cannot compose the whole of a product, other supplementary technologies related to the primary function must be defined for the complete product. We were able to define a product architecture of the target product with its various functions and their structure (Figure 10). Each system is composed of functional elements, which are then translated into a Bill of Technology (BoT), a concept similar to the Bill of Materials (BoM). Each functional element in a BoT can be replaced by other potential technologies that provide the same function, taking into consideration the technology level, implementation cost or relevance to other technologies. Therefore, a product portfolio can easily be defined by mixing alternative technologies in the BoT according to customer needs or product costs (a toy sketch of this idea is given at the end of this section).

### 3.5. Prototype Development and Field Test

Product architecture is the scheme by which the functional elements of the product are arranged into physical chunks and by which the chunks interact with each other. In this step, the performance specification must be defined through the customer's work environment, and the performance, operability and durability of the prototype must be tested through field tests. The critical performance factors were keeping a constant emission ratio and enabling workers to use the product easily in dangerous situations. Therefore, several types of design mock-ups and molds were designed based on the field test results. Then, some of the prototypes were dropped from the final alternatives.

### 3.6. Design of the Production Process and Mass Production

The important things in this step are the productivity and quality control of the final product. However, there are many hidden problems in mass production. The regulator is a very important assembly of our product, as it allows pressure control of the oxygen that comes out of the bombe. However, the complicated regulator structure made it difficult to assemble the parts and increased the assembly time and costs, thereby affecting the quality of the final product. Unfortunately, these kinds of manufacturing issues were not found during the prototype design. Thus, we had to redesign the structure of the regulator and perform field tests on the newly designed prototypes.
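Before the concluding section, here is a toy sketch (mine, not from the paper) of the Bill of Technology idea from Section 3.4: each functional element maps to alternative technologies, and every combination of one choice per element yields a candidate product configuration. The elements and technologies listed are illustrative assumptions.

```python
# Toy Bill of Technology: each functional element has alternative
# technologies; a portfolio is one choice per element. All names are
# illustrative, not taken from the paper.
from itertools import product

bot = {
    "oxygen supply":    ["compressed bombe", "chemical generator"],
    "pressure control": ["mechanical regulator"],
    "exhaust handling": ["one-way valve", "CO2 scrubber"],
}

portfolio = [dict(zip(bot, combo)) for combo in product(*bot.values())]
for variant in portfolio:
    print(variant)   # each variant is one candidate product configuration
```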
## 4. CONCLUDING REMARKS

A technology conceived for a product is the most necessary component of product development, but there are many other considerations throughout product development for the market, particularly in the case of public sector technology. In addition, a successful product release to the market cannot be guaranteed without designing a proper business model for the product and finding a technology with the proper technology readiness. Therefore, a technology commercialization process, serving as a product development roadmap, should be prepared for new product development. Prior models for technology commercialization have been proposed, but they did not work well for our technology, which was transferred from a public research institute at a low TRL. Through four years of new product development, we were able to release a respirator for disaster into the market. This paper traced a case of new product development and a commercialization process based on a patented technology transferred from a public research institute, and described the various issues and considerations raised in that process. From an academic standpoint, this research contributes a practical example that utilizes and extends existing technology commercialization models to commercialize a public sector technology. From an industrial perspective, our practical case study can be a useful reference for startup firms or those who attempt to commercialize a new technology from the public sector.

Next, we conclude from this case study that several considerations should be cross-checked whenever a new technology is adopted for product development. First, the assessment of TRL should be conducted by a third-party organization. As TRL is known to be an important factor in predicting the possibility of success of technology commercialization, specialized agencies should evaluate the technology independently and fairly. This is because if a new technology with a low TRL is transferred, it can increase the costs of reaching a complete product level (Belz, 2010). Second, a product development team should monitor and match customer needs throughout the process of technology commercialization. This provides new chances to find and enter new markets. Third, field tests should be carried out over long periods and under tough conditions that are very similar to those in actual situations. This assures the robustness of the product and provides customer satisfaction, which leads to survival in the market at the initial stage of product launch. Fourth, business use cases and early adopter groups should be secured, because it is hard to obtain references with a new product. Finally, we conclude that business models should be presented and shared among stakeholders to align the business goals of a new product.

## ACKNOWLEDGEMENTS

This research was supported by the Regional University Supporting Program of the National Research Foundation of Korea (2013R1A1A4A01008903) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1D1A1B07045768).

## Figure

Stage-Gate model (Cooper and Edgett, 2006).
Jolly's model for the commercialization of new technology (Jolly, 1997).
Goldsmith's commercialization model (Goldsmith, 1995).
Technology journey (Trezona, 2007).
Technology commercialization flow for respirators.
Example of situation idea.
Market analysis for respirators and gas masks.
Business model canvas for the new product.
Examples of product simulation by supercomputer.
Product architecture of the target product.

## Table

R&D fund for technology commercialization
Comparison between gas mask and oxygen respirator

## REFERENCES

1. Amit, R. and Zott, C. (2012), Creating value through business model innovation, MIT Sloan Management Review, 53(3), 41-49.
2. Belz, A. (2010), The McGraw-Hill 36-Hour Course: Product Development, McGraw-Hill Professional, US.
3. Cohen, L. (1995), Quality Function Deployment: How to Make QFD Work for You, Prentice Hall.
4. Cooper, R. G. (1990), Stage-gate systems: A new tool for managing new products, Business Horizons, 33(3), 44-54.
5. Cooper, R. G. (2013), New products: What separates the winners from the losers and what drives success, PDMA Handbook of New Product Development, 3-34.
6. Cooper, R. G. and Edgett, S. J. (2006), Stage-Gate® and the critical success factors for new product development, BP Trends, 1-6.
7. Cooper, R. G., Edgett, S. J., and Kleinschmidt, E. J. (2002), Optimizing the stage-gate process: What best-practice companies do-II, Research-Technology Management, 45(6), 43-49.
8. Cummings, J. L. and Teng, B. S. (2003), Transferring R&D knowledge: The key factors affecting knowledge transfer success, Journal of Engineering and Technology Management, 20(1-2), 39-68.
9. Goldsmith, H. R. (1995), A model for technology commercialization, Proc. Mid-Continent Regional Technology Transfer Centre Affiliate, NASA Johnson Space Centre, Houston.
10. Jolly, V. K. (1997), Commercializing New Technologies: Getting from Mind to Market, Harvard Business Press, Boston, Massachusetts.
11. Kassicieh, S. K. and Radosevich, H. R. (2013), From Lab to Market: Commercialization of Public Sector Technology, Springer Science & Business Media.
12. MRL-Working-Group (2011), Manufacturing Readiness Level (MRL) Deskbook.
13. Mankins, J. C. (2009), Technology readiness assessments: A retrospective, Acta Astronautica, 65(9-10), 1216-1223.
14. Osterwalder, A. and Pigneur, Y. (2010), Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers, John Wiley & Sons.
15. Osterwalder, A., Pigneur, Y., Bernarda, G., and Smith, A. (2014), Value Proposition Design: How to Create Products and Services Customers Want, John Wiley & Sons.
16. Park, J. B., Cho, Y. A., Lee, S. K., Sung, Y. Y., and Kwan, Y. K. (2011), Promoting technology commercialization in the Korean private sector, Issue Paper, Seoul, KIET.
17. Savransky, S. D. (2000), Engineering of Creativity: Introduction to TRIZ Methodology of Inventive Problem Solving, CRC Press, USA.
18. Schilling, M. A. and Hill, C. W. (1998), Managing the new product development process: Strategic imperatives, Academy of Management Executive, 12(3), 67-81.
19. Timmers, P. (1998), Business models for electronic markets, Electronic Markets, 8(2), 3-8.
20. Trezona, R. (2007), The Carbon Trust Directed Research approach, Available from: http://www.ukerc.ac.uk/Downloads/PDF/07/0711BioenergyCT/0711Trezona.pdf.
21. US DoD (United States Department of Defense) (2011), Technology Readiness Assessment (TRA) Guidance, Revision Posted, 13.
2019-02-20 21:32:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3305934965610504, "perplexity": 3246.4698790734046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247496694.82/warc/CC-MAIN-20190220210649-20190220232649-00185.warc.gz"}
https://codereview.stackexchange.com/tags/configuration/hot
# Tag Info

15 That looks crazy. But some constructive comments: Did you check for the ssh-copy-id script that is usually shipped with openssh? It solves the first part of your problem for you in a standard way. The second problem is in my eyes non-existent. Do you really need your authorized_keys in all accounts? Isn't it enough for one user that can do sudo? The third is ...

11 I see some things that may help you improve your program. Use all of the required #includes The function strncpy is used but its declaration is in #include <cstring> which is not actually in the list of includes. Don't expose class internals The sections member data is a private data member, which is fine and appropriate, but then the get_sections()...

10 Interface I'm not overly fond of the interface you've defined to the function. I think trying to combine reading integers and reading strings into a single function makes it more difficult to use. For most C code, I think the old guiding principle of UNIX ("do one thing, and do it well") provides excellent guidance. As such, I'd probably have two ...

8 I would use something like perl -lane '@h{@F[1..$#F]}=()if/^Host\b/;END{$,=" ";print keys %h}' -- file or perl -lane '@h{ @F[ 1 .. $#F ] } = () if /^Host\b/; END { $, = " "; print keys %h; }' -- file -l removes newlines from input and adds them to prints -n runs the code for each line of the input -...

7 Here are some things that may help you improve your program. Use a state machine The approach you currently have uses bool variables keeping track of things such as whether a section has been found, whether a character is the last one, and weaves the interpretation into a long series of nested if statements. That approach may work for small, simple grammars ...

7 This version seems much improved. Good job! There are still a few things that might be further improved, which I list below. Avoid defining types that aren't used The config.hpp code currently contains this line: typedef std::pair<std::string, std::string> keyvalue; However, that type isn't actually used anywhere within the header. It's only ...

7 Just declare a (static readonly) dictionary: var values = new Dictionary<(DatabaseType, DomainType), DatabaseConfigType>() { { (DatabaseType.Type1, DomainType.Domain1), DatabaseConfigType.Type1Domain1 } }; Filled with all possible combinations. Then your function will be reduced to: DatabaseConfigType GetDatabaseConfiguration(DatabaseType ...

7 That's nutty, trying to address a server by a changing IP address. Reconfiguring the wiki to adjust to it is even crazier. The standard practice for adapting to dynamic IP addresses is to use a dynamic DNS service. Then, everything else works normally. Pick a hostname, like afuna.noip.com. Run a DDNS client to register your hostname whenever you acquire a ...

7 You are trying to recreate functionality that is already provided out of the box. Reference: Configure an ASP.NET Core App. From the documentation, it supports providers for INI, JSON, and XML. Each configuration value maps to a string key. There's built-in binding support to deserialize settings into a custom POCO object (a simple .NET class with properties)...

7 __func__ Unlike __FUNCTION__, __func__ is standard C since C99. Some compilers might not support C99 and therefore in those compilers you have to use whatever they provide, but if your compiler supports C99, you should use __func__. If you have a crappy compiler, think about using a better one. magic numbers What is a magic number, and why is it bad? ...

6 Looks good to me.
A couple remarks: section* get_section(const std::string& sectionname); std::list<section>& get_sections(); Not so sure it is a good idea to return mutable pointers/references to the data, since changing a section in memory has no effect on the underlying file representation. It could lead to misunderstandings. It is ...

6 Assuming you already have a class called Enchant (should be a superclass of your enchantments), you could use an enum to organize all of your enchantments: public enum Enchantments { MULTI_FIRE(1, "multifirearrow") { Enchant getEnchant() { return new MultiFireArrow(); } }, BOLT(2, "bolt") { Enchant getEnchant() { return new Bolt(); } ...

6 Use $(...) always There's really no good reason today to use ..., always use the modern $(...) instead. Avoid flags of echo The flags of echo, such as -n, don't work consistently in all systems, so it's better to avoid them when possible. If you really don't want to print a newline, you could use printf, though it's not POSIX compliant. Checking that a ...

6 Definitely go with Method 1: There is no custom code to support. ConfigParser is part of the standard Python library; any programmer can easily look up its documentation to see how it works, if they weren't already familiar with it. URL normalization has no business being part of the configuration file parser. Including such normalization would violate ...

6 Notes: Quote your variables. Ref: Security implications of forgetting to quote a variable in bash/POSIX shells. Don't use ALLCAPS varnames. It's too easy to overwrite important shell variables like PATH. A2ENSITE=$A2A2ENSITE$HOST".conf" -- I don't see the A2A2ENSITE variable anywhere; this is perhaps a corollary of the ALLCAPS vars problem: they can be hard ...

6 Your program crashes when it reads a line from the parameters file that doesn't contain an = sign. The name of the macro TEXT is misleading. It doesn't contain text but a filename. Whenever you output an error message, it belongs on stderr instead of stdout. To do this, replace printf( with fprintf(stderr,.
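As a minimal sketch (mine, not part of the answers above) of the out-of-the-box approach recommended in the "Definitely go with Method 1" answer; the section and key names are made up:

```python
# Reading settings with the standard-library ConfigParser; the section
# and keys here are made up for illustration.
from configparser import ConfigParser

parser = ConfigParser()
parser.read_string("""
[server]
host = example.org
port = 8080
""")

host = parser.get("server", "host")
port = parser.getint("server", "port")   # typed accessor
print(host, port)
```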
5
Style is always in the eye of the beholder. For a lot of cases you will get differing answers; for the problem at hand - parsing a file to produce an instance of a class representing the information - there are a lot of different ways, some coming down to taste. To help decide between some of them, it's always good to look at how it will be used. This review ...

5
"Configuration is usually accessed statically" - I have a different impression. Configuration is accessed in a way that is convenient in terms of maintainability and testing. And static references do not help in those areas. It is common to register settings in DI containers and either inject them directly or using some sort of wrapper (probably what you have ...

5
OpenChakraLinkPlusSaveNext() is a very long function, with its logic for incrementing, loading, and saving state scattered and duplicated all over the place. Functions should adhere to the Single Responsibility Principle, and do one thing only. Another problem is that the implementation is strongly tied to configparser. As a result, you need to put int() ...

5
Do you think this service is useful? I don't think it's possible to answer that question with the toy example given. IMO it doesn't shed any light on how you intend to use it for things like committing things to the database, saving files, sending emails, reading files, executing queries and and and... public FeatureService(ILogger<FeatureService>...

4
I don't have a whole lot to add, as this is pretty straightforward and seems like a completely reasonable way to do what you're trying to do. In regards to your questions: There's a tradeoff for using accessor methods for properties. Every time you add a new one, you need to add a constant to the Key struct, and add an accessor method to the Configuration ...

4
What kind of things are you looking to improve? Your solution looks pretty straightforward. The only problem with an ini file is that ColdFusion can't instantly/natively parse it, whereas XML/JSON can be parsed directly into a ColdFusion Struct. <cfscript> public struct function loadini( required string configFile) { var stResult = {}; var ...

4
I generally don't like to add a second answer, but the magnitude of what I'm going to suggest is a lot greater than what I proposed previously, which doesn't appear to be enough. I pulled the entire source off of GitHub, and what's really missing from your question is how you are "injecting" these dependencies. Specifically, you are newing a lot of stuff up in ...

4
I think a couple of Role interfaces would really help here. You need to do two things with your configuration: load it and save it. So: public interface IConfigurationReader { Configuration ReadConfiguration(); } public interface IConfigurationWriter { void WriteConfiguration<T>(T configuration); } Obviously those names could be improved. You ...

4
Pardon my lack of Python knowledge, but I think a function something like this might work: def fatal_error(message): log.error(message) app_exit(1) And now your catch cases look more like this: try: root = get_root(xmlfile) except ET.ParseError as e: fatal_error("Error while parsing {0}: {1}".format(xmlfile, e)) except IOError as e: ...

4
Generally a parser can be modeled as a state machine. Think about what different expressions your grammar consists of. I don't know the INI spec, but here is a suggestion for a section header: maybe_whitespace - zero or more tabs and/or spaces; identifier - one or more (is there a maximum?)
alphanumeric ASCII characters; open_section_header - a "[" ...
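To make the state-machine suggestion in that last answer concrete, here is a minimal Python sketch of a recognizer for just the section-header grammar it outlines; the function name, the exact character classes, and the handling of trailing whitespace are illustrative assumptions, not taken from any of the reviewed code:

```python
def parse_section_header(line: str):
    """Return the section name, or None if the line is not a '[name]' header."""
    state = "maybe_whitespace"
    name = []
    for ch in line.rstrip("\n"):
        if state == "maybe_whitespace":
            if ch in " \t":
                continue                      # zero or more tabs and/or spaces
            elif ch == "[":
                state = "identifier"          # open_section_header seen
            else:
                return None                   # not a section header at all
        elif state == "identifier":
            if ch.isalnum():
                name.append(ch)               # one or more alphanumeric chars
            elif ch == "]" and name:
                state = "done"                # close_section_header seen
            else:
                return None                   # illegal character in identifier
        elif state == "done":
            if ch not in " \t":
                return None                   # junk after the closing bracket
    return "".join(name) if state == "done" else None
```

With this, parse_section_header("[database]") returns "database", while a key=value line falls through to None, so a surrounding loop can dispatch on the result instead of juggling boolean flags.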
2020-12-01 14:34:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20562316477298737, "perplexity": 2505.379067241388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141674594.59/warc/CC-MAIN-20201201135627-20201201165627-00296.warc.gz"}
https://math.stackexchange.com/questions/1847126/implicit-differentiation-of-a-two-variables-function
# Implicit differentiation of a two-variable function

A function $z=z(x,y)$ is given implicitly by the equation $$f\left(\frac{x}{y},\frac{z}{x^{\lambda}}\right)=0$$ where $\lambda\in\mathbb{R},\lambda\neq0$. I have to show that if $f(u,v)$ is differentiable and $$\frac{\partial f}{\partial v}(u,v)\neq0$$ then $$x\frac{\partial z}{\partial x}+y\frac{\partial z}{\partial y}=\lambda z$$ I tried to do this using the Implicit Function Theorem: $$\frac{\partial z}{\partial x}=-\displaystyle\dfrac{\frac{\partial f}{\partial x}}{\frac{\partial f}{\partial z}}$$ So calling $$F(x,y)=f\left(\frac{x}{y},\frac{z}{x^{\lambda}}\right)$$ and applying the chain rule, I got: $$\frac{\partial F}{\partial x}=\frac{\partial f}{\partial x}\cdot\frac{\partial x}{\partial x}+\frac{\partial f}{\partial y}\cdot\frac{\partial y}{\partial x}$$ so $$\frac{\partial F}{\partial x}=\frac{\partial f}{\partial x}\cdot\frac{1}{y}-\frac{\partial f}{\partial y}\cdot\frac{\lambda z}{x^{\lambda+1}}$$ But I don't know if this is the right way and, if it really is, what am I supposed to do from now on? I did all this trying to find some expression for $\frac{\partial f}{\partial x}$, but I don't know if it worked.

• Some observations: (i) You should use the fact that $F(x,y)=0$ for all $x,y$ [currently you are ignoring that fact], (ii) It is better to write "$\frac{\partial f}{\partial u}$" rather than "$\frac{\partial f}{\partial x}$" to avoid confusion and to be consistent with the notation of the question [in particular, your $\frac{\partial x}{\partial x}$ is not quite correct], (iii) you forgot that differentiating $zx^{-\lambda}$ with respect to $x$ must use the product rule and that is where you get derivatives of $z$ in the picture. Can you solve the problem now? – Michael Jul 3 '16 at 3:32
• Also, you can get rid of the formula "$\frac{\partial z}{\partial x} = -\frac{\partial f/ \partial x}{\partial f/\partial z}$" (I'm not sure what $\partial f /\partial x$ is even intended to mean). – Michael Jul 3 '16 at 3:35
• @Michael I fixed it, but does it equal 0 (the last expression)? – mvfs314 Jul 3 '16 at 4:04
• I do not understand your comment above. What did you fix? What equals 0? If you can solve your own question now, one method is to answer your own question below. – Michael Jul 3 '16 at 15:55
• I don't know what to do.
– mvfs314 Jul 3 '16 at 18:39 $$df=\frac{\partial f}{\partial u}du+\frac{\partial f}{\partial v}dv=0$$ where $$du=d(x/y)=\frac{1}{y}dx-\frac{x}{y^2}dy$$ $$dv=d(z/x^\lambda)=\frac{1}{x^\lambda}dz-\frac{\lambda z}{x^{\lambda+1}}dx$$ We know that $$dz=\frac{\partial{z}}{\partial x}dx+\frac{\partial{z}}{\partial y}dy$$ So $$dv=d(z/x^\lambda)=\frac{1}{x^\lambda}\frac{\partial{z}}{\partial x}dx+\frac{1}{x^\lambda}\frac{\partial{z}}{\partial y}dy-\frac{\lambda z}{x^{\lambda+1}}dx$$ Now $df$ becomes $$df=\frac{\partial f}{\partial u}\bigg( \frac{1}{y}dx-\frac{x}{y^2}dy\bigg)+\frac{\partial f}{\partial v}\bigg(\frac{1}{x^\lambda}\frac{\partial{z}}{\partial x}dx+\frac{1}{x^\lambda}\frac{\partial{z}}{\partial y}dy-\frac{\lambda z}{x^{\lambda+1}}dx \bigg)=0$$ This equation is always satisfied when $$\frac{1}{y}dx-\frac{x}{y^2}dy=0$$ $$\frac{1}{x^\lambda}\frac{\partial{z}}{\partial x}dx+\frac{1}{x^\lambda}\frac{\partial{z}}{\partial y}dy-\frac{\lambda z}{x^{\lambda+1}}dx=0$$ From the first condition it follows that $$dy=\frac{y}{x}dx$$ Replacing this into the second condition $$\frac{1}{x^\lambda}\frac{\partial{z}}{\partial x}dx+\frac{1}{x^\lambda}\frac{\partial{z}}{\partial y}\frac{y}{x}dx-\frac{\lambda z}{x^{\lambda+1}}dx=0$$ $$\bigg(\frac{1}{x^\lambda}\frac{\partial{z}}{\partial x}+\frac{1}{x^\lambda}\frac{\partial{z}}{\partial y}\frac{y}{x}-\frac{\lambda z}{x^{\lambda+1}}\bigg)dx=0$$ $$\frac{1}{x^\lambda}\frac{\partial{z}}{\partial x}+\frac{1}{x^\lambda}\frac{\partial{z}}{\partial y}\frac{y}{x}-\frac{\lambda z}{x^{\lambda+1}}=0$$ $$x\frac{\partial{z}}{\partial x}+y\frac{\partial{z}}{\partial y}=\lambda z$$
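A quick sanity check, not part of the original thread: since $\partial f/\partial v \neq 0$, the relation can locally be solved as $z = x^{\lambda}\,g(x/y)$ for some differentiable $g$, and this form satisfies the identity directly:
$$\frac{\partial z}{\partial x}=\lambda x^{\lambda-1}g+\frac{x^{\lambda}}{y}g',\qquad \frac{\partial z}{\partial y}=-\frac{x^{\lambda+1}}{y^{2}}g'$$
$$x\frac{\partial z}{\partial x}+y\frac{\partial z}{\partial y}=\lambda x^{\lambda}g+\frac{x^{\lambda+1}}{y}g'-\frac{x^{\lambda+1}}{y}g'=\lambda z$$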
2020-12-05 02:03:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9587870836257935, "perplexity": 173.80449214145554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141746033.87/warc/CC-MAIN-20201205013617-20201205043617-00622.warc.gz"}
http://cms.math.ca/10.4153/CMB-2001-005-7
location:  Publications → journals → CMB Abstract view Normal Subloops in the Integral Loop Ring of an $\RA$ Loop Published:2001-03-01 Printed: Mar 2001 • Edgar G. Goodaire • César Polcino Milies Features coming soon: Citations   (via CrossRef) Tools: Search Google Scholar: Format: HTML LaTeX MathJax PDF PostScript Abstract We show that an $\RA$ loop has a torsion-free normal complement in the loop of normalized units of its integral loop ring. We also investigate whether an $\RA$ loop can be normal in its unit loop. Over fields, this can never happen. MSC Classifications: 20N05 - Loops, quasigroups [See also 05Bxx] 17D05 - Alternative rings 16S34 - Group rings [See also 20C05, 20C07], Laurent polynomial rings 16U60 - Units, groups of units
2014-07-26 15:11:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21129110455513, "perplexity": 13618.880270051217}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997902579.5/warc/CC-MAIN-20140722025822-00052-ip-10-33-131-23.ec2.internal.warc.gz"}
https://www.wintherscoming.no/cmb/milestone3.php
# Overview: Milestone III

The topic of the third milestone of this project is the evolution of structure in the universe: how did small fluctuations in the baryon-photon-dark-matter fluid grow from shortly after inflation until today? The ultimate goal of this part is to construct two-dimensional functions - of time and Fourier scale, $x$ and $k$ - for each of the main physical quantities of interest, $\Phi(x,k)$, $\Psi(x,k)$, $\delta_{\rm CDM}(x,k)$, $\delta_b(x,k)$, $v_{\rm CDM}(x,k)$, $v_b(x,k)$, $\Theta_\ell(x,k)$. The deliverables are the following:

• In the paper, define the quantities to be computed and give a short description of the algorithms used
• One plot for each of the physical quantities as a function of $x$, for three different $k$'s within the interval of interest. For the radiation quantities, plot only the first two multipoles (i.e. the energy density $\delta_\gamma = 4\Theta_0$ and the velocity $v_\gamma = -3\Theta_1$). Choose values of $k$ such that each of the three main regimes is shown: large scales (i.e., unaffected by causal physics), small scales (i.e., early oscillations with subsequent damping), and intermediate scales (i.e., scales that have just undergone a few oscillations)
• A transcript of the module written for the evaluations

The project splits into two branches, one for Ph.D. students and one for Master students. The difference is that Ph.D. students also have to consider neutrinos and polarization, while Master students only need to take into account photons, baryons and dark matter, and only temperature fluctuations. So make sure to choose the one appropriate for you. (But of course, if you are a Master student and want to go for the more advanced problem, you are more than welcome to do so.) There might be typos in the equations here, so make sure you double-check with those in Callin (2006). A short introduction to the milestone can be found in the PDF (keynote) shown below.

# Theoretical background

See the lecture notes for the theory you should know for this milestone. During the lectures, we have derived (or will derive) the linearized Einstein and Boltzmann equations for photons, baryons and dark matter, and their respective inflationary initial conditions. The task of the current part of the project is to solve these equations numerically. The good news is that the numerical solution of these equations follows exactly the same path as when solving, for instance, the Peebles equation or the equation for the conformal time. The bad news is that the expressions for the equations are somewhat more complicated. But if one is just a little careful about typing these in correctly, everything should work just fine.

But before we write down the equations, there are a few issues that should be pointed out. First, at early times the optical depth, $\tau$, is very large. This means that electrons at a given place only observe temperature fluctuations that are very nearby. This, in turn, implies that they will only see very smooth fluctuations, since the full system is in thermodynamic equilibrium, and all gradients are efficiently washed out. The only relevant quantities in this regime are therefore 1) the monopole, $\Theta_0$, which measures the mean temperature at the position of the electron, 2) the dipole, $\Theta_1$, which is given by the velocity of the fluid due to the Doppler effect, and 3) the quadrupole, $\Theta_2$, which is the only relevant source of polarization signals. The regime where this is the case is called tight coupling.
At later times, though, the fluid becomes thinner, and the electrons start seeing further away, and then become sensitive to higher-order multipoles, $\Theta_\ell$. Fortunately, because of a very nice computational trick due to Zaldarriaga and Seljak called line of sight integration, we only need to take into account a relatively small number of these (six is enough), and so the system of relevant equations is still tractable. Note that before 1996 or so, people actually included thousands of variables, to trace multipoles for the full range. Needless to say, this was slow, and other approximations were required.

A second issue is the very large value of $\tau^{\prime}$ at early times, which multiplies $(3\Theta_1 + v_b)$ in the Boltzmann equations. The latter factor is very small early on, and the product of the two is therefore numerically extremely unstable. The result is that the standard Boltzmann equation set is completely unstable if one simply implements the full expressions at early times. The solution to this problem is to use a proper approximation for $(3\Theta_1 + v_b)$ at early times. See the appendix for a derivation.

### The full system

Photon temperature multipoles: \begin{align} \Theta^\prime_0 &= -\frac{ck}{\mathcal{H}} \Theta_1 - \Phi^\prime, \\ \Theta^\prime_1 &= \frac{ck}{3\mathcal{H}} \Theta_0 - \frac{2ck}{3\mathcal{H}}\Theta_2 + \frac{ck}{3\mathcal{H}}\Psi + \tau^\prime\left[\Theta_1 + \frac{1}{3}v_b\right], \\ \Theta^\prime_\ell &= \frac{\ell ck}{(2\ell+1)\mathcal{H}}\Theta_{\ell-1} - \frac{(\ell+1)ck}{(2\ell+1)\mathcal{H}} \Theta_{\ell+1} + \tau^\prime\left[\Theta_\ell - \frac{1}{10}\Pi \delta_{\ell,2}\right], \quad\quad 2 \le \ell \lt \ell_{\textrm{max}} \\ \Theta_{\ell}^\prime &= \frac{ck}{\mathcal{H}} \Theta_{\ell-1}-c\frac{\ell+1}{\mathcal{H}\eta(x)}\Theta_\ell+\tau^\prime\Theta_\ell, \quad\quad \ell = \ell_{\textrm{max}}\\ \end{align}

Photon polarization multipoles: \begin{align} \Theta^\prime_{P0} &= -\frac{ck}{\mathcal{H}}\Theta_{P1} + \tau^\prime\left[\Theta_{P0} - \frac{1}{2}\Pi \right]\\ \Theta_{P\ell}^\prime &= \frac{\ell ck}{(2\ell+1)\mathcal{H}} \Theta_{\ell-1}^P - \frac{(\ell+1)ck}{(2\ell+1)\mathcal{H}} \Theta_{\ell+1}^P + \tau^\prime\left[\Theta_\ell^P - \frac{1}{10}\Pi\delta_{\ell,2}\right],\quad\quad 1 \le \ell \lt \ell_{\textrm{max}} \\ \Theta_{P,\ell}^\prime &= \frac{ck}{\mathcal{H}} \Theta_{\ell-1}^P-c\frac{\ell+1}{\mathcal{H}\eta(x)}\Theta_\ell^P+\tau^\prime\Theta_\ell^P, \quad\quad \ell = \ell_{\textrm{max}}\\ \end{align}

Neutrino multipoles: \begin{align} \mathcal{N}^\prime_0 &= -\frac{ck}{\mathcal{H}} \mathcal{N}_1 - \Phi^\prime, \\ \mathcal{N}^\prime_1 &= \frac{ck}{3\mathcal{H}} \mathcal{N}_0 - \frac{2ck}{3\mathcal{H}}\mathcal{N}_2 + \frac{ck}{3\mathcal{H}}\Psi \\ \mathcal{N}^\prime_\ell &= \frac{\ell ck}{(2\ell+1)\mathcal{H}} \mathcal{N}_{\ell-1} - \frac{(\ell+1)ck}{(2\ell+1)\mathcal{H}}\mathcal{N}_{\ell+1},\quad\quad 2 \le \ell \lt \ell_{\textrm{max},\nu} \\ \mathcal{N}_{\ell}^\prime &= \frac{ck}{\mathcal{H}} \mathcal{N}_{\ell-1}-c\frac{\ell+1}{\mathcal{H}\eta(x)}\mathcal{N}_{\ell}, \quad\quad \ell = \ell_{\textrm{max},\nu}\\ \end{align}

Cold dark matter and baryons: \begin{align} \delta_{\rm CDM}^\prime &= \frac{ck}{\mathcal{H}} v_{\rm CDM} - 3\Phi^\prime \\ v_{\rm CDM}^\prime &= -v_{\rm CDM} -\frac{ck}{\mathcal{H}} \Psi \\ \delta_b^\prime &= \frac{ck}{\mathcal{H}}v_b -3\Phi^\prime \\ v_b^\prime &= -v_b - \frac{ck}{\mathcal{H}}\Psi + \tau^\prime R(3\Theta_1 + v_b) \\ \end{align}

Metric perturbations: \begin{align} \Phi^\prime
&= \Psi - \frac{c^2k^2}{3\mathcal{H}^2} \Phi + \frac{H_0^2}{2\mathcal{H}^2} \left[\Omega_{\rm CDM 0} a^{-1} \delta_{\rm CDM} + \Omega_{b 0} a^{-1} \delta_b + 4\Omega_{\gamma 0} a^{-2}\Theta_0 + 4\Omega_{\nu 0}a^{-2}\mathcal{N}_0\right] \\ \Psi &= -\Phi - \frac{12H_0^2}{c^2k^2a^2}\left[\Omega_{\gamma 0}\Theta_2 + \Omega_{\nu 0}\mathcal{N}_2\right] \\ \end{align}

In the equations above $\Pi = \Theta_2 + \Theta_0^P + \Theta_2^P$ and $R = \frac{4\Omega_{\gamma 0}}{3\Omega_{b 0} a}$ (note that our $R$ is $1/R$ in Dodelson). Note that only one of the potentials is dynamical - $\Psi$ follows from $\Phi$, so you don't have to solve for it. If you are a Master student you don't have to implement neutrinos and polarization. In that case just ignore the neutrino and polarization equations and put $\mathcal{N}_\ell = 0$ and $\Theta^P_\ell = 0$ in all the other equations above.

### The tight coupling regime

In the tight coupling regime, the only differences are 1) that one should only include $\ell = 0$ and 1 for $\Theta_\ell$ (and none for polarization) - all higher moments are given by those, and 2) that the expressions for $\Theta'_1$ and $v_b'$ are quite a bit more involved (see the appendix for how to derive this): \begin{align} q &= \frac{-[(1-R)\tau^\prime + (1+R)\tau^{\prime\prime}](3\Theta_1+v_b) - \frac{ck}{\mathcal{H}}\Psi + (1-\frac{\mathcal{H}^\prime}{\mathcal{H}})\frac{ck}{\mathcal{H}}(-\Theta_0 + 2\Theta_2) - \frac{ck}{\mathcal{H}}\Theta_0^\prime}{(1+R)\tau^\prime + \frac{\mathcal{H}^\prime}{\mathcal{H}} - 1}\\ v_b^\prime &= \frac{1}{1+R} \left[-v_b - \frac{ck}{\mathcal{H}}\Psi + R(q + \frac{ck}{\mathcal{H}}(-\Theta_0 + 2\Theta_2) - \frac{ck}{\mathcal{H}}\Psi)\right]\\ \Theta^\prime_1 &= \frac{1}{3} (q - v_b^\prime) \end{align}

In the tight coupling regime, we get the same expressions for the higher-order photon moments as given by the initial conditions, \begin{align} \Theta_2 &= \left\{ \begin{array}{l} -\frac{8ck}{15\mathcal{H}\tau^\prime} \Theta_1, \quad\quad \textrm{(with polarization)} \\ -\frac{20ck}{45\mathcal{H}\tau^\prime} \Theta_1, \quad\quad \textrm{(without polarization)} \end{array}\right. \\ \Theta_\ell &= -\frac{\ell}{2\ell+1} \frac{ck}{\mathcal{H}\tau'} \Theta_{\ell-1}, \quad\quad \ell \gt 2\\ \Theta_0^P &= \frac{5}{4}\Theta_2 \\ \Theta_1^P &= -\frac{ck}{4\mathcal{H}\tau'}\Theta_2 \\ \Theta_2^P &= \frac{1}{4}\Theta_2 \\ \Theta_\ell^P &= -\frac{\ell}{2\ell+1} \frac{ck}{\mathcal{H}\tau'} \Theta_{\ell-1}^P, \quad\quad \ell \gt 2 \end{align}

### Initial conditions

The initial conditions are given by: \begin{align} \Psi &= -\frac{1}{\frac{3}{2} + \frac{2f_\nu}{5}}\\ \Phi &= -(1+\frac{2f_\nu}{5})\Psi \\ \delta_{\rm CDM} &= \delta_b = -\frac{3}{2} \Psi \\ v_{\rm CDM} &= v_b = -\frac{ck}{2\mathcal{H}} \Psi\\ &\text{Photons:}\\ \Theta_0 &= -\frac{1}{2} \Psi \\ \Theta_1 &= +\frac{ck}{6\mathcal{H}}\Psi \\ \Theta_2 &= \left\{ \begin{array}{l} -\frac{8ck}{15\mathcal{H}\tau^\prime} \Theta_1, \quad\quad \textrm{(with polarization)} \\ -\frac{20ck}{45\mathcal{H}\tau^\prime} \Theta_1, \quad\quad \textrm{(without polarization)} \end{array}\right.
\\ \Theta_\ell &= -\frac{\ell}{2\ell+1} \frac{ck}{\mathcal{H}\tau^\prime} \Theta_{\ell-1}\\ &\text{Photon Polarization:}\\ \Theta_0^P &= \frac{5}{4} \Theta_2 \\ \Theta_1^P &= -\frac{ck}{4\mathcal{H}\tau'} \Theta_2 \\ \Theta_2^P &= \frac{1}{4}\Theta_2 \\ \Theta_\ell^P &= -\frac{\ell}{2\ell+1} \frac{ck}{\mathcal{H}\tau^\prime} \Theta_{\ell-1}^P \\ &\text{Neutrinos:}\\ \mathcal{N}_0 &= -\frac{1}{2} \Psi \\ \mathcal{N}_1 &= +\frac{ck}{6\mathcal{H}}\Psi \\ \mathcal{N}_2 &= -\frac{c^2k^2 a^2 (\Phi+\Psi)}{12H_0^2\Omega_{\nu 0}}\\ \mathcal{N}_\ell &= \frac{ck}{(2\ell+1)\mathcal{H}} \mathcal{N}_{\ell-1}, \quad\quad \ell \ge 3 \end{align} where $f_{\nu} = \frac{\Omega_{\nu 0}}{\Omega_{\gamma 0} + \Omega_{\nu 0}}$. If you don't include neutrinos, then set $f_\nu = 0$. Note that $\Psi$ is not a dynamical variable in the code and is only used here to set the rest of the variables. Since the equation system is linear, we are free to choose the normalization of $\Psi$ as we want when we solve it (the normalization can be done in the end). The particular normalization we use here is such that when we in the next milestone are to compute power-spectra, then $\Psi_{\rm true}^2 = \Psi_{\rm ours}^2 \mathcal{P}_\mathcal{R}$ where $\mathcal{P}_\mathcal{R}$ is the usual curvature perturbation power-spectrum, the perturbations set up by inflation, $\mathcal{P}_\mathcal{R}(k) = A_s(k/k_{\rm pivot})^{n_s-1}$, with $A_s,n_s,k_{\rm pivot}$ (primordial amplitude, spectral index and the pivot scale) being the same parameters as are standard in the literature (and used in codes like CAMB and CLASS).

# What you have to do

Implement a class/module (Perturbations.h if you use the C++ template) that takes in a BackgroundCosmology and RecombinationHistory object - the ones you created in the previous two milestones - and use this to evolve the perturbations of the Universe. "All" you have to do is to set up the initial conditions, make the function that sets the right hand side of the coupled ODE system, and then solve it and store the solution. However, one thing that complicates it is that when integrating the perturbations you can't just integrate the full system directly - it is unstable due to the tight coupling between photons and baryons in the very early Universe. You therefore have to start off solving the tight coupling system: here we only have to include two photon multipoles $\Theta_0$ and $\Theta_1$ (and no polarization multipoles). Once tight coupling ends, you have to switch to the full system. When doing this you will have to set the initial conditions from the tight coupling solution, plus you have to give a value to the multipoles we have in the full system but don't include in the tight-coupling regime (the values of these are given by the same relations as we have in the initial conditions, e.g. $\Theta_2$ is given by the value of $\Theta_1$). This means you will need to make two functions that set initial conditions for your ODE vector - one at the start and one after tight coupling ends - and you will have to implement two functions that set the right hand side of the ODE system - one for tight coupling and one for the full system. Once you have this, you will have to make a vector of $k$-values for which we will solve the system and integrate the perturbations from the start till today for all these $k$-values. You have to extract and store the results of the ODE and spline $f(x,k)$ for all of the quantities ($\delta_b, v_b, \Phi,\Theta_0$ etc.) that are required in the line-of-sight integrals.
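The course template is C++, but the control flow just described is compact enough to sketch; the Python outline below is only an illustration, and set_ic, set_ic_after_tc, rhs_tight_coupling, rhs_full and x_end_tight_coupling are hypothetical helpers standing in for the pieces you implement yourself:

```python
from scipy.integrate import solve_ivp

def integrate_perturbations(k_values, x_start, x_today):
    """Integrate each k-mode: tight coupling first, then the full system."""
    results = {}
    for k in k_values:
        x_tc = x_end_tight_coupling(k)  # switching time; see the appendix criterion

        # Regime 1: tight coupling (only Theta_0 and Theta_1 among the multipoles).
        y0 = set_ic(x_start, k)
        tc = solve_ivp(lambda x, y: rhs_tight_coupling(x, k, y),
                       (x_start, x_tc), y0, method="BDF",  # stiff system
                       dense_output=True)

        # Regime 2: full system; the extra multipoles (Theta_2, ...) are seeded
        # from the tight-coupling solution with the same relations as the ICs.
        y0_full = set_ic_after_tc(x_tc, k, tc.y[:, -1])
        full = solve_ivp(lambda x, y: rhs_full(x, k, y),
                         (x_tc, x_today), y0_full, method="BDF",
                         dense_output=True)

        results[k] = (tc, full)  # afterwards: spline each quantity as f(x, k)
    return results
```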
The vector of the perturbations that we integrate will have between $10$ and $30$ or so elements, depending on how many multipoles we include (and this vector will be different in the two regimes). This means it's very important to keep track of which index in the vector corresponds to which quantity. For example, if we don't have polarization or neutrinos, then one possible way to do this is to place (for the full system): $\delta_{\rm CDM}$ is $[0]$, $v_{\rm CDM}$ is $[1]$, $\delta_b$ is $[2]$, $v_b$ is $[3]$, $\Phi$ is $[4]$ and $\Theta_\ell$ is $[5+\ell]$ for $\ell = 0,1,\ldots,\ell_{\rm max}$, for a total of $6+\ell_{\rm max}$ components. And in the tight coupling regime we just have $7$. You can choose to do this "by hand", however it's easy to make a mistake. A much better way, less prone to making index mistakes (plus it would make it much easier to add new components if you needed to do that), is to precompute indices for the different components (in both regimes) and have variables that tell us what the index of each component is. Thus to access $\Phi$ we write something like y[index_Phi]. In the code template I have added such a system with examples for how to use it, but do what you think is best (and understand!).

The most useful thing for you is probably going to be comparing to the plots shown below, but we can also do some more quantitative checks by comparing to analytical solutions. We don't have exact analytical solutions to the full equation set - it's way too complicated for that - however we can derive some analytical approximations for some of the quantities in certain regimes that we can use to check that we are computing things correctly. Here are just some simple examples:

• We can check that the temperature multipoles are integrated correctly by comparing them with the analytical approximation we derived in the lectures. In its very simplest form we have (before recombination) $\Theta_0 + \Psi \propto \cos(k\eta/\sqrt{3})$. This is not perfect, but you should find similar looking oscillations. A full analytical approximation is derived in Hu & Sugiyama (1995), which is good to $\sim 10-20\%$.
• We can check that the gravitational potential is solved correctly by comparing it to analytical expectations (see Dodelson or Baumann for expressions). For example, one simple and concrete test: if we make a run with just matter and radiation, then on super-horizon scales (e.g. $k \lesssim 0.001/$Mpc) the gravitational potential in the radiation dominated regime is related to the gravitational potential in the matter dominated regime via $\Phi_{\rm matter-era} = \frac{9}{10}\Phi_{\rm radiation-era}$. We also have $\Phi \approx \Phi_{\rm ini} \frac{\sin(y) -y \cos(y)}{y^3/3}$ with $y = k\eta/\sqrt{3}$ for small scale modes that enter the horizon in the radiation era ($k\gtrsim 0.1/$Mpc), and we expect $\Phi \approx$ constant in the matter era.
• We can check that the dark matter perturbations are integrated correctly by comparing to analytical approximations (the Meszaros equation, see Dodelson or Baumann for equations). For example, on subhorizon scales $k\gg \mathcal{H}$ we should have $\delta \propto a = e^x$ in the matter era and $\delta\propto \log(a)$ (so basically frozen) in the radiation era. Another useful analytical approximation is that the growth rate of matter perturbations satisfies $f \equiv \frac{1}{\delta}\frac{d\delta}{dx} \approx \Omega_M(x)^{0.55}$ in the matter era.

Figure 1: Here are some plots to compare your results to.
We show the evolution of the perturbations for a toy cosmology with parameters $\Omega_{b 0} = 0.05$, $\Omega_{\rm CDM 0} = 0.45$, $\Omega_{\Lambda 0} = 0.5$, $\Omega_{\nu 0} = 0$, $Y_p = 0$ and $h=0.7$. Polarization and neutrinos are not included. You can also run my code online here to get results to compare to (NB: cannot guarantee there are no bugs in that code). For the baryon overdensity and velocity we plot the absolute value above (as it can be negative).

# Appendix

### Appendix: Tight coupling time

For any given $k$ you should switch from the tight coupling equations to the full system when $\left|\frac{d\tau}{dx}\right| < 10 \cdot \text{min}(1, \frac{ck}{\mathcal{H}})$, and no later than the start of recombination ($z \sim 2000$).

### Appendix: Tight coupling equations

The aim here is to derive a (numerically stable) equation for $[3\Theta_1 + v_b]$ in the tight coupling regime. The equations for $\Theta_1$ and $v_b$ are given by $$3\Theta^\prime_1 = \frac{ck}{\mathcal{H}}\left( \Theta_0 - 2\Theta_2 + \Psi \right) + \tau^\prime\left[3\Theta_1 + v_b\right], \\ v_b^\prime = -v_b - \frac{ck}{\mathcal{H}}\Psi + \tau^\prime R[3\Theta_1 + v_b]$$ Summing these gives us $$[3\Theta_1 + v_b]^\prime = \frac{ck}{\mathcal{H}}( \Theta_0 - 2\Theta_2) + \tau^\prime(1+R)\left[3\Theta_1 + v_b\right] - v_b\tag{A}$$ Taking the derivative of this expression, using $R^\prime = - R$, gives us $$[3\Theta_1 + v_b]^{\prime\prime} = -\frac{ck}{\mathcal{H}}\frac{\mathcal{H}^\prime}{\mathcal{H}}( \Theta_0 - 2\Theta_2) + \frac{ck}{\mathcal{H}}( \Theta_0 - 2\Theta_2)^\prime + (\tau^{\prime\prime}(1+R) - R\tau^\prime)\left[3\Theta_1 + v_b\right] + (\tau^\prime(1+R) - 1)\left[3\Theta_1 + v_b\right]^\prime + 3\Theta_1^\prime$$ where we have used $-v_b^\prime = -[3\Theta_1 + v_b]^\prime + 3\Theta_1^\prime$ to get rid of the $-v_b^\prime$ term. Substituting in the equation for $\Theta_1^\prime$ in this last term, we arrive at $$[3\Theta_1 + v_b]^{\prime\prime} = (\tau^{\prime\prime}(1+R) + (1-R)\tau^\prime)\left[3\Theta_1 + v_b\right] + (\tau^\prime(1+R) - 1)\left[3\Theta_1 + v_b\right]^\prime + \frac{ck}{\mathcal{H}}\left( \Theta_0 - 2\Theta_2 + \Psi + \Theta_0' - 2\Theta_2' - \frac{\mathcal{H}^\prime}{\mathcal{H}}(\Theta_0 - 2\Theta_2)\right)$$ Now comes the approximation. In the tight coupling regime one can show that, to a good approximation, $(3\Theta_1 + v_b) \propto \frac{1}{\tau'} \propto \eta$ since $\tau' \propto \frac{1}{a}$ and $\eta \propto a$ in a radiation dominated Universe. This means that $\frac{d^2}{d\eta^2}(3\Theta_1 + v_b) \approx 0$, or in terms of $x$ $$[3\Theta_1 + v_b]^{\prime\prime} \approx -\frac{\mathcal{H}^\prime}{\mathcal{H}}[3\Theta_1 + v_b]^{\prime}$$ Using this approximation in the expression above and solving for $q\equiv \left[3\Theta_1 + v_b\right]^\prime \to \Theta_1' = \frac{q-v_b'}{3}$, we arrive at $$q = \frac{-(\tau^{\prime\prime}(1+R) + (1-R)\tau^\prime)\left[3\Theta_1 + v_b\right] - \frac{ck}{\mathcal{H}}\left(\Psi + \Theta_0' - 2\Theta_2' + (1- \frac{\mathcal{H}^\prime}{\mathcal{H}})(\Theta_0 - 2\Theta_2)\right)}{\tau^\prime(1+R) + \frac{\mathcal{H}^\prime}{\mathcal{H}} - 1}$$ Further, we can show that $\Theta_2^\prime \sim 3\Theta_2 \ll \Theta_0$, so this term can be ignored.
Finally, using (A) to solve for $\tau^\prime\left[3\Theta_1 + v_b\right]$ we get $$\tau^\prime\left[3\Theta_1 + v_b\right] = \frac{q - \frac{ck}{\mathcal{H}}( \Theta_0 - 2\Theta_2) + v_b}{1+R}$$ which can be substituted into the equation for $v_b^\prime$ to give us $$v_b^\prime = \frac{1}{1+R} \left[-v_b - \frac{ck}{\mathcal{H}}\Psi + R(q + \frac{ck}{\mathcal{H}}(-\Theta_0 + 2\Theta_2) - \frac{ck}{\mathcal{H}}\Psi)\right]$$

### Appendix: Parallelizing the integration

The integration is the thing that will take most of the time. If the integration is slow and you want to make it go faster, here is one very 'easy' way to speed it up by parallelizing it. Most computers these days have more than one core that can do work, so we can take advantage of that. The loop over $k$ where we integrate can be done in parallel, and adding this to the code is fairly easy: add the compiler flag -fopenmp and before the loop add an OpenMP tag telling the compiler to do it in parallel:

    #pragma omp parallel for schedule(dynamic, 1)
    for(int i = 0; i < n_k_values; i++){
      //... do some work
      //... do some work
      //... do some work
    }

That's it. But... you will have to be very careful with what you do inside that loop or things can go very wrong. You have to make sure that the multiple threads you start don't write to the same place in memory - if that happens, things will go wrong. If you want to try this, now is the time to go and read an introduction to OpenMP so that you understand what is going on. And for debugging (does not work on a Mac) you can add the compiler flag -fsanitize=thread and it will check for errors. In any case: if you want to try this, then only do this when you have made it work. That way you have something to compare to and you won't waste time you don't have.
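For completeness, the switching criterion from the first appendix is simple to express in code; this sketch (in Python, with dtaudx and Hp assumed to be the splines from the previous milestones, everything else illustrative) is one way to write it:

```python
import numpy as np

def x_end_tight_coupling(k, x_grid, dtaudx, Hp, c=2.99792458e8):
    """First x on the grid where tight coupling ends: |dtau/dx| drops below
    10 * min(1, ck/Hp), and never later than the start of recombination
    (z ~ 2000, i.e. x = ln a = -ln(1 + z))."""
    x_rec = -np.log(1.0 + 2000.0)
    for x in x_grid:
        if x >= x_rec:
            break
        if abs(dtaudx(x)) < 10.0 * min(1.0, c * k / Hp(x)):
            return x
    return x_rec
```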
2022-01-16 10:00:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 8, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.99958735704422, "perplexity": 746.9164426161685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299852.23/warc/CC-MAIN-20220116093137-20220116123137-00354.warc.gz"}
https://community.wolfram.com/groups/-/m/t/1855586?sortMsg=Replies
# Avoid issue while calculating with units?

Posted 2 years ago

(in) UnitConvert[Quantity[2.6, "Hours"], "Minutes"]
(out) UnitConvert[QuantityUnits`Private`ToQuantity[QuantityUnits`Private`UnknownQuantity[2.6, "Hours"]], "Minutes"]

I do not understand this output and I cannot find an answer anywhere in the documentation. Any help will be greatly appreciated.

3 Replies

Posted 2 years ago
Hi Fred, This works fine for me: UnitConvert[Quantity[2.6, "Hours"], "Minutes"] (* Quantity[156., "Minutes"] *) on "12.0.0 for Mac OS X x86 (64-bit) (April 7, 2019)". What version are you running?

Posted 2 years ago
That works for me now, too. All the calculations with units work for me now, but they didn't yesterday. Not sure why.
2021-12-01 01:12:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22437141835689545, "perplexity": 9452.964956079193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359082.76/warc/CC-MAIN-20211130232232-20211201022232-00006.warc.gz"}
https://apboardsolutions.guru/ap-6th-class-maths-bits-chapter-6/
Practice the AP 6th Class Maths Bits with Answers Chapter 6 Basic Arithmetic on a regular basis so that you can attempt exams with utmost confidence.

AP State Syllabus 6th Class Maths Bits 6th Lesson Basic Arithmetic with Answers

I. Choose the correct answer and write it in the brackets.

Question 1. Comparison of two numbers by division with the same units is called …………….
A) Ratio B) Proportion C) Percentage D) None
Answer: A) Ratio

Question 2. The ratio of a and b is denoted by …………….
A) b : a B) $$\frac{b}{a}$$ C) a : b D) a × b
Answer: C) a : b

Question 3. In the ratio a : b, a is called ………………
A) consequent B) antecedent C) ratio D) proportion
Answer: B) antecedent

Question 4. If a : b = c : d, then they are said to be in ……………
A) ratio B) equal C) proportion D) none
Answer: C) proportion

Question 5. a : b = c : d can be read as ……………..
A) a : b to c : d B) a : b is as c : d C) c : d to a : b D) c : d is as a : b
Answer: B) a : b is as c : d

Question 6. If a : b = c : d, then ………………
A) a.c = b.d B) a.b = c.d C) bd = ac D) a.d = b.c
Answer: D) a.d = b.c

Question 7. If a, b, c, d are in proportion, then a and d are called ……………….
A) Extremes B) Means C) Proportion D) None
Answer: A) Extremes

Question 8. Percent means ………………
A) Out of ten B) Out of two hundred C) Out of hundred D) Out of thousand
Answer: C) Out of hundred

Question 9. Equivalent ratio of a : b is …………….
A) b : a B) a : 2b C) 2a : b D) 4a : 4b
Answer: D) 4a : 4b

Question 10. Simplest form of 25 : 30 is ………………..
A) 2 : 3 B) 2 : 5 C) 5 : 6 D) 5 : 3
Answer: C) 5 : 6

Question 11. If three apples cost Rs. 60, then the cost of 5 apples is Rs ……………..
A) 40 B) 80 C) 60 D) 100
Answer: D) 100

Question 12. 4 times of 8 : 3 = ……………..
A) 32 : 3 B) 32 : 12 C) 8 : 12 D) 3 : 8
Answer: B) 32 : 12

Question 13. 1 cm : 20 mm = ………………
A) 1 : 20 B) 1 : 2 C) 20 : 1 D) 2 : 1
Answer: B) 1 : 2

Question 14. The ratio between 5 kg and 500 gms is ………………
A) 10 : 1 B) 1 : 100 C) 5 : 500 D) 1 : 10
Answer: A) 10 : 1

Question 15. Which of the following quantities are in proportion?
A) 3, 4, 16, 12 B) 12, 4, 3, 16 C) 4, 16, 3, 12 D) 16, 3, 12, 4
Answer: C) 4, 16, 3, 12

Question 16. In a class of 45 students, the number of girls is 15; then find the ratio of the number of boys to that of girls ………………….
A) 2 : 1 B) 3 : 1 C) 4 : 1 D) 5 : 1
Answer: A) 2 : 1

Question 17. If Swathi earns ₹ 42,000 per month and her expenditure is ₹ 28,000 per month, then what is the ratio of her savings and expenditure?
A) 2 : 1 B) 2 : 3 C) 1 : 3 D) 1 : 2
Answer: D) 1 : 2

Question 18. Fractional form of 30% is ……………….
A) $$\frac{30}{10}$$ B) $$\frac{30}{100}$$ C) $$\frac{300}{100}$$ D) $$\frac{100}{30}$$
Answer: B) $$\frac{30}{100}$$

Question 19. 1% = ………………
A) 0.01 B) 1.00 C) 0.1 D) 1.01
Answer: A) 0.01

Question 20. 25% of 200 is …………….
A) 100 B) 150 C) 50 D) 200
Answer: C) 50

Question 21. 8 hours as percent of 3 days is ……………..
A) 11$$\frac{1}{9}$$% B) 9$$\frac{1}{11}$$% C) $$\frac{8}{3}$$% D) $$\frac{100}{3}$$%
Answer: A) 11$$\frac{1}{9}$$%

Question 22. 0.35 as percent ………………
A) $$\frac{35}{100}$$% B) $$\frac{35}{100}$$ C) $$\frac{7}{10}$$% D) 35%
Answer: D) 35%

Question 23. The population of a village is 10,600; of this, 20% are school going children, then the number of children in the village …………
A) 2120 B) 1060 C) 3180 D) 4240
Answer: A) 2120

Question 24. Simplest fractional form of 50% is ………………
A) $$\frac{1}{5}$$ B) $$\frac{5}{1}$$ C) $$\frac{1}{2}$$ D) $$\frac{2}{1}$$
Answer: C) $$\frac{1}{2}$$

Question 25. Balu spends 75% of his monthly income. If he saves ₹ 5000 per month, then his monthly income ……………
A) ₹ 10,000 B) ₹ 15,000 C) ₹ 50,000 D) ₹ 20,000
Answer: D) ₹ 20,000

II. Fill in the blanks.

1. a : b is read as ………………
Answer: a is to b

2.
Another form of a : b is …………………
Answer: $$\frac{a}{b}$$

3. In a : b, b is called ……………..
Answer: consequent

4. The equality of ratios is called ……………….
Answer: proportion

5. The symbol :: (is as) introduced by ………………..

6. If a, b, c, d are in proportion, b and c are called ………………….
Answer: Means

7. If a, b, c, d are in proportion, the product of extremes = ……………..
Answer: Product of means

8. The method in which first we find the value of one unit and then the value of the required number of units is known as ………………..
Answer: Unitary method

9. The meaning of percentage in Latin is ……………….
Answer: Out of one hundred

10. The symbol for percentage is ………………..
Answer: %

11. Aditya and Kishore bought 20 pencils. Out of them, Kishore took 5 pencils. The ratio of pencils with Kishore & Aditya is ……………..
Answer: 1 : 3

12. 30 min : 1 hr = ……………….
Answer: 1 : 2

13. The simplest form of 180 : 45 is ………………
Answer: 4 : 1

14. Weight of Sreekari is 20 kgs. Swathi's weight is thrice that of Sreekari. The ratio of the weights of Sreekari and Swathi is ……………..
Answer: 1 : 3

15. Decimal form of 20% is ……………
Answer: 0.2

16. What is the ratio between 24 green balls and 27 blue balls? ……………..
Answer: 8 : 9

17. An ornament weighs 45 gms. It is a mixture of gold and copper in the ratio 7 : 2. The weight of copper is ……………….
Answer: 10 gms

18. Aditya runs 12 kms in 45 minutes; then he runs a distance of ……………… kms in 15 minutes.
Answer: 4

19. In a map, if 1 cm = 200 km, then 20 cm = …………….. kms.
Answer: 4000 kms

20. Ratio of 1 hour and 15 minutes is ……………….
Answer: 4 : 1

21. The percentage of multiples of 5 among the numbers from 1 to 50 is ………………
Answer: 20%

22. 35% as decimal …………………
Answer: 0.35

23. 15% of 240 = ……………..
Answer: 36

24. Siva secured 75% marks in maths. If the maximum marks in the paper are 80, the marks secured by Siva ……………….
Answer: 60

25. ₹ 40,000 was shared between Ram and Syam in the ratio 1 : 3, then Ram's share is …………………
Answer: ₹ 10,000/-

III. Match the following:

A)
1) In a : b, a is called — a) Product of means
2) In a : b, b is called — b) Product of extremes
3) If a : b : : c : d, then a × d = — c) Antecedent
4) If a : b : : c : d, then a.d is called — d) b.c
5) If a : b : : c : d, then b.c is called — e) Consequent

Answers:
1) In a : b, a is called – c) Antecedent
2) In a : b, b is called – e) Consequent
3) If a : b : : c : d, then a × d = – d) b.c
4) If a : b : : c : d, then a.d is called – b) Product of extremes
5) If a : b : : c : d, then b.c is called – a) Product of means

B)
1) 25% in the simplest form — a) 0.06
2) 6% in decimal form — b) 72
3) 0.46 decimal as percent — c) $$\frac {1}{4}$$
4) 1$$\frac {1}{2}$$ fraction as percent — d) 46%
5) 30% of 240 = — e) 150

Answers:
1) 25% in the simplest form – c) $$\frac {1}{4}$$
2) 6% in decimal form – a) 0.06
3) 0.46 decimal as percent – d) 46%
4) 1$$\frac {1}{2}$$ fraction as percent – e) 150
5) 30% of 240 = – b) 72
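As a quick worked check of Question 25 from Part I (the steps are mine, not part of the original answer key):
$$\text{savings} = 100\% - 75\% = 25\%, \qquad 25\% \text{ of income} = ₹\,5000 \;\Rightarrow\; \text{income} = 5000 \times \frac{100}{25} = ₹\,20{,}000$$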
2023-03-30 04:56:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7595751881599426, "perplexity": 5054.547664047406}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949097.61/warc/CC-MAIN-20230330035241-20230330065241-00491.warc.gz"}
http://mathematica.stackexchange.com/questions/288/how-do-you-use-ssh-keys-instead-of-a-password-to-run-a-remote-kernel-over-ssh?answertab=active
# How do you use ssh-keys instead of a password to run a remote-kernel over ssh?

On all of the servers that I regularly interact with, I have ssh-keys set up for passwordless access via ssh. Yet, every time I attempt to start a remote kernel over ssh, I get asked for a password despite having an ssh key agent up and running. What do I need to add to the kernel configuration, such as the launch command, to have it use my ssh-key? I'm using version 8.0.4 on MacOS 10.6.8, if it makes a difference.

Mathematica by default uses its own ssh implementation. You can see it in the remote kernel configuration dialog in the advanced options: java -jar mathssh. As far as I know, you can safely replace that with the local ssh command (most likely /usr/bin/ssh). You have to select the "Advanced Options" radio button to do that (if you first add all the standard options, the rest of the command is already filled in correctly).

Edit: By default, Mathematica uses the launch command

java -jar "mathssh" user@hostname math -mathlink -LinkMode Connect -LinkProtocol TCPIP -LinkName "linkname" -LinkHost ipaddress

to invoke ssh, where user and hostname are filled in via text boxes above. To use the local ssh command, you need to change the above command to

ssh user@hostname "math -mathlink -LinkMode Connect -LinkProtocol TCPIP -LinkName linkname -LinkHost ipaddress"

This appears to have one flaw: killing the remote kernel via the front-end no longer kills the processes on the remote server. So, that will have to be done by hand.
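One further note (generic OpenSSH usage, not taken from Mathematica's documentation): if the key for that host is not in the default location, the standard -i option can be added to the replaced launch command, e.g. ssh -i ~/.ssh/id_rsa user@hostname "math -mathlink ...", so that the right identity file is offered without a password prompt.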
2014-08-30 02:32:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23277397453784943, "perplexity": 3906.9606382793095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500833715.76/warc/CC-MAIN-20140820021353-00412-ip-10-180-136-8.ec2.internal.warc.gz"}
http://codeforces.com/problemset/problem/581/B
B. Luxurious Houses

time limit per test: 1 second
memory limit per test: 256 megabytes
input: standard input
output: standard output

The capital of Berland has n multifloor buildings. The architect who built up the capital was very creative, so all the houses were built in one row. Let's enumerate all the houses from left to right, starting with one. A house is considered to be luxurious if the number of floors in it is strictly greater than in all the houses with larger numbers. In other words, a house is luxurious if the number of floors in it is strictly greater than in all the houses which are located to the right of it. In this task it is assumed that the heights of floors in the houses are the same.

The new architect is interested in n questions, the i-th of them is about the following: "how many floors should be added to the i-th house to make it luxurious?" (for all i from 1 to n, inclusive). You need to help him cope with this task. Note that all these questions are independent from each other — the answer to the question for house i does not affect other answers (i.e., the floors are not actually added to the houses).

Input

The first line of the input contains a single number n (1 ≤ n ≤ 10^5) — the number of houses in the capital of Berland. The second line contains n space-separated positive integers h_i (1 ≤ h_i ≤ 10^9), where h_i equals the number of floors in the i-th house.

Output

Print n integers a_1, a_2, ..., a_n, where number a_i is the number of floors that need to be added to house number i to make it luxurious. If the house is already luxurious and nothing needs to be added to it, then a_i should be equal to zero. All houses are numbered from left to right, starting from one.

Examples

Input
5
1 2 3 1 2

Output
3 2 0 2 0

Input
4
3 2 1 4

Output
2 3 4 0
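The statement reduces to a single right-to-left scan; the following Python sketch (an illustrative solution, not an official editorial) keeps the running maximum of the suffix and outputs max(0, suffix_max - h_i + 1) for each house:

```python
import sys

def solve():
    data = sys.stdin.read().split()
    n = int(data[0])
    h = list(map(int, data[1:1 + n]))

    ans = [0] * n
    suffix_max = 0                       # tallest house strictly to the right
    for i in range(n - 1, -1, -1):
        ans[i] = max(0, suffix_max - h[i] + 1)
        suffix_max = max(suffix_max, h[i])
    print(" ".join(map(str, ans)))

solve()
```

On the first sample (h = 1 2 3 1 2) this prints 3 2 0 2 0, matching the expected output, and the scan is O(n), well within the limits.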
2020-08-11 23:25:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46103614568710327, "perplexity": 643.2987078096478}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738855.80/warc/CC-MAIN-20200811205740-20200811235740-00091.warc.gz"}
http://cpr-condmat-statmech.blogspot.com/2013/07/13077356-jens-boberski-et-al.html
## Evolution of the force distributions in jammed packings of soft particles    [PDF] Jens Boberski, M. Reza Shaebani, Dietrich E. Wolf The evolution of the force distributions during the isotropic compression of two dimensional packings of soft frictional particles is investigated numerically. Regardless of the applied deformation, the normal contact force distribution $P(f_n)$ can be fitted by the product of a power-law, and a stretched exponential, while the tangential force distribution $P(f_t)$ is well fitted by a Gaussian. With increasing strain, both $P(f_n)$ and $P(f_t)$ exhibit a broadening, while, when scaled with the average forces, their widths decrease. Thus, a more homogeneous force network is observed for packings under large deformation. Furthermore, the distribution of friction mobilization $P(\eta)$ is a decreasing function of $\eta=|f_t|/(\mu f_n)$, except for an increased probability of fully mobilized contacts ($\eta=1$). The excess coordination number of the packings increases with the applied strain, indicating that the more a packing is compressed the more stable it becomes. View original: http://arxiv.org/abs/1307.7356
2018-04-24 08:19:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6445806622505188, "perplexity": 1817.097794557156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946578.68/warc/CC-MAIN-20180424080851-20180424100851-00428.warc.gz"}
https://stacks.math.columbia.edu/recent-comments
Comments 1 to 20 out of 3989 in reverse chronological order. On Dario Weißmann left comment #4223 on Remark 55.6.10 in Crystalline Cohomology Concerning the definition of the maps $\nabla:M\otimes \Omega^i \to M\otimes \Omega^{i+1}$: Several $\otimes$ should be $\wedge$, i.e., On Dario Weißmann left comment #4222 on Lemma 55.6.3 in Crystalline Cohomology Concerning the second paragraph of the proof. I suggest replacing "We will show this by induction ...." (till the end of the paragraph) by the shorter (and easier) direct calculation: The claim is clear for $n=1$. Assume $n>1$. Note that $\delta_i(z-y)$ lies in $(K\cap J(1))^{[2]}$ for $i>1$. Calculating modulo $K^2 + (K\cap J(1))^{[2]}$ we have The claim follows. On left comment #4221 on Lemma 54.70.8 in Étale Cohomology Most important and interesting post you have shared with us. I would recommend everyone to read your posts to get interesting ideas. Thanks for sharing On Dario Weißmann left comment #4219 on Remark 55.13.2 in Crystalline Cohomology Typo in the definition of the multiplication: $f \omega_2'+f'\omega_2'$ has too many $'$s There is also an issue how the multiplication formula is displayed. It overlaps with the sidebar. Works fine in the pdf version though. On left comment #4217 on Lemma 54.78.7 in Étale Cohomology (And I gave a short proof of the fact that I don't know how to count...) On left comment #4216 on Lemma 54.78.7 in Étale Cohomology 4th sentence of the proof: sch -> such. On Frank left comment #4215 on Section 22.23 in Differential Graded Algebra It seems better to clarify the second property of graded tensor product: the $p,q$ in $x\in M^p,y\in N^q$ are different from those in the direct sum $\bigoplus_{p+q=n}$. In other words, here $p+q$ is only assumed to be no greater than $n$. It seems better to choose different names for them. On David Holmes left comment #4214 on Lemma 60.4.6 in Properties of Algebraic Spaces I came here to make the same comment as Wessel, but then I saw Wessel's comment, so I won't (maybe I just did...). So maybe count this as another vote for changing it?? On 羽山籍真 left comment #4213 on Lemma 62.9.1 in Decent Algebraic Spaces For the last sentence of the proof, $V$ has been used to denote an open of $Y$, so it's not a subspace of $Z$ indeed, maybe replace it by $V\prime$? On Sean Cotner left comment #4212 on Lemma 34.20.16 in Descent Small point: after replacing S by f(X), you still use the assumption that S is affine. I think this can be fixed by instead base changing to an arbitrary affine open contained in f(X), after which the rest of the proof goes through. On Aaron left comment #4211 on Lemma 10.29.5 in Commutative Algebra I think it should be: $\text{Spec}(S)\to\text{Spec}(R)$ hits all the minimal primes. On Che Shen left comment #4210 on Section 30.14 in Divisors 0B3P follows immediately from 01R3 (1). Maybe we can use this as proof instead of "omitted"? On Zhiyu Zhang left comment #4209 on Section 48.16 in Local Cohomology Kunz theorem is pretty good, will other properties about Frobenius action (and singularity) be added in the future? For example, Remark 13.6. in Ofer Gabber. Notes on some t-structures. In Geometric aspects of Dwork theory. Vol. I, II, pages 711–734. Walter de Gruyter GmbH & Co. KG, Berlin, 2004. shows that F-finiteness on a noetherian ring $A$ with $p=0$ will imply $A$ is quotient by a regular ring hence the existence of dualizing complex. The proof is very short and elegant. 
On 羽山籍真 left comment #4208 on Lemma 32.41.3 in Varieties The second last paragraph: "we obtain a specialization \eta' to t_j with \eta' \in X^0". Here \eta' is in T^0 (not X^0), right? On HayamaKazuma(羽山籍真) left comment #4207 on Lemma 32.41.1 in Varieties (2)Is it be f|_{U_i}: U_i \rightarrow Y ? On slogan_bot left comment #4206 on Lemma 15.50.7 in More on Algebra Suggested slogan: "Henselization of a ring inherits good properties of formal fibers" On Pierre left comment #4205 on Section 10.130 in Commutative Algebra In lemma 00RW, at some point the conclusion is that $\Omega _{S \otimes _ R S/S} = \Omega _{S/R} \otimes _ S (S \otimes _ R S)$ which is said to follow from lemma 00RV, but in lemma 00RV the tensor product is over the base change $R\to S$, and in the 00RW the tensor product is not over the base change but over the induced map $S\to S\otimes_R S$. Is that a problem or am I missing something? On Nicolas Müller left comment #4204 on Lemma 54.80.8 in Étale Cohomology I have a hard time understanding how Lemma 0959 is used here to prove that $j_!f_* f^{-1}\mathcal{G} = \overline{f}_* j'_!f^{-1}\mathcal{G}$. Instead, wouldn't it be better to combine Propositions 03QP and 03S5 to show that both sides agree on the stalks? Then one also doesn't need to argue that the diagram is cartesian. (Also, at least in the preview, the LaTeX doesn't render properly in this comment.) On left comment #4203 on Lemma 10.95.1 in Commutative Algebra In Lemma 81.4.5 in Section 81.4 the topology on the submodule $K$ is the topology inherited from $M$ (it is given by the submodules $K \cap I^nM$). But in the current Section 10.95 there is no mention whatsoever of topologies or completion with respect to any topology. We are just considering $I$-adic completion straight up. Maybe we should be a little bi more careful in the statement of Lemma 81.4.5. Thanks for the comment. On left comment #4202 on Section 10.96 in Commutative Algebra @#4200 Well, it may be that we can shorten the proof at the end there, but flat surjections of rings aren't necessarily isomorphisms.
2019-05-20 07:41:07
{"extraction_info": {"found_math": true, "script_math_tex": 32, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.96629399061203, "perplexity": 1656.0055606854928}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255773.51/warc/CC-MAIN-20190520061847-20190520083847-00532.warc.gz"}
http://www.newton.ac.uk/seminar/20191029143015301
The lower tail of the KPZ equation via a Riemann-Hilbert approach Presented by: Tom Claeys Université Catholique de Louvain Date: Tuesday 29th October 2019 - 14:30 to 15:30 Venue: INI Seminar Room 1 Abstract: Fredholm determinants associated to deformations of the Airy kernel are closely connected to the solution to the Kardar-Parisi-Zhang (KPZ) equation with narrow wedge initial data, and they also appear as largest particle distribution in models of positive-temperature free fermions. I will explain how logarithmic derivatives of the Fredholm determinants can be expressed in terms of a $2\times 2$ Riemann-Hilbert problem, and how we can use this to derive asymptotics for the Fredholm determinants. As an application of our result, we derive precise lower tail asymptotics for the solution of the KPZ equation with narrow wedge initial data which refine recent results by Corwin and Ghosal.
2020-01-26 20:04:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46782881021499634, "perplexity": 629.0127007453435}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690379.95/warc/CC-MAIN-20200126195918-20200126225918-00374.warc.gz"}
http://openstudy.com/updates/505a0cf8e4b0cc122893813b
## moser90 2 years ago Use the center, vertices, and asymptotes to graph the hyperbola. (x - 1)^2 - 9(y - 2)^2 = 9 1. moser90 2. moser90 I am totally stuck; if someone can help me find the center, I can figure out the rest 3. moser90 is the center (0,0)? 4. theEric 5. moser90 I am just really confused because there is nothing on the bottom for a denominator 6. theEric I'm not sure myself, but it looks like it's (1,2) for the center. 7. waleed_imtiaz First divide the whole equation by 9... (x - 1)^2/9 - (y - 2)^2 = 1. Now you know a = 3 and b = 1, so the foci are at (±c, 0) relative to the center, because the transverse axis is horizontal. Can you do it now? 8. moser90 so the center is (1,3) 9. theEric $\frac{9}{1}=\frac{1}{\frac{1}{9}}=\frac{1}{(\frac{1}{3})^2}$ 10. moser90 so this would make it the last picture right 11. moser90 this one 12. theEric If the center is (1,2) that is your only option! 13. theEric Did you check out the link? http://tutorial.math.lamar.edu/Classes/Alg/Hyperbolas.aspx 14. moser90 yes 15. waleed_imtiaz centre would be (1,2) i think so 16. theEric Well that's two of us. I say it's a good bet. 17. moser90 sometimes the pictures are just hard to go by 18. moser90 but we know that it is not (0,0) or (-1,2) so the last one is the best 19. theEric When you look at formulas that have something like $(x+h)$, you often want x to be modified only by addition or subtraction. If x is multiplied or divided by anything, get it out of the parentheses! Anything multiplied or divided by $(x+h)$ is then something that can really be expressed as just division if you want. By doing so, the equation you have will start to match up with the general formula for the shape of the curve. 20. moser90 thank you 21. theEric We are sure it is (1,2) when we compare it to the general formula. The position of the center of any shape can be found when you see how all x's and all y's are modified with addition or subtraction. This seems to "shift" graphs. 22. theEric You're welcome! :) Back to the "shifting": if you have y = (x), then y = (x+5) looks to be shifted 5 to the left. The same goes for y = 9(x+5) compared with y = 9(x). 23. theEric When you look at $(x-h)^2$, $(x-h)$ is how you are modifying x. You are finding the difference between them with subtraction. Then that difference is squared, so only the difference between them matters, not at all whether x>h or x<h. 24. theEric Lastly, for your future typed-up math discussions, it helps to express "to the power of 2" as "^2". It's a very common notation used on the internet.
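For reference, here is the standard-form algebra the thread circles around (a worked check added here, not part of the original discussion): $$(x-1)^2 - 9(y-2)^2 = 9 \iff \frac{(x-1)^2}{9} - \frac{(y-2)^2}{1} = 1,$$ so the center is $(1,2)$, $a = 3$, $b = 1$, the vertices are $(1 \pm 3,\, 2)$, and the asymptotes are $y - 2 = \pm\tfrac{1}{3}(x-1)$.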
2015-07-02 12:37:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5926804542541504, "perplexity": 1175.97250113572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095557.73/warc/CC-MAIN-20150627031815-00077-ip-10-179-60-89.ec2.internal.warc.gz"}
https://proofwiki.org/wiki/Symbols:Greek/Alpha
# Symbols:Greek/Alpha ## Alpha The $1$st letter of the Greek alphabet. Minuscule: $\alpha$ Majuscule: $\Alpha$ The $\LaTeX$ code for $\alpha$ is \alpha . The $\LaTeX$ code for $\Alpha$ is \Alpha .
2019-05-19 23:02:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999885559082031, "perplexity": 11002.237561959255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255182.37/warc/CC-MAIN-20190519221616-20190520003616-00246.warc.gz"}
https://www.tutorialspoint.com/How-can-I-create-a-Python-tuple-of-Unicode-strings
# How can I create a Python tuple of Unicode strings? You can create a tuple of Unicode strings in Python by using the u'' prefix when writing the string literals. (The examples below show Python 2 behavior; in Python 3, every str literal is already Unicode.) ## example The following builds a list of 1-tuples of Unicode strings: a = [(u'亀',), (u'犬',)] print(a) ## Output This will give the output [('亀',), ('犬',)] Note that you have to provide the u if you want to say that this is a Unicode string. Otherwise (in Python 2) it will be treated as a normal byte string, and you'll get an unexpected output. ## example a = [('亀',), ('犬',)] print(a) ## Output This will give the output [('\xe4\xba\x80',), ('\xe7\x8a\xac',)] Updated on 05-Mar-2020 06:06:15
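For contrast, here is a minimal sketch of the same idea in Python 3, where string literals are Unicode by default (the variable names are illustrative, not from the original article):

```python
# Python 3: every str literal is already Unicode, so no u'' prefix is required.
animals = ('亀', '犬')            # a genuine tuple of Unicode strings
print(animals)                    # ('亀', '犬')

# The u'' prefix is still accepted for backward compatibility and is a no-op:
legacy = (u'亀', u'犬')
assert legacy == animals

# To get raw bytes instead, encode explicitly:
print(animals[0].encode('utf-8'))  # b'\xe4\xba\x80'
```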
2022-09-29 21:18:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22879955172538757, "perplexity": 6283.374973317965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00749.warc.gz"}
https://web2.0calc.com/questions/let-f-x-3x-4-7x-3-2x-2-bx-1-for-what-value-of-b-is-f-1-1
# Let f(x)=3x^4-7x^3+2x^2-bx+1. For what value of b is f(1)=1 Let f(x)=3x^4-7x^3+2x^2-bx+1. For what value of b is f(1)=1 Jun 2, 2020 #1 First, you substitute 1 in for x, and you will get $$3\cdot1^4-7\cdot1^3+2\cdot1^2-b+1=1$$ Simplify. $$3-7+2-b+1=1$$ $$-1-b=1$$ $$b=-2$$ Jul 20, 2020
2020-09-26 02:06:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9837120771408081, "perplexity": 2806.4711111336665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400232211.54/warc/CC-MAIN-20200926004805-20200926034805-00584.warc.gz"}
https://en.wikiquote.org/wiki/Carl_Barus
# Carl Barus Carl Barus (1896) Carl Barus (February 19, 1856 – September 20, 1935) was an American physicist and the maternal great-uncle of the American novelist Kurt Vonnegut. He was dean of the Brown University Graduate Department from 1903 until his retirement in 1926. In 1905 he became a corresponding member of Britain, a member of the First International Congress of Radiology and Electricity at Brussels, and a member of the Physical Society. Beginning in 1906 he was on the advisory board of physics at the Carnegie Institution of Washington. He died in Providence, Rhode Island, U.S.A. See also "The Mathematician in Modern Physics" ## Quotes • [L]et me refer to my original work. Naturally, if a student has been hammering away ever since 1879... he must have accumulated a lot of litter, much of which, perhaps, should have long since been swept away. But the fates are not to be bribed either by pother or importunity. Out of 1,000 men who are called, one (probably the ratio is much smaller) is chosen to do glorious scientific work. The others? Their lot is failure. They may be equally or even more industrious, they may have equal or even greater brain power—the other 999 exist merely to make the illustrious one in whom they culminate, possible. After that, the world will say to each in words of poetic brevity: "The man has done his duty, the man can go." And they do, pretty quickly, to a gentler lethe, flowing between the banks of amaranth and asphodel. Gentlemen, I am one of the 999 about to be forgotten. • Prof. Barus' Retirement dinner speech, Brown University (1926), as quoted in One of the 999 about to be Forgotten: Memoirs of Carl Barus, 1856-1935 (2005) ed., Axel W.-O. Scmidt. ### "On the Thermo-Electric Measurement of High Temperatures" (April 8, 1889) Department of the Interior, Bulletin of the Geological Survey No. 54, source • [F]ew important steps in dynamical geology will be made until the methods for the accurate measurement of high temperatures and high pressures have not only been perfected but rendered easily available. On the basis of this conviction the present memoir on high temperatures has been prepared... [I]f the investigation be of any fullness, it is almost essential that the observer master the component parts of his research separately; and not until he has satisfactorily done this can he apply them conjointly. • [T]he rooms which had been placed at my disposal by the American Museum of New York became temporarily unavailable. ...[W]e determined to rent a house in New Haven, Conn., and thither the laboratory was removed in November, 1882. ...[T]he city offered excellent library and other facilities for scientific work, such as can be met only in the immediate vicinity of a large university [Yale College]. ...The work in New Haven was not satisfactorily completed. In July, 1883, with the appointment of Prof. F. W. Clarke as chief chemist of the Geological Survey, our laboratory was officially connected with the chemical laboratory. Conformably with the further decision of the Director, by which the divers laboratories of the Geological Survey were united in one central laboratory in Washington, it was again necessary to change our basis of operations, this time... from New Haven to Washington. In the quarters assigned to us in the U. S. National Museum, temperature work on so large a scale... appeared impracticable, and it was therefore abandoned. 
...In place of the dangerous and cumbersome apparatus of the former laboratory, the endeavor is made to reduce all apparatus to the smallest dimensions compatible with reasonable accuracy of measurement. • I make... a cursory survey of certain pyro-electric properties of the alloys of platinum. Curiously... the data... led to a striking result: it appears that the zero resistance $f(0)$, if the resistance at $t^{\circ}$ be $r = f(t)$, and the zero coefficient $f'(0)/f(0)$, are related to each other by a law which during the stages of low percentage alloying is independent of the ingredients of the alloy, except in so far as they modify its electrical conductivity. • I develop a method for the direct and expeditious comparison of the thermo-couple with the air thermometer. A comparison of the data... gives me a criterion of the accuracy with which the data in the region of high temperature are known. This indirect method... is not apparently as rigorous as their direct evaluation by means of the air thermometer; but the indirect method requires much smaller quantities of substance and may be conveniently extended to much higher temperatures. Taking all liabilities to error into consideration, its inferior accuracy is only apparent. ### "The Mathematical Theory of the Top" (April 8, 1898) Science New Series Vol. VII, No. 1, pp. 469-474, source. A review of Felix Klein's "Lectures delivered on the sesquicentennial celebration of Princeton University," pp. 1-74, edited by Professor H. B. Fine. New York, Charles Scribner's Sons, 1897. • Looking over such famous old books as Montmort's 'Analyse des jeux de hasard' or Moivre's 'Doctrine of Chances' one regrets that so much excellent mathematics should have been wasted on games most of which are wholly obsolete. Coriolis in his '[Théorie Mathématique des Effets du] Jeu de billard' (1835) fared better, for the game is still very much alive and its dynamical terrors unsubdued. • In even greater measure is this true of the top. The top has been everybody's toy and must, therefore, at one time or another have piqued everybody's curiosity. Lagrange, Poinsot, Jacobi, not to mention other great names, have in turn paid their tribute; yet the top may be set spinning to-day, unhampered by a completed theory to account for its evolutions. • Among recent contributions we may refer in particular to Professor A. G. Greenhill's noteworthy papers... when one remembers that these complex curves reach only especially simple cases of gyroscope motion, one may get some notion of the difficulty of the problem involved. • Footnote: Greenhill: Applications of Elliptic Functions, Proc. Lond. Math. Soc., 1895, 1896; Engineering, July, 1896. • Turning to Klein's little book, one is astonished in finding the most general aspects of the subject treated almost without computation and in so little space. ...It would have cost little to give the expanded form of the σ-function. ...Weierstrass's original notation was in terms of Abelian functions. The tremendous development of elliptic functions is out of proportion with their application to natural phenomena. Meeting them rarely one forgets them. Memory peters out like the infinite series of a ζ-function. 
• Mathematicians will do well to observe that a reasonable acquaintance with theoretical physics at its present stage of development, to mention only such broad subjects as electricity, elastics, hydrodynamics, etc., is as much as most of us can keep permanently assimilated. It should also be remembered that the step from the formal elegance of theory to the brute arithmetic of the special case is always humiliating, and that this labor usually falls to the lot of the physicist. • The lecture concludes with a demonstration showing that a free body in hyperbolic non-Euclidean space may be so fashioned as in real time to carry out the actual motions of the top. The form of such a body and the forces to actuate it are specified. Klein lays great stress on the beauty of this generalization. ...The full geometry of this case is not carried out in these lectures, however, and Klein regrets that the development of the automorphic functions has recently fallen into abeyance. • The reviewer is aware... he has given an imperfect account of this remarkable book. That Klein's researches constitute a splendid advance in dynamics of the rotation of a rigid body there can be no question. One cannot but hope that the outline given in these Princeton lectures may soon be expanded and put in shape more easily assimilated by persons more moderately versed in the theory of elliptic functions. • The boon of an appropriate lemma is ideal generosity, and not even a mathematician can scorn its almost mathematical elegance. • A man may be a thoroughgoing soldier enough on land; but put him in the foot ropes of the flying jibboom in a storm, and he is apt to cut a most ludicrous figure. Shift a physicist's foothold of Cartesian differential coordinates, suspend him over an abyss of non-Euclidean space, and he will kick sturdily. Poor policy this, for a missionary! ## Quotes about Carl Barus • The presiding officer of this [Physics] section was Prof. Carl Barus, who fills the chair of Physics in Brown University. His inaugural address was on "Long Range Temperature and Pressure Variables in Physics." He began by giving a history of the various attempts to provide suitable apparatus for high-temperature measurement. Fusion first played an important part in the manufacture of thermoscopes, and later those instruments based on specific heat showed an advantage over the fusion instruments. The gas thermometer was referred to as the only fruitful method of absolute pyrometry. The speaker dwelt at length on high-temperature work, the first thorough-going instance of which was by Prinsep in 1829. Then the experiments down to 1887 were considered in detail, and the conclusion reached that the data furnished by the Reichsanstalt will eventually be standard. ...Turning to the applications of pyrometry, he referred to the variation of metallic ebullition with pressure. Results already attained show an effect of pressure regularly more marked as the normal boiling point is higher. Igneous fusion was considered in its relation to pressure and with regard to the solidity of the earth, and the inference was drawn that the interior solidity of the earth, now generally admitted, is due only to superincumbent pressure, withholding fusion. The question of heat conduction was next taken up, and the results deduced by various writers as to the age of the earth discussed. High pressure measurement was lengthily dealt with. Passing from this subject, the entropy of liquids was considered. 
...The paper ended with a reference to isothermals and several kindred subjects. • Marcus Benjamin, "Associations for the Advancement of Science" (1898) Appletons' Annual Cyclopaedia and Register of Important Events 3rd series, Vol. II (Whole series, Vol. XXXVII) pp. 33-34. • In the decade between 1882 and 1892 contributions to gas thermometry and the measurement of high temperatures are few and unimportant, but work was begun in those years on both sides of the Atlantic which, for the experimental skill and persistence with which the experimental difficulties and limitations were pursued and successively overcome, surpasses any effort which has been made either before or since that time. These were the investigations of Barus at the U.S. Geological Survey in Washington and of [Ludwig] Holborn and his colleagues at the Reichsanstalt in Charlottenburg. Barus (1889) recognized, as no observer who preceded him had done, the superlative importance of a uniform temperature distribution about the gas thermometer bulb for purposes of high-temperature measurement, and he took the most extraordinary precautions to maintain it. A temperature of 1000° C or more is not attained without very steep temperature gradients in the region immediately surrounding the zone of highest temperature. It is therefore a problem of great difficulty to introduce a bulb of from 10 to 20 cm. in its largest dimension into this hot zone without leaving some portion of it projecting out into a region 200° or 300° lower in temperature. Burning mixtures of gas and air for heating purposes also contributed to the irregularity and uncertainty of the temperature distribution about the bulb. Barus sought to avoid this by a method of great ingenuity, but also of great technical difficulty. He inclosed his bulb within a rapidly revolving muffle which by its motion protected every portion of the bulb from direct exposure to a particularly hot or a particularly cold portion of the adjacent furnace. This complicated furnace structure and consequently inaccessible position of the bulb made it impossible to introduce into the region about the bulb the substances whose temperature constants were to be measured and compelled him to use thermo-elements which were first calibrated by exposure in the furnace with the bulb and then used independently to measure other desired temperatures. The thermo-element has continued in general use in this intermediary rôle since that time. In the preparation and use of thermo-elements Barus also made much more extensive and elaborate studies than any one who has followed him. ...It is an unfortunate accident that history has failed to record Barus's name along with that of Le Chatelier in the development of the thermo-element for purposes of high-temperature measurement. It hardly admits of question that Barus contributed incomparably more to our knowledge of the thermo-electric properties of the different metals and their use than his distinguished French contemporary, but the 10 per cent iridium alloy which he finally selected proved to be less serviceable than the 10 per cent rhodium alloy developed by Le Chatelier... And so we find the Le Chatelier platin-rhodium thermo-element in use to-day the world over, while the magnificent pioneer work of Barus remains but little known. • [T]he author, whose work on nuclei is well known, describes a number of investigations carried out with his fog-chamber apparatus. 
The apparatus having been sufficiently improved, it was used for various experiments, including the growth of persistent nuclei, the production of water nuclei by evaporation, the results obtained when X-rays are allowed to strike the fog-chamber from different distances, the effect due to radium, &c. Other problems dealt with in the book are the distribution of colloidal nuclei and of ions in media other than air-water, the simultaneous variation of the nucleation and the ionization of the atmosphere of Providence, and the variations of the colloidal nucleation of dust-free air in course of time. • John Jolly, William Francis, ed., "Notices respecting new books" (April, 1908) review of Condensation of Vapor as induced by Nuclei and Ions (1907) by Carl Barus, in No. LXXXVIII, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science (Jan-June, 1908) Vol XV, 6th Series, p. 569. • When the history of the progress of physics in the United States during the late nineteenth and early twentieth century is written, the name of Carl Barus will occupy an important place. • Robert Bruce Lindsay, "Biographical memoir of Carl Barus, 1856-1935" (1941) Biographical memoir, National Academy of Sciences, Vol. XXII, ninth memoir. • At Brown University Carl Barus and Alpheus Packard are undoubtedly the most eminent scientists who ever occupied faculty chairs. Professor Barus was a hero-worshipper and in his home was a genius corner from which pictured faces of great scientists looked down upon him. ...The breadth of his interest and achievements was extraordinary—recall his reading of Greek tragedies in the original, his knowledge of French and Italian literatures, and the proficiency he attained in playing the violin, flute, clarinet, oboe, cornet, trumpet and trombone, in addition to the piano and organ. The brilliancy of his intellect, the modesty of his bearing, the beauty of his personality, and the kindliness of his spirit have left most precious and inspiring memories... • Robert Bruce Lindsay, "Biographical memoir of Carl Barus, 1856-1935" (1941) Biographical memoir, National Academy of Sciences, Vol. XXII, ninth memoir. • Carl Barus loved music, and composed about fifty compositions, among them a March to Pembrok Hall... and an Ode to the Steam Shovel, inspired by the daily noise outside his laboratory, which he presented to President Faunce. • Axel W.-O. Scmidt, Foreword, One of the 999 about to be Forgotten: Memoirs of Carl Barus, 1856-1935 (2005)
2020-10-25 03:18:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46943289041519165, "perplexity": 2791.3451193200513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107885126.36/warc/CC-MAIN-20201025012538-20201025042538-00311.warc.gz"}
http://mathoverflow.net/questions/129147/complete-bipartite-subgraph-of-dense-bipartite-subgraph
# Complete Bipartite Subgraph of Dense Bipartite Subgraph Q1: Consider a $2^n$ by $2^n$ bipartite graph with at least $(1-\epsilon)2^{2n}$ edges. For any $\epsilon > 0$ and $n$ large enough, is it always possible to find a $2^{(1-f(\epsilon))n}$ by $2^{(1-f(\epsilon))n}$ complete bipartite subgraph, where $\lim_{\epsilon\rightarrow 0^+} f(\epsilon)=0$? - I suspect not. Consider a matching M of 2^n edges. Any complete A-B bipartite subgraph where the sizes of A and B add up to more than 2^n will contain an edge of M. Now as n grows, 2^n will drop below the epsilon fraction, but the f(epsilon) will be forced to remain above 1/2. Gerhard "Ask Me About System Design" Paseman, 2013.04.29 –  Gerhard Paseman Apr 29 '13 at 21:16 By the way, if this is homework, you should mention MathOverflow when using the above comment in your answer. Gerhard "Credit Where Credit Is Due" Paseman, 2013.04.29 –  Gerhard Paseman Apr 29 '13 at 21:19 Hi Gerhard, thanks for your reply! I did not understand why we should consider the case where A and B add up to more than 2^n. Also could you explain a little more why f(epsilon) will be forced to remain above 1/2? –  Patrick Apr 29 '13 at 21:58 I'm longing for Q2... –  François G. Dorais Apr 29 '13 at 22:10 Patrick: take a complete (and balanced) bipartite graph with 2^2n edges, and color 2^n edges red when they are part of a particular matching. Pick a subset A of one side of the vertices. What can you say about a subset B of the other side if B is such that there are no red edges between any member of A and any member of B? Also, I would like to see some motivation for this problem. It would also be good to know what type of class might assign this as a problem. Gerhard "Will Know How To Answer" Paseman, 2013.04.29 –  Gerhard Paseman Apr 29 '13 at 22:25 No. The correct bound for the largest guaranteed balanced complete bipartite subgraph is $\Theta_{\epsilon}(n)$, where the implied constant depending on $\epsilon$ tends to infinity as $\epsilon \to 0$, so it is only logarithmic in the total number of vertices. For the upper bound, consider the random bipartite graph with parts of order $2^n$ where each edge appears with probability $1-\epsilon/2$. By Chernoff's inequality, with high probability, this graph will have at least a $1-\epsilon$ fraction of the pairs as edges, and a simple union bound over all possible $K_{t,t}$ with $t=g(\epsilon)n$ for an appropriate choice of $g(\epsilon)$ shows that this also will be $K_{t,t}$-free with high probability. This is essentially the same argument as given by Erdos in his classical lower bound on Ramsey numbers from 1947. For the lower bound, suppose we are trying to show that there is a $K_{t,t}$. Then count the number of pairs $(v,T)$ consisting of one vertex from the first part and a set $T$ of $t$ vertices from the second part which are all neighbors of $v$. The number of such pairs is $\sum_{v}{\textrm{deg}(v) \choose t}$. One can lower bound this using the number of edges of the graph and Jensen's inequality. On the other hand, if there is no $K_{t,t}$, each $T$ is in at most $t-1$ pairs and hence the number of such pairs is at most $(t-1){2^n \choose t}$. One gets a contradiction to there being no $K_{t,t}$ if $t$ is too small.
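One way to make the final comparison explicit (a sketch added here, using convexity of $\binom{x}{t}$ in $x$ for Jensen's inequality): $$2^{n}\binom{(1-\epsilon)2^{n}}{t} \;\le\; \sum_{v}\binom{\deg(v)}{t} \;\le\; (t-1)\binom{2^{n}}{t}.$$ Since $\binom{(1-\epsilon)2^{n}}{t} \geq (1-\epsilon-o(1))^{t}\binom{2^{n}}{t}$ for $t = O(n)$, this forces $2^{n}(1-\epsilon)^{t} \lesssim t$, which is impossible once $t \leq c(\epsilon)\,n$ for a suitable $c(\epsilon)$ on the order of $\ln 2 / \ln\frac{1}{1-\epsilon}$; so a $K_{t,t}$ must exist for such $t$, matching the claimed $\Theta_{\epsilon}(n)$ bound.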
2015-07-06 00:56:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8307729363441467, "perplexity": 156.05806645794817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097757.36/warc/CC-MAIN-20150627031817-00285-ip-10-179-60-89.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/432864/interaction-term-is-significant-without-main-affects-and-main-effects-are-sig
# Interaction term is significant WITHOUT main effects... AND main effects are significant WITHOUT interaction term? I am trying to determine the effect of a person's weight and the incline that they are running over on their running speed. I'm just using a simple linear model in R, but I get a weird situation: the two main effects (when viewed without an interaction term) are both significant (and the interaction isn't), but when I view the interaction term by itself, without main effects, then IT becomes significant! How do I choose between these two conflicting models? Here's the full model, where neither predictor variable appears significant.

Call:
lm(formula = speed ~ actual.weight * incline, data = wow)

Residuals:
      Min        1Q    Median        3Q       Max
-0.311468 -0.101650  0.000843  0.092570  0.307654

Coefficients:
                        Estimate Std. Error t value Pr(>|t|)
(Intercept)            1.2301738  0.0353404  34.809   <2e-16 ***
actual.weight         -0.0247079  0.0230644  -1.071    0.287
incline               -0.0004380  0.0005993  -0.731    0.467
actual.weight:incline -0.0005566  0.0003970  -1.402    0.164
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1288 on 102 degrees of freedom
Multiple R-squared: 0.1859, Adjusted R-squared: 0.162
F-statistic: 7.766 on 3 and 102 DF, p-value: 0.0001011

Since nothing seems to be significant in the full model, I remove the interaction term and see if things look different:

Call:
lm(formula = speed ~ actual.weight + incline, data = wow)

Residuals:
     Min       1Q   Median       3Q      Max
-0.31216 -0.10062  0.00313  0.08915  0.31215

Coefficients:
                Estimate Std. Error t value Pr(>|t|)
(Intercept)    1.2618681  0.0272936  46.233  < 2e-16 ***
actual.weight -0.0496668  0.0147356  -3.371  0.00106 **
incline       -0.0011274  0.0003442  -3.275  0.00144 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1294 on 103 degrees of freedom
Multiple R-squared: 0.1703, Adjusted R-squared: 0.1541
F-statistic: 10.57 on 2 and 103 DF, p-value: 6.693e-05

However, I have some reason to believe that there might be a lone interaction term without main effects. I tested this, just to be safe, and there was significance!

Call:
lm(formula = speed ~ actual.weight:incline, data = wow)

Residuals:
     Min       1Q   Median       3Q      Max
-0.30143 -0.09795 -0.00455  0.09431  0.31798

Coefficients:
                        Estimate Std. Error t value Pr(>|t|)
(Intercept)            1.1981665  0.0159965  74.902  < 2e-16 ***
actual.weight:incline -0.0008925  0.0001889  -4.726 7.22e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.1283 on 104 degrees of freedom
Multiple R-squared: 0.1768, Adjusted R-squared: 0.1689
F-statistic: 22.33 on 1 and 104 DF, p-value: 7.218e-06

These models aren't nested, and I'm really confused how to distinguish between them. How are weight and incline really affecting speed?

Also note that the results of these models do not conflict with each other: The marginal effects do not have the same interpretation as the main effects. The model without interaction estimates an effect of actual.weight and incline, while the model with interaction estimates an effect of either covariate where the other is equal to zero, and an effect for how a change in one affects the slope of the other. Lastly, all models explain only a little of the variance in the response variable: Your $$\text{R}^2$$ ranges from 17% to 19%. That means that even if all presumed effects were significant, they don't have a substantial effect. With that in mind, there are several things to note about the model coefficients. 
In the interaction model, the interaction effect and the marginal effects (especially that of incline) are all very small. In the model with only main effects, the effects may be significant, but you should really also consider their effect size, which is again probably less than can be considered relevant, although that depends on the scale at which you measured these variables. Unless you used a very small scale for incline, that means that incline has an almost negligible effect compared to weight.
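A minimal sketch of how the three fits could be compared programmatically, here translated into Python/statsmodels (the data frame `wow` and the underscore column names are assumptions mirroring the question's R data, not the original code):

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# wow is assumed to be a pandas DataFrame with columns speed, actual_weight, incline
full  = smf.ols('speed ~ actual_weight * incline', data=wow).fit()
main  = smf.ols('speed ~ actual_weight + incline', data=wow).fit()
inter = smf.ols('speed ~ actual_weight:incline',  data=wow).fit()

# main is nested in full, so an F-test is valid for that pair:
print(sm.stats.anova_lm(main, full))

# inter is not nested in the others; compare those fits by information criteria:
for name, m in (('full', full), ('main effects', main), ('interaction only', inter)):
    print(f'{name}: AIC = {m.aic:.1f}')
```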
2022-01-25 00:40:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.664637565612793, "perplexity": 659.981614452761}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304686.15/warc/CC-MAIN-20220124220008-20220125010008-00412.warc.gz"}
http://mathhelpforum.com/math-topics/133903-m1-question-resolving-frictional-forces-print.html
# M1 question- Resolving frictional forces • Mar 15th 2010, 08:07 AM fishkeeper M1 question- Resolving frictional forces Hi, I am having a problem with this question and was wondering whether I could have a bit of a nudge in the correct direction? I cannot do part 'iv'; however, I think that I may have worked out part iv in my answer to part iii, though I'm not sure http://i213.photobucket.com/albums/c...00698Small.jpg The answers I have so far are: A) i) Resolve horizontally- $80*cos 25 = 72.5N$ ii) Fmax= Coefficient of friction x R: $Fmax= 0.32*80 = 25.6N$ iii) Mass= $80/9.8= 8.16kg$ -> $T + 8.16*cos 25 = 80*sin 25$ $T= 80*sin 25 - 8.16*cos 25 = 26.14N$ v) $80/9.8= 8.16kg$ Any help is greatly appreciated concerning parts iii and iv • Mar 15th 2010, 12:40 PM skeeter Quote: Originally Posted by fishkeeper [...] iii) $T_{min} = mg\sin{\theta} - f_{s \, max}$ iv) $T_{max} = mg\sin{\theta} + f_{s \, max}$ • Mar 15th 2010, 01:07 PM fishkeeper Thank you, Skeeter! That helps enormously!
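As a quick numerical check of skeeter's formulas with the figures quoted in the thread (treating the 80 N as the weight and 25.6 N as the maximum friction force, which are the original poster's values and have not been verified against the problem sheet):

```python
import math

def tension_range(weight_n, theta_deg, f_s_max):
    """Equilibrium tension range on the incline:
    T_min = W*sin(theta) - f_max, T_max = W*sin(theta) + f_max."""
    w_parallel = weight_n * math.sin(math.radians(theta_deg))
    return w_parallel - f_s_max, w_parallel + f_s_max

t_min, t_max = tension_range(80, 25, 25.6)
print(f"T_min = {t_min:.1f} N, T_max = {t_max:.1f} N")  # roughly 8.2 N and 59.4 N
```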
2016-09-26 04:18:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7148516178131104, "perplexity": 3760.7244899316856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660602.38/warc/CC-MAIN-20160924173740-00054-ip-10-143-35-109.ec2.internal.warc.gz"}
http://www.jiskha.com/display.cgi?id=1258058478
# Homework Help: sigma notation Posted by Reen on Thursday, November 12, 2009 at 3:41pm. Find the value of the sum $\sum_{i=1}^{n} (2-5i)$.
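For completeness, the requested sum has a closed form (a worked step added here, not part of the original page): $$\sum_{i=1}^{n} (2-5i) = 2n - 5\sum_{i=1}^{n} i = 2n - \frac{5n(n+1)}{2} = -\frac{n(5n+1)}{2}.$$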
2014-07-25 16:29:07
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.978317141532898, "perplexity": 1208.0753394057563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894378.97/warc/CC-MAIN-20140722025814-00210-ip-10-33-131-23.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2738895/what-is-frac-partial-gamma-partial-nu-on-partial-b-rho-y
# What is $\frac {\partial \Gamma} {\partial \nu}$ on $\partial B_{\rho} (y)$? I am studying Laplace's equation from the book "Elliptic partial differential equations of second order" written by Gilbarg and Trudinger. Here I am struggling to grasp a concept regarding the fundamental solution of Laplace's equation. Let $n \geq 3$; then the fundamental solution of Laplace's equation at a point $y \in \Omega$ is given by $$\Gamma (x-y) = \Gamma (|x-y|) = \frac {1} { n (2-n)\omega_n} |x-y|^{2-n},$$ where $x \in \Omega \setminus \{y \}$. Let $B_{\rho} (y)$ denote an open ball centered at $y$ having some small radius $\rho$. Then this book claims that $\frac {\partial \Gamma} {\partial \nu} = -\Gamma'(\rho)$ on $\partial B_{\rho} (y)$ (where $\nu$ is the unit outward normal to $\partial (\Omega-B_{\rho})$) just before equation $(2.16)$, but I couldn't figure out why it should be so. - • So you are the same person posting this question? – user99914 Apr 15 '18 at 21:22 • No, he is my friend. I also don't understand this part, so I opted to post it separately. – Dbchatto67 Apr 15 '18 at 21:24 At $x\in \partial B_\rho (y)$, the outward normal $\nu_x$ of $\Omega - B_\rho$ points toward $y$ and is given by $$\nu_x = -\frac{x-y}{|x-y|}.$$ Thus $$\frac{\partial \Gamma}{\partial \nu} = \nu \cdot \nabla \Gamma = -\frac{x-y}{|x-y|} \cdot \left( \frac{1}{n\omega_n} |x-y|^{1-n} \frac{x-y}{|x-y|}\right) = -\frac{1}{n\omega_n} |x-y|^{1-n}.$$ On the other hand, writing $\rho = |x-y|$, we get $$\Gamma'(\rho) = \left( \frac{1}{n(2-n)\omega_n} \rho^{2-n}\right)' = \frac{1}{n\omega_n} \rho^{1-n}.$$ So $\frac{\partial \Gamma}{\partial \nu} = - \Gamma'(\rho)$.
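A quick symbolic sanity check of the identity for $n = 3$ (a sketch assuming $\omega_n$ is the volume of the unit ball, as in Gilbarg-Trudinger, and placing $y$ at the origin):

```python
import sympy as sp

n = 3
omega_n = sp.Rational(4, 3) * sp.pi          # volume of the unit ball in R^3
x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
r = sp.sqrt(x1**2 + x2**2 + x3**2)           # |x - y| with y at the origin
Gamma = r**(2 - n) / (n * (2 - n) * omega_n)

# nu = -(x - y)/|x - y| on the inner boundary of Omega minus B_rho(y)
dGamma_dnu = sum(-(v / r) * sp.diff(Gamma, v) for v in (x1, x2, x3))

rho = sp.symbols('rho', positive=True)
at_point = {x1: rho, x2: 0, x3: 0}           # any point with |x - y| = rho
lhs = sp.simplify(dGamma_dnu.subs(at_point))
rhs = sp.simplify(-sp.diff(rho**(2 - n) / (n * (2 - n) * omega_n), rho))
print(lhs, rhs, sp.simplify(lhs - rhs) == 0)  # both equal -1/(4*pi*rho**2)
```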
2019-08-24 11:09:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9649677276611328, "perplexity": 127.56741006706372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027320734.85/warc/CC-MAIN-20190824105853-20190824131853-00369.warc.gz"}
https://vitalab.github.io/article/2019/07/25/SphericalCNNCortexParcellation.html
# Highlights Authors present a cortical surface parcellation method using spherical deep convolutional neural networks (CNNs). Key contributions include • Novel features optimized over cortical parcel boundaries. • Data augmentation driven by their intermediate deformation fields. Their method outperforms traditional multi-atlas and naive spherical U-Net approaches. # Introduction For region-based morphological analysis, cortical surfaces need to be consistently subdivided into regions based on cortical parcellation protocols. Consistent labeling of cortical regions is challenging due to the complicated cortical folds and inter-subject variability. Multi-atlas cortical parcellation approaches tend to provide better performance as the number of atlases increases. Unfortunately, inter-subject registration is unavoidable in this approach to align multiple atlases. Traditional CNN architectures are still immature in handling non-uniform, highly complex data, because existing deep architectures assume Euclidean-space coherence. Spherical CNNs have recently emerged to deal with spherical domain data. The authors propose a novel cortical parcellation approach using a deep spherical U-Net encoding surface mesh features. # Methods To compute the input features to the CNN, the following features are computed: • The deformation field: a spherical surface registration method that reconstructs the deformation field by a linear combination of spherical harmonics coefficients (with degrees $$l=0 \ldots 10$$). • The boundary map: deformation fields that align parcel boundaries for a more accurate prediction. The deformed data features fed to the network are then: • The mean curvature ($$iH$$) from the inflated surface. • The sulcal depth ($$SD$$). • The mean curvature from the cortical surface ($$H$$). To create a template, they co-register training samples in an iterative manner and compute a distance map of the mode (most frequent) cortical labels across the training set after their registration to the template using the three geometric features. They then register the normalized distance map to the template distance map to produce the deformation fields. The authors used a spherical U-Net architecture designed for segmentation tasks. They provide the described geometric features to the input channels and the corresponding labels to the output channels. Data augmentation is performed by using all the deformations of the spherical harmonics between $$l=0 \ldots 10$$. At the end of the testing stage, they refine predicted parcellation maps with a standard graph-cut method to remove potential isolated regions and to create smooth parcel boundaries. ## Data 427 T1-weighted 3T MRI images. # Results The cortical surfaces and their spherical mappings were reconstructed via a standard FreeSurfer pipeline. The baselines used were: • A spherical U-Net model driven only by non-rigid deformation information. • A multi-atlas and spherical U-Net model with the rigid deformation information. For a fair comparison, the same graph-cut technique was applied to all the baseline methods. The authors' approach outperforms multi-atlas (46 regions) and spherical U-Net (24 regions). No regions were found with significantly reduced Dice overlap. # Conclusions The authors presented a cortical parcellation method using a spherical U-Net with novel features optimized over cortical parcellation boundaries. 
The proposed method achieves qualitatively and quantitatively better performance compared to the baselines used.
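The reported comparisons rest on per-parcel Dice overlap; here is a minimal sketch of that metric for label maps (generic NumPy, not the authors' code; the toy arrays are made up):

```python
import numpy as np

def dice(labels_a: np.ndarray, labels_b: np.ndarray, label: int) -> float:
    """Dice overlap of one parcel: 2|A ∩ B| / (|A| + |B|)."""
    a = labels_a == label
    b = labels_b == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical predicted vs. reference parcellations over mesh vertices:
pred = np.array([0, 1, 1, 2, 2, 2])
ref  = np.array([0, 1, 2, 2, 2, 2])
print([round(dice(pred, ref, k), 3) for k in (0, 1, 2)])  # [1.0, 0.667, 0.857]
```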
2022-12-01 11:22:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45896416902542114, "perplexity": 4292.543308531409}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710808.72/warc/CC-MAIN-20221201085558-20221201115558-00619.warc.gz"}
https://www.physicsforums.com/threads/is-there-an-equation-for-the-energy-loss-of-a-photon-in-different-media.570718/
# Is there an equation for the energy loss of a photon in different media? 1. Jan 25, 2012 ### Ralphonsicus Thanks. 2. Jan 25, 2012 ### Staff: Mentor If a photon is absorbed, it loses 100% of its energy. If it doesn't get absorbed, it loses 0% of its energy. What you can talk about is the probability of absorption for a single photon, which translates into the (average) fraction of a large number of photons that gets absorbed when passing through something. Typically you have an exponential absorption law, in which the number of surviving photons decreases with thickness of the absorbing material according to an "absorption coefficient" which depends on photon energy and the type of absorber: $$N = N_0 e^{-\alpha \Delta x}$$
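A small numeric illustration of that exponential law (the coefficient and thickness values below are made up for the example):

```python
import numpy as np

def surviving_fraction(alpha, dx):
    """Fraction of photons not absorbed: N/N0 = exp(-alpha * dx)."""
    return np.exp(-alpha * dx)

# e.g. a hypothetical absorption coefficient of 0.2 cm^-1 and a 5 cm slab:
print(surviving_fraction(0.2, 5.0))        # ~0.37, so ~63% of photons absorbed
print(surviving_fraction(0.2, np.array([1.0, 5.0, 10.0])))  # thickness sweep
```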
2018-01-18 10:55:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.522596001625061, "perplexity": 559.9486951633938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887224.19/warc/CC-MAIN-20180118091548-20180118111548-00618.warc.gz"}
https://slideplayer.com/slide/4646116/
# ELEC 303 – Random Signals Lecture 18 – Statistics, Confidence Intervals Dr. Farinaz Koushanfar ECE Dept., Rice University Nov 10, 2009

Presentation transcript:

Statistics Example Reduction of Cholesterol Level Example (Cont'd) Sample Mean Sample Median Sample Median (Cont'd) Sample Mean vs. Sample Median Percentile Location of Data Variability Averages Sample Variance Statistics Standard Deviation Sample Range Interquartile Range Averaging? Data Handling Dot Plots Histogram Example Histogram (Cont'd) [slide titles; slide bodies not recoverable]

Confidence interval: Consider an estimator for an unknown parameter θ. We fix a confidence level, 1−α. For every θ we replace the single point estimator with a lower estimate Θ̂⁻ and an upper one Θ̂⁺, s.t. P(Θ̂⁻ ≤ θ ≤ Θ̂⁺) ≥ 1−α. We call [Θ̂⁻, Θ̂⁺] a 1−α confidence interval.

Confidence interval - example: Observations Xi are i.i.d. normal with unknown mean θ and known variance σ², so the sample mean has variance σ²/n. Let α = 0.05. Find the 95% confidence interval.

Confidence interval (CI): Wrong: "the true parameter lies in the CI with 95% probability"…. Correct: Suppose that θ is fixed. We construct the CI many times, using the same statistical procedure: obtain a collection of n observations and construct the corresponding CI for each. About 95% of these CIs will include θ.

A note on the Central Limit Theorem (CLT): Let X1, X2, X3, ..., Xn be a sequence of n independent and identically distributed RVs with finite expectation µ and variance σ² > 0. CLT: as the sample size n increases, the PDF of the sample average of the RVs approaches N(µ, σ²/n), irrespective of the shape of the original distribution.

CLT [figure: densities of a single variable and of sums of two, three, and four variables, illustrating the approach to the normal shape]

CLT: Let the sum of n random variables be Sn, given by Sn = X1 + ... + Xn. Then, defining a new RV Zn = (Sn − nµ)/(σ√n), the distribution of Zn converges towards N(0,1) as n approaches ∞ (this is convergence in distribution); in terms of the CDFs, P(Zn ≤ z) converges to Φ(z).

Confidence interval approximation: Suppose that the observations Xi are i.i.d. with mean θ and variance σ² that are unknown. Estimate the mean and the (unbiased) variance. We may estimate the variance σ²/n of the sample mean by the above estimate. For any given α, we may use the CLT to approximate the confidence interval in this case. From the normal table: Φ(1.96) ≈ 0.975.

Confidence interval approximation: Two different approximations are in effect: – treating the sum as if it is a normal RV; – the true variance is replaced by the estimated variance from the sample. Even in the special case where the Xi's are i.i.d. normal, the variance is an estimate, and the RV Tn (the sample mean minus θ, divided by the estimated standard deviation of the sample mean) is not normally distributed.
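A compact sketch of the 95% CI recipe from these slides, using the CLT-based approximation with the unbiased sample variance (synthetic data; all numbers are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100)   # synthetic i.i.d. sample

n, mean = x.size, x.mean()
s = x.std(ddof=1)                  # unbiased sample standard deviation
alpha = 0.05
z = stats.norm.ppf(1 - alpha / 2)  # ~1.96 from the normal table

half = z * s / np.sqrt(n)
print(f"{100*(1-alpha):.0f}% CI for the mean: [{mean - half:.3f}, {mean + half:.3f}]")
```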
2020-04-06 12:58:26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8894693851470947, "perplexity": 2712.3092446244655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371624083.66/warc/CC-MAIN-20200406102322-20200406132822-00005.warc.gz"}
https://developer.aliyun.com/article/414164
InnoDB: Warning: a long semaphore wait:
--Thread 140593224754944 has waited at btr0cur.c line 528 for 241.00 seconds the semaphore:
X-lock on RW-latch at 0x7fd9142bfcc8 created in file dict0dict.c line 1838
a writer (thread id 140570526021376) has reserved it in mode exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file btr0cur.c line 535
Last time write locked in file /pb2/build/sb_0-10180689-1378752874.69/mysql-5.5.34/storage/innobase/btr/btr0cur.c line 528
InnoDB: Warning: a long semaphore wait:
--Thread 140570431108864 has waited at btr0cur.c line 528 for 241.00 seconds the semaphore:
X-lock on RW-latch at 0x7fd9142bfcc8 created in file dict0dict.c line 1838
a writer (thread id 140570526021376) has reserved it in mode exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file btr0cur.c line 535
Last time write locked in file /pb2/build/sb_0-10180689-1378752874.69/mysql-5.5.34/storage/innobase/btr/btr0cur.c line 528
……………………
END OF INNODB MONITOR OUTPUT
============================
InnoDB: ###### Diagnostic info printed to the standard error stream
InnoDB: Error: semaphore wait has lasted > 600 seconds
InnoDB: We intentionally crash the server, because it appears to be hung.
140101 4:32:58 InnoDB: Assertion failure in thread 140570570065664 in file srv0srv.c line 2502
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/...-recovery.html
InnoDB: about forcing recovery.
20:32:58 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=16777216
read_buffer_size=131072
max_used_connections=608
max_threads=1600
thread_count=516
connection_count=515
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 444459 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x30000
/usr/local/mysql/bin/mysqld(my_print_stacktrace+0x35)[0x7a5f15]
/usr/local/mysql/bin/mysqld(handle_fatal_signal+0x403)[0x673a13]
/lib/libpthread.so.0(+0xef60)[0x7fde6901cf60]
/lib/libc.so.6(gsignal+0x35)[0x7fde68219165]
/lib/libc.so.6(abort+0x180)[0x7fde6821bf70]
/usr/local/mysql/bin/mysqld[0x7ff2ce]
/lib/libpthread.so.0(+0x68ba)[0x7fde690148ba]
/lib/libc.so.6(clone+0x6d)[0x7fde682b602d]
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
131231 04:34:11 mysqld_safe Number of processes running now: 0
131231 04:34:11 mysqld_safe mysqld restarted
InnoDB: Warning: a long semaphore wait
--Thread 140570431108864 has waited at btr0cur.c line 528 for 241.00 seconds the semaphore:
X-lock on RW-latch at 0x7fd9142bfcc8 created in file dict0dict.c line 1838

You can monitor the use of the adaptive hash index and the contention for its use in the SEMAPHORES section of the output of the SHOW ENGINE INNODB STATUS command. If you see many threads waiting on an RW-latch created in btr0sea.c, then it might be useful to disable adaptive hash indexing.

Sometimes, the read/write lock that guards access to the adaptive hash index can become a source of contention under heavy workloads, such as multiple concurrent joins.

set global innodb_adaptive_hash_index = 0;
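Putting the two steps above together, a minimal sketch (both statements are standard MySQL syntax; the config file path is an assumption, adjust it for your install):

    -- Inspect the SEMAPHORES section for waits on RW-latches created in btr0sea.c
    SHOW ENGINE INNODB STATUS\G

    -- Disable the adaptive hash index at runtime (dynamic variable, no restart needed)
    SET GLOBAL innodb_adaptive_hash_index = 0;

To keep the setting across restarts, also add it to the server configuration (e.g., /etc/my.cnf):

    [mysqld]
    innodb_adaptive_hash_index = 0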
2023-02-01 08:29:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21727395057678223, "perplexity": 2566.644092751177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499919.70/warc/CC-MAIN-20230201081311-20230201111311-00485.warc.gz"}
https://physics.paperswithcode.com/paper/testing-the-limits-of-the-maxwell
# Testing the limits of the Maxwell distribution of velocities for atoms flying nearly parallel to the walls of a thin cell

30 Oct 2017. Petko Todorov, Daniel Bloch.

For a gas at thermal equilibrium, it is usually assumed that the velocity distribution follows an isotropic 3-dimensional Maxwell-Boltzmann (M-B) law. This assumption classically implies a "cos theta" law for the flux of atoms leaving the surface, although such a law has no grounds in surface physics...

Categories: Atomic Physics
2021-05-11 07:44:41
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9341942071914673, "perplexity": 3922.8460694436667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991904.6/warc/CC-MAIN-20210511060441-20210511090441-00270.warc.gz"}
https://physicscatalyst.com/article/density-examples-from-day-to-day-life/
# Density examples from day to day life

The density of any substance is its mass per unit volume. This concept can explain several phenomena that occur around us. Learn about a few density examples in this article.

Density is a term that we use in our day to day life. The word density is not always used in the scientific sense of physics and chemistry. For example, if there are many trees in a forest and they are close to each other, then we say that the forest is dense. Similarly, a car parking lot can be dense or less dense depending on the number of cars parked there.

From a scientific point of view, however, the concept of density is very important. It is used in various experiments of physics and chemistry where precise measurement of the density of a substance is required to carry out the necessary calculations. From the density formula, if we know the mass and volume of a substance we can easily calculate its density, since density is defined as mass per unit volume. In this article, we will not concern ourselves with how to find or calculate density. Here we would rather look at density examples where the concept of density explains some phenomenon around us.

The density of any material also depends on its temperature. For example, if we continuously heat iron it can change its state, although it takes extremely high temperatures to change iron from solid to liquid. When iron changes from solid to liquid, its density changes with the increase in temperature. An increase in the temperature of a substance usually results in a decrease in its density, and a decrease in density corresponds to an increase in volume.

Density is also affected by pressure, and this dependence is most pronounced in the gaseous state. When we increase the pressure on, say, a gas in a container, its volume decreases, thereby increasing its density.

Density differences have profound effects on the phenomena driving the world around us. For instance, take the example of the monsoon in India. We know that the process of convection is responsible for the occurrence of monsoons, but it is the difference in temperature and density of the air passing over the land surface and over water bodies that drives this convection. Given below are some examples of density.

### The density of oil and water explanation

Have you ever tried mixing oil with water? If you try to mix two liquids, the denser one sinks to the bottom and the lighter one floats above it. In the case of oil and water, oil floats above water when we try to mix them. This happens because the density of water is $1 \text{ gm/cm}^{3}$, which is higher than that of oil (the density of vegetable oil is about 0.93 $\text{gm/cm}^{3}$).

The fact that oil floats on, and does not dissolve in, water makes cleanups possible after large oil spills in sea water. Such a cleanup involves scraping or skimming the top layer of oil off the ocean's surface. Another such example can be seen in salad dressing, where oil and vinegar do not mix together, as vinegar is denser than oil.

### Why do helium balloons float in the air?

You might have seen hawkers selling balloons that float in the air. The question is what makes these balloons float, when the balloons we fill with air using a pump do not. The answer lies in the difference in densities of the gases used to fill these balloons. The balloon that floats in the air is filled with a gas called helium, which is why these balloons are also called helium balloons.
This helium gas is less dense than the air around it, and it is this difference in density between the helium inside the balloon and the air outside that makes the balloon float.

One side note: the air around us consists of nitrogen and oxygen molecules, which are heavy compared to helium molecules. Hydrogen is even lighter than helium, but we do not fill balloons with hydrogen because it is highly flammable.

### Floating ice cubes in water

While drinking cold beverages with ice you must have noticed that ice cubes float. If you put ice inside a glass filled with water, you will notice that the ice cubes float. This is because water is one of the few substances that is slightly denser in its liquid form than in its solid form, which is ice.

### Icebergs floating in ocean water

We have already established that ice, being less dense than liquid water, floats in it. Icebergs also float in the ocean. Icebergs are made of fresh water, and they also contain lots of air in the form of trapped air bubbles. Oceans, as we all know, are salty and have a density slightly higher than that of fresh water. This is the reason why icebergs, which are frozen and made up of fresh water, float.
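As a small illustration of the mass-per-unit-volume definition and the float-or-sink rule used throughout this article, here is a minimal Python sketch (the density values are illustrative round numbers, not measured ones):

    def density(mass_g, volume_cm3):
        # density = mass / volume, in g/cm^3
        return mass_g / volume_cm3

    # illustrative densities in g/cm^3
    water = 1.00
    vegetable_oil = 0.93  # value quoted in the article above
    ice = 0.92            # slightly less dense than liquid water

    def floats_on(substance, fluid):
        # a body floats when its density is lower than the fluid's
        return substance < fluid

    print(floats_on(vegetable_oil, water))  # True: oil floats on water
    print(floats_on(ice, water))            # True: ice cubes float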
2019-03-18 16:34:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43551865220069885, "perplexity": 511.2686137292107}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201455.20/warc/CC-MAIN-20190318152343-20190318174343-00154.warc.gz"}
https://www.hackmath.net/en/math-problem/446
Cuboid

Determine the dimensions $a, b, c$ of a cuboid whose space diagonal $d = 9$ dm makes the angle $\alpha = 55^\circ$ with edge $a$ and the angle $\beta = 58^\circ$ with edge $b$.

Result:
a = 5.16 dm
b = 3.91 dm
c = 6.25 dm

Solution:
$a = 9 \cdot \cos(55^\circ) = 5.16 \ \text{dm}$
$b = 9 \cdot \sin(55^\circ) \cdot \cos(58^\circ) = 3.91 \ \text{dm}$
$c = 9 \cdot \sin(55^\circ) \cdot \sin(58^\circ) = 6.25 \ \text{dm}$

(Here $\beta$ is taken as the angle between edge $b$ and the projection of the diagonal onto the face spanned by $b$ and $c$.)
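As a quick consistency check (added here, not part of the original solution), the three computed edges satisfy the space-diagonal identity $a^2 + b^2 + c^2 = d^2$:

$$a^2+b^2+c^2 = 81\cos^2 55^\circ + 81\sin^2 55^\circ\left(\cos^2 58^\circ + \sin^2 58^\circ\right) = 81\left(\cos^2 55^\circ + \sin^2 55^\circ\right) = 81 = 9^2.$$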
2020-06-04 01:30:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6601076722145081, "perplexity": 2870.395328229524}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347436828.65/warc/CC-MAIN-20200604001115-20200604031115-00223.warc.gz"}
http://mathoverflow.net/questions/37924/elliptic-regularity-for-the-neumann-problem?sort=newest
# Elliptic regularity for the Neumann problem

I'm trying to understand how to establish regularity for elliptic equations on bounded domains with Neumann data. For simplicity, let's presume we are focusing on $-\Delta u = f$ in $\Omega$ and $\frac{\partial u}{\partial \nu} = 0$ on $\partial \Omega$.

Interior regularity works the same as always. When proving boundary regularity in the Dirichlet case, we first consider some ball $B(0,1) \cap \mathbb{R}_+^n$ and let $\xi = 1$ on $B(0,1/2)$, $\xi = 0$ on $\mathbb{R}^n - B(0,1)$, and then estimate all derivatives $\frac{\partial^2 u}{\partial x_i \partial x_j}$ except $\partial^2 u/\partial x_n^2$. Two main points are needed:

1) $\xi$ vanishes on the curved part of $B(0,1) \cap \mathbb{R}_+^n$;
2) $u=0$ on $\{x_n=0\}$.

This allows us to let $-\partial_{x_i} (\xi \partial_{x_j} u)$ (with derivatives replaced by difference quotients) be an admissible test function for our weak definition of a solution.

I presume the main difficulty with Neumann boundary data is making the test function admissible. In other words, we would need $\int v = 0$, since our existence result was established on $H^1(\Omega)$ restricted to mean-value-zero functions. So in order to proceed, can we just subtract off a constant from our original $-\partial_{x_i}(\xi \partial_{x_j}u)$? Is there some more natural way to establish regularity in this case? I do not want to take advantage of the fact that we have a Green's function in this case, however, as I only chose the Laplace equation for simplicity.

## Answer

In the case that you mentioned, we want to avoid this cut-off/difference-quotients approach, since it could be hard to prove that $\partial_{x_i} (\xi \partial_{x_j} u)$ is a valid test function. In general, when working with regularity theory, another standard approach is to use an 'approximated problem'. The kind of approximated problem, of course, depends on the PDE. For the Neumann-like problem I suggest the following approximation:

First observe that since $\int_\Omega f = 0$, we can easily construct a sequence $\{f_n\} \subset C^\infty(\Omega)$ such that $f_n \to f$ in $L^2(\Omega)$ and $\int_\Omega f_n=0, \ \forall n \in \mathbb{N}$. Then we consider $u_n \in C^\infty(\Omega)$ such that

$(*)\quad -\Delta u_n + \frac{1}{n} u_n = f_n \ \mbox{ in } \Omega, \qquad \dfrac{\partial u_n}{\partial \nu}=0 \ \mbox{ on } \partial \Omega.$

The sequence $\{ u_n \}$ can be obtained by the use of Theorem 2.2.2.5, p. 91, and Theorem 2.5.1.1, p. 121, of Grisvard's book. In fact you just need to use a bootstrap argument on $-\Delta u_n = -\frac{1}{n} u_n + f_n$. Notice that $\int_\Omega u_n=n\int_\Omega f_n = 0$.

Now, use $u_n$ as your test function and obtain the following estimate:

$(**)\quad \|\nabla u_n\|_{L^2}^2 \leq \|f_n\|_{L^2}^2, \quad \forall \ n \in \mathbb{N}.$

Next, use $-\Delta u_n$ as a test function in your PDE (observe that $-\Delta u_n$ is a valid test function; in any case we don't need to worry about this, since the approximated equation holds everywhere). After integrating by parts and using $(**)$ with some standard manipulations of the boundary terms, you end up with $\|D^2 u_n \|_{L^2}^2 \leq C(\partial \Omega)\|f_n\|_{L^2}^2$, $\forall \ n \in \mathbb{N}$. (For instance, see Grisvard's book, pp. 132-138, in particular eq. 3.1.1.5.) The key point in this estimation is to control the boundary terms in terms of the mean curvature of $\partial \Omega$.
Now, since $\int_\Omega u_n =0$, we conclude that $\|u_n \|_{H^2}^2 \leq C(\partial \Omega)\|f_n \|_{L^2}^2$, so that $\|u_n \|_{H^2}^2 \leq C(\partial \Omega) \|f\|_{L^2}^2$. In this way we obtain $u\in H^2$ such that $u_n \to u$ weakly in $H^2$ and strongly in $H^1$. Observe that the latter convergence is sufficient to handle the term $\dfrac{1}{n}u_n$. Then we can pass to the limit in equation (*), so that $u$ is a strong solution of

$-\Delta u = f$ in $\Omega$, $\qquad \dfrac{\partial u}{\partial \nu}=0$ on $\partial \Omega$,

with $\|u\|_{H^2} \leq C(\partial \Omega) \|f\|_{L^2}$.
2016-05-06 17:20:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9439264535903931, "perplexity": 153.69022649952262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461861848830.49/warc/CC-MAIN-20160428164408-00039-ip-10-239-7-51.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/239784/number-of-solutions-to-a-modular-congruence
# Number of solutions to a modular congruence

What methods are there for determining the number of solutions to modular congruences of the form $a^m \equiv b^n k \pmod{p}$ with $1 \leq a,b \leq p-1$, where $p$ is a prime?

In the case that $m,n$ are coprime we can do the following. We see that the condition in question is equivalent to $a^m b^{-n} \equiv k \pmod{p}$, from which we obtain a bijective mapping $(a,b) \mapsto (a,b^{-1})$ and so an equivalent number of solutions to $a^m b^n \equiv k \pmod{p}$.

Lemma. For any prime $p$ and integers $k,m,n$ with $p \nmid k$ and $(m,n)=1$, the number of solutions to $a^mb^n \equiv k \pmod p$ with $1 \le a,b \le p-1$ is equal to $p-1$.

Proof. Let $s_k$ be the number of solutions to this equation. We have $\sum_{k} s_k=(p-1)^2$, and since $m,n$ are coprime we obtain, using Bézout's identity, integers $x,y$ such that $mx+ny = 1$. Now letting $k,k'$ be arbitrary integers with $q \equiv \dfrac{k'}{k} \pmod{p}$ and $1 \leq k' \leq p-1$, we see that for any solution $(a,b)$ to $X^m Y^n \equiv k \pmod{p}$, $(aq^x, bq^y)$ is a solution to $X^m Y^n \equiv k' \pmod{p}$. Since the mapping $(a,b) \mapsto (aq^x,bq^y)$ is bijective, we conclude that $s_k = s_{k'}$. Thus all the counts $s_k$ are equal, and we have $(p-1)s_k = (p-1)^2$, hence $s_k = p-1$. $\square$

Question: What can we do in the case where $m,n$ aren't coprime?

• What is fixed, and what is moving? Are $m,n,k,p$ all given, and the question is how many pairs $(a,b)$? – Gerry Myerson May 25 '16 at 23:24
• @GerryMyerson That's right. Also, I made a typo and meant $m,n$ to be coprime. – user19405892 May 25 '16 at 23:28

If $m=m'd, n=n'd$ with $(m',n')=1$, then $a^{m'}b^{-n'}$ is equidistributed on $\{1,...,p-1\}$. So $a^mb^{-n} = (a^{m'}b^{-n'})^d$ is equidistributed on the nonzero $d$th powers.

• How can you conclude that $(a^{m'}b^{-n'})^d$ is equidistributed on the nonzero $d$th powers? – user19405892 May 26 '16 at 17:50
• Units that are $d$th powers are $d$th powers in exactly $(d,p-1)$ ways. Since this is constant on the nonzero $d$th powers, equidistribution on $\{1,...,p-1\}$ pushes forward to equidistribution on the nonzero $d$th powers. – Douglas Zare May 26 '16 at 17:57
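A brute-force sanity check of the lemma (a sketch in Python; the parameters $p=13$, $m=3$, $n=5$ are arbitrary illustrative choices):

    from math import gcd

    def count_solutions(p, m, n, k):
        # count pairs (a, b) with 1 <= a, b <= p-1 and a^m * b^n == k (mod p)
        return sum(1 for a in range(1, p) for b in range(1, p)
                   if (pow(a, m, p) * pow(b, n, p)) % p == k % p)

    p, m, n = 13, 3, 5
    assert gcd(m, n) == 1
    for k in range(1, p):
        assert count_solutions(p, m, n, k) == p - 1  # the lemma predicts 12 for every k
    print("lemma verified for p = 13, m = 3, n = 5")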
2019-04-25 15:01:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9471439719200134, "perplexity": 111.43888876728732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578721468.57/warc/CC-MAIN-20190425134058-20190425160058-00339.warc.gz"}
http://reproducible.predictiveecology.org/reference/mergeCache.html
All the cacheFrom artifacts will be put into the cacheTo repository. All userTags will be copied verbatim, including accessed, with one exception: date will be the current Sys.time() at the time of merging. The createdDate column will similarly be the current time of merging.

mergeCache(cacheTo, cacheFrom)

# S4 method for ANY
mergeCache(cacheTo, cacheFrom)

## Arguments

cacheTo: The cache repository (character string of the file path) that will become larger, i.e., the one merged into.

cacheFrom: The cache repository (character string of the file path) from which all objects will be taken and copied.

## Value

The character string of the path of cacheTo, i.e., not the objects themselves.

## Details

This is still experimental.
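A hedged usage sketch in R (the cache paths are placeholders, and both repositories are assumed to have been populated by reproducible::Cache beforehand):

    library(reproducible)

    cacheA <- "~/caches/projectA"   # hypothetical path: the repository to grow
    cacheB <- "~/caches/projectB"   # hypothetical path: the repository to copy from

    # copies every artifact of cacheB into cacheA and returns the path of cacheA
    merged <- mergeCache(cacheTo = cacheA, cacheFrom = cacheB)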
2018-06-25 05:47:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24960441887378693, "perplexity": 6771.805504249602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867493.99/warc/CC-MAIN-20180625053151-20180625073151-00518.warc.gz"}
https://skyciv.com/tutorials/calculate-the-centroid-of-a-beam-section/
# How to Calculate the Centroid of a Beam Section

The centroid or center of mass of beam sections is useful for beam analysis when the moment of inertia is required for calculations such as shear/bending stress and deflection. Beam sections are usually made up of one or more shapes. So to find the centroid of an entire beam section area, it first needs to be split into appropriate segments. After this, the area and centroid of each individual segment need to be considered to find the centroid of the entire section.

Consider the I-beam section shown below. To calculate the vertical centroid (in the y-direction) it can be split into 3 segments as illustrated. Now we simply need to use the formula for calculating the vertical (y) centroid of a multi-segment shape:

$$\bar{y}=\frac{\sum y_i A_i}{\sum A_i}$$

where $A_i$ is the individual segment's area and $y_i$ is the individual segment's centroid distance from a reference line or datum.

We will take the datum or reference line from the bottom of the beam section. Now let's find $A_i$ and $y_i$ for each segment of the I-beam section shown above so that the vertical (y) centroid can be found.

Segment 1: $A_1 = 250\times 38 = 9500\ \text{mm}^2$, $y_1 = 38+300+\tfrac{38}{2} = 357\ \text{mm}$

Segment 2: $A_2 = 300\times 25 = 7500\ \text{mm}^2$, $y_2 = 38+\tfrac{300}{2} = 188\ \text{mm}$

Segment 3: $A_3 = 38\times 150 = 5700\ \text{mm}^2$, $y_3 = \tfrac{38}{2} = 19\ \text{mm}$

$$\bar{y}=\frac{\sum y_i A_i}{\sum A_i}=\frac{y_1A_1+y_2A_2+y_3A_3}{A_1+A_2+A_3}=\frac{(357\times 9500)+(188\times 7500)+(19\times 5700)}{9500+7500+5700}=216.29\ \text{mm}$$

Of course you don't need to do all these calculations manually, because you can use our fantastic Free Moment of Inertia Calculator to find the vertical (y) and horizontal (x) centroids of beam sections.

Visit the next step: How to Calculate the Moment of Inertia of a Beam Section.

• Safi: Hi dear! I have a problem finding the d value. Is there anyone to help me?
• Alex Martin: When applying this equation for the y-centroid it works perfectly, but I can't seem to get this to work for the x-centroid. Is there anything different when calculating the x-centroid (other than values)?
• Hi Alex. It should all be the same. I guess be careful where your datum (reference) line is. Share a pic of your section/shape and I can put the working down so you can see if/where you're going wrong.
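The same computation as a short Python script (a sketch; the segment dimensions are those of the worked I-beam example above, in mm):

    # each entry is (area_mm2, ybar_mm): area and centroid height of a segment
    segments = [
        (250 * 38, 38 + 300 + 38 / 2),   # top flange
        (300 * 25, 38 + 300 / 2),        # web
        (38 * 150, 38 / 2),              # bottom flange
    ]

    y_bar = sum(A * y for A, y in segments) / sum(A for A, _ in segments)
    print(f"y centroid = {y_bar:.2f} mm")  # ~216.29 mm, matching the hand calculation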
2017-01-24 09:01:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7593434453010559, "perplexity": 767.9945866636008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00091-ip-10-171-10-70.ec2.internal.warc.gz"}
https://modwiki.dhewm3.org/Seta_%28console_command%29
# Seta (console command) ## Description This command sets the specified CVar to a given value and archives it (hence the ‘a’ in the command) in the user’s config file. ## Usage At the console type… seta <CVar> <value> ## Parameters • <CVar> - The CVar to modify. • <value> - the value to set the CVar to. Values can be written as a string. ## Notes If your value is a string, don’t forget to put quotes around it. You can sometimes do without, but results may vary if the string contains space characters.
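For example, to archive CVars with string values (com_showFPS and ui_name are stock Doom 3 / id Tech 4 CVars, named here from memory purely as illustrations):

    seta com_showFPS "1"
    seta ui_name "John Doe"

The second value contains a space, so the quotes are required; the first would typically also work unquoted.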
2019-06-26 15:22:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17953184247016907, "perplexity": 6331.927473311088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000353.82/warc/CC-MAIN-20190626134339-20190626160339-00437.warc.gz"}
https://tex.stackexchange.com/questions/309072/how-to-add-abstract-for-selected-chapters-in-my-thesis/309201
# How to add abstract for selected chapters in my thesis I am trying to add abstract for some selected chapters in my thesis. I am using shareLatex and I selected a template they offer for free (the Cambridge one). The problem is that it has lots of folders in that file and you have to work in each file independently of the other. I am still somehow new to latex in this way and I would like a detail explanation please. For example, in this shareLatex one of the folders is called "Macros" the other are called "Classes" and so on. The thesis.tex has the following %input macros (i.e. write your own macros file called MacroFile1.tex) %\include{Macros/MacroFile1} \documentclass[oneside,12pt]{Classes/CUEDthesisPSnPDF} \ifpdf \pdfinfo { /Title (CUED PhD and MPhil Thesis Classes) /Creator (TeX) /Producer (pdfTeX) /Author (######@gmail.com) /CreationDate (D:20030101000000) %format D:YYYYMMDDhhmmss /ModDate (D:20030815213532) /Subject (Writing a PhD thesis in LaTeX) /Keywords (PhD, Thesis)} \pdfcatalog { /PageMode (/UseOutlines) /OpenAction (fitbh) } \fi \title{Writing a PhD Thesis\\[1ex] in \LaTeXe} \ifpdf \author{\href{mailto:####5@gmail.com}{######}} \collegeordept{\href{http://business-school.exeter.ac.uk/research/areas/topics/economics}{Department of Economics}} \university{\href{http://www.exeter.ac.uk}{University of Exeter}} % insert below the file name that contains the crest in-place of 'UnivShield' \crest{\includegraphics[width=90mm]{UnivShield}} \else \author{####} \collegeordept{######} \university{#######} % insert below the file name that contains the crest in-place of 'UnivShield' \crest{\includegraphics[bb = 0 0 292 336, width=30mm]{UnivShield}} \fi % % insert below the file name that contains the crest in-place of 'UnivShield' % \crest{\IncludeGraphicsW{UnivShield}{40mm}{14 14 73 81}} % %\renewcommand{\submittedtext}{change the default text here if needed} \degree{Doctor of Philosophy} \degreedate{Yet to be decided} % turn of those nasty overfull and underfull hboxes \hbadness=10000 \hfuzz=50pt % Put all the style files you want in the directory StyleFiles and usepackage like this: \usepackage{StyleFiles/watermark} % Comment out the next line to get single spacing \onehalfspacing \usepackage{natbib} \usepackage{mathtools} \usepackage{amsfonts} \usepackage{amsthm} \usepackage{etex,etoolbox} \usepackage{amsthm,amssymb} \usepackage{thmtools} \usepackage{environ} \usepackage{thmtools,thm-restate} % ------------------------------------------------------------------------ \bibliographystyle{agsm} % ------------------------------------------------------------------------ \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{proposition}{Proposition} \newtheorem{claim}{Claim} \newtheorem{fact}{Fact} \newtheorem{corollary}{Corollary} \newtheorem{assumption}{Assumption} \newtheorem{algoritheorem}{Algoritheorem} \newtheorem{remark}{Remark} \newtheorem{definition}{Definition} \newtheoremstyle{named}{}{}{\itshape}{}{\bfseries}{.}{.5em}{\thmnote{#3's }#1} \theoremstyle{named} \newtheorem*{namedtheorem}{Theorem} % ------------------------------------------------------------------------ \newcommand\equDis{\,{\buildrel d \over =}\,} \newcommand\AsyDis{\xrightarrow[]{d}} \newcommand\AsyArrow{\xrightarrow[]{}} % ------------------------------------------------------------------------ % ------------------------------------------------------------------------ \DeclareMathOperator{\PR}{\mathbb{P}} \DeclareMathOperator{\RNB}{\mathbb{R}} % Real NUmber Bold \DeclareMathOperator{\sign}{sign} % 
% sign Operator
% ------------------------------------------------------------------------
\begin{document}
%\language{english}

% A page with the abstract on including title and author etc may be
% required to be handed in separately. If this is not so, then comment
% the below 3 lines (between '\begin{abstractseparate}' and
% '\end{abstractseparate}'), normally like a declaration ... needs some more
% work, mind as environment abstracts creates a new page!
% \begin{abstractseparate}
%   \input{Abstract/abstract}
% \end{abstractseparate}

% Using the watermark package which is in StyleFiles/
% and to remove DRAFT COPY ONLY appearing on the top of all pages comment out below line
%\watermark{DRAFT COPY ONLY}

\maketitle

%set the number of sectioning levels that get number and appear in the contents
\setcounter{secnumdepth}{3}
\setcounter{tocdepth}{3}

\frontmatter % book mode only
\pagenumbering{roman}
\include{Dedication/dedication}
\include{Acknowledgement/acknowledgement}
\include{Abstract/abstract}
\tableofcontents
\listoffigures
\printnomenclature %% Print the nomenclature
\addcontentsline{toc}{chapter}{Nomenclature}

\mainmatter % book mode only
\include{Introduction/introduction}
\include{Chapter1/chapter1}
\include{Chapter2/chapter2}
\include{Chapter3/chapter3}
\include{Conclusions/conclusions}

\backmatter % book mode only
\appendix
\include{Appendix1/appendix1}
\include{Appendix2/appendix2}

\renewcommand{\bibname}{References} % changes default name Bibliography to References
\bibliography{References/references} % References file

\end{document}

• Is that template confusing? Yes it is. But your real question, which seems to be in the title, isn't explained further. Can you elaborate a bit on that? – Johannes_B May 11 '16 at 17:24
• I am basically dealing with sharelatex.com/project; there are templates that I like, and the one I am using is great. I just want to amend it so that I may insert an abstract in each chapter, and for a certain reason it is not as easy. If you have an account on ShareLaTeX, the template is that of the University of Cambridge sharelatex.com/templates/thesis/… For example, the first chapter has \begin{abstract}, then chapter 2 has \begin{abstract}, ... – rsc05 May 11 '16 at 18:21

## 1 Answer

Looking around, it seems that each LaTeX project declares its own document class (\documentclass). In the template I was using, I had to open the folder called "Classes" and find the file "CUEDthesisPSnPDF". In that file I searched for "abstract", and below it I wrote the following environment:

\newenvironment{chapabstract}
  {% begin-code: a centered heading followed by an indented list for the body
    \begin{center}%
      \vspace*{1.5cm}
      \Large \bfseries Abstract
      \vspace{-.5em}\vspace{0pt}
    \end{center}
    \list{}{%
      \setlength{\leftmargin}{5mm}% <---------- CHANGE HERE
      \setlength{\rightmargin}{\leftmargin}%
    }%
    \item\relax
  }
  {\endlist}% end-code: \endlist closes the list opened above (a bare \par would leave it open)
% no \makeatletter/\makeatother pair is needed here, since the definition uses no @-commands

Therefore, in each chapter, I have done the following:

\chapter{Hello}
\begin{chapabstract}
  I am good
\end{chapabstract}
\section{One}

• That really should go into your main tex file, not the class file. – Johannes_B May 12 '16 at 6:13
• In that template, there is no main tex file. There is a file known as thesis.tex, which I think is the main one. However, over there, there are no \newenvironment commands, but in the file I showed there are. If you have any suggestions, please post your answer so that everyone benefits. Thank you in advance. – rsc05 May 12 '16 at 9:07
• You can safely move that definition to thesis.tex and remove it from the class.
This is partly a copyright issue, and partly a matter of different files having the same name (but different content). It is a mess for helpers. – Johannes_B May 12 '16 at 16:21
2020-03-28 15:38:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7884994149208069, "perplexity": 3380.9631672884493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370491998.11/warc/CC-MAIN-20200328134227-20200328164227-00040.warc.gz"}
https://www.physicsforums.com/threads/question-about-rearranging-formula-in-brian-coxs-why-does-e-mc2.575731/
# Question about rearranging formula in Brian Cox's Why Does E=MC2

#### DPR

This has been driving me insane. I don't get how he went from $\Delta s/c$ to $\Delta t/\gamma$. If someone could explain step by step how to do it, I would greatly appreciate it.

"Recall that we arrived at an expression for the length of the momentum vector in three-dimensional space, $m\Delta x/\Delta t$. We have just argued that $\Delta x$ should be replaced by $\Delta s$ and $\Delta t$ should be replaced by $\Delta s/c$ to form the four-dimensional momentum vector, which has a seemingly rather uninteresting length of $mc$. Indulge us for one more paragraph, and let us write the replacement for $\Delta t$, i.e., $\Delta s/c$, in full. $\Delta s/c$ is equal to $\sqrt{(c\Delta t)^2-(\Delta x)^2}/c$. This is a bit of a mouthful, but a little mathematical manipulation allows us to write it in a simpler form, i.e., it can also be written as $\Delta t/\gamma$ where $\gamma = 1/\sqrt{1-v^2/c^2}$. To obtain that, we have used the fact that $v = \Delta x/\Delta t$ is the speed of the object. Now $\gamma$ is none other than the quantity we met in Chapter 3 that quantifies the amount by which time slows down from the point of view of someone observing a clock fly past at speed." (pg 127)

#### DPR
Could someone help me out with this, or give me some tips on what to study to be able to figure this out?

#### Matterwave (Gold Member)
s is defined as $s=\sqrt{-(\Delta x)^2+c^2(\Delta t)^2}$. Now divide both sides by $c\Delta t$ and you get:
$$\frac{s}{c\Delta t}=\sqrt{\frac{-(\Delta x)^2}{c^2(\Delta t)^2}+1}$$
Now notice that $\frac{\Delta x}{\Delta t}=v$ to get:
$$\frac{s}{c\Delta t}=\sqrt{-\frac{v^2}{c^2}+1}$$
Move the t to the right hand side and use the definition of gamma to get:
$$\frac{s}{c}=\frac{\Delta t}{\gamma}$$

#### DPR
hey thanks! I really need to brush up on my math!

#### Matterwave (Gold Member)
One additional point. We often call $\frac{s}{c}$ the proper time $\tau=\frac{s}{c}$. You may see this proper time pop up more often depending on which sources you're reading.

#### DPR
Thanks. How exactly did you do this step?

Now divide both sides by $c\Delta t$ and you get:
$$\frac{s}{c\Delta t}=\sqrt{\frac{-(\Delta x)^2}{c^2(\Delta t)^2}+1}$$

I get everything after that.

#### elfmotat
$$\frac{s}{c\Delta t}=\frac{\sqrt{-(\Delta x)^2+c^2(\Delta t)^2}}{\sqrt{(c\Delta t)^2}}=\sqrt{\frac{-(\Delta x)^2+c^2(\Delta t)^2}{c^2(\Delta t)^2}}=\sqrt{\frac{-(\Delta x)^2}{c^2(\Delta t)^2}+\frac{c^2(\Delta t)^2}{c^2(\Delta t)^2}}=\sqrt{\frac{-(\Delta x)^2}{c^2(\Delta t)^2}+1}$$

#### Matterwave (Gold Member)
Just divide both sides like I said. You have to move the $c\Delta t$ inside the square root and divide both sides by it. EDIT: What elfmotat said.

#### DPR
thanks guys!
2019-08-23 15:54:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6671697497367859, "perplexity": 738.8868615115922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318894.83/warc/CC-MAIN-20190823150804-20190823172804-00160.warc.gz"}
https://deseng.ryerson.ca/dokuwiki/design:concept_refinement
# Concept Refinement

Refining design concepts in light of the results of concept evaluation helps you find the best possible concept early in a design process.

You've performed a concept evaluation exercise, but you still have more than one design concept that seems viable. Now you want to tweak the remaining concepts to see if you can come up with even better ones. This is concept refinement.

## How can we refine concepts?

There are 4 ways of doing this.

### Add newly discovered concepts

It may be that, in the course of carrying out the concept evaluation, you or your teammates discovered some new concepts. This is the time to add them into the mix. Alternatively, you may return to develop one or two more design concepts as you did earlier in the process.

Documentation: Remember to note how it was that the new concept was conceived of; you'll need to explain this in your design report.

### Fix the worst aspects of the best concepts

You may have a concept that fared well in the decision matrix in all but one or two significant criteria and, as a result, got a relatively weak overall score. If you can improve its performance in just those one or two areas, the concept might become a leading contender. So consider such concepts carefully, and look to see if anything can be done about those one or two significant aspects in which the concept did poorly.

Documentation: Remember to note how exactly you changed each concept; you'll need to explain this in your design report.

### Consider the best aspects of the worst concepts

Even the worst concept may have done well with respect to one or two criteria. Is there some way to "borrow" those good aspects of the bad concepts and embed them into other, more successful concepts? If you can embed a good aspect from a poor concept into a good concept, it might make the good one even better.

Documentation: Remember to note which concepts were "borrowed" from, what you did with them, and why; you'll need to explain this in your design report.

### Blend two of the best concepts together

Take, say, the second and third best designs and see if you can combine them into some new, combined concept. It might result in a new concept that is the best of all.

Documentation: Remember to note which concepts were combined, how you did that, which features you kept, and which you discarded; you'll need to explain this in your design report.

## Adding refined concepts to the decision matrix

Each new or modified concept must get its own column in the decision matrix. For example, say you are designing a stapler and there was one concept – concept B – that evaluated very well, except in the area of cost. Say further that you have found a way of addressing that problem. You do not just change concept B. Instead, you leave concept B and introduce a new concept – say, B' or B2 – which embodies the modification. Now you have to evaluate B' as if it were an entirely new concept.

If you have blended two concepts – say, concepts B and D – then the blended concept could be called BD to indicate how it was derived, and would get its own column in the decision matrix. You can name the concepts whatever you like, so long as you are consistent and sensible.

This is very important: each concept must be evaluated in its entirety. You cannot just cut and paste values from the original columns into the new columns. This is because changing one feature of a concept may have implications for other features.
If you get this wrong, and choose the wrong concept because you evaluated the concepts incorrectly, then you've ruined your chances of coming up with a good design at the end.

Remember, never add more new or modified concepts than you removed in a previous iteration, or you'll never converge to a "best" design concept. So, for instance, if you started your decision matrix with 10 concepts and eliminated 7 of them via concept evaluation, then in this refinement stage you would only add 2-5 new concepts.

## Deliverables

The deliverables of the concept refinement activity include (a) the concepts that survived the last concept evaluation stage and (b) a few more concepts that you developed during refinement (of course, with explanations of how you generated the new concepts).

You now have to add the new concepts to the decision matrix and conduct another concept evaluation activity. You keep looping like this, in principle, until you have found a single concept that is a clear winner. However, in classes, where there is limited time, you only need to do the concept evaluation activity twice (with one pass through concept refinement between them). Thereafter, you can just pick the "best" concept from the decision matrix.
2022-05-21 22:34:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4097555875778198, "perplexity": 782.4808446568077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662541747.38/warc/CC-MAIN-20220521205757-20220521235757-00099.warc.gz"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?p=151734
## Half-Life 1st Order Reaction

$\frac{d[R]}{dt}=-k[R]; \ln [R]=-kt + \ln [R]_{0}; t_{\frac{1}{2}}=\frac{0.693}{k}$

Olivia Young 1A: Is the equation t1/2 = 0.693/k valid for all first order reactions? And if so, is it because the initial concentration of A cancels out and leaves ln(1/2)?

Dimitri Speron 1C: Yes, that's exactly correct.

Megan_Ervin_1F: Also remember that you can always derive this equation if you have any doubts.

Erin Kim 2G: For first order reactions the half-life equation is t(1/2) = ln2/k. It is applicable to all first order rate reactions.

Annalyn Diaz 1J: Why is it that the half-life of a first order reaction doesn't depend on initial concentration?

Nicholas Le 4H: Yes, it is applicable to all first order rate reactions.

Amy Dinh 1A: The initial concentration gets cancelled out when you derive the equation. Thus you end up with the equation t1/2 = 0.693/k.
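For completeness, the derivation the replies allude to, starting from the integrated first-order law quoted in the thread header: setting $[R] = [R]_0/2$ at $t = t_{1/2}$,

$$\ln[R]=-kt+\ln[R]_{0}\ \Rightarrow\ \ln\frac{[R]_{0}/2}{[R]_{0}}=-k\,t_{1/2}\ \Rightarrow\ t_{1/2}=\frac{\ln 2}{k}\approx\frac{0.693}{k}.$$

The initial concentration $[R]_0$ cancels in the ratio, which is exactly why the first-order half-life does not depend on it.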
2021-01-25 19:28:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6104438304901123, "perplexity": 11534.835992180195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703644033.96/warc/CC-MAIN-20210125185643-20210125215643-00593.warc.gz"}
https://hal.archives-ouvertes.fr/hal-01657883v1
# Results in descriptive set theory on some represented spaces

CARTE - Theoretical adverse computations, and safety; Inria Nancy - Grand Est; LORIA - FM - Department of Formal Methods

Abstract: Descriptive set theory was originally developed on Polish spaces. It was later extended to ω-continuous domains [Selivanov 2004] and recently to quasi-Polish spaces [de Brecht 2013]. All these spaces are countably-based. Extending descriptive set theory and its effective counterpart to general represented spaces, including non-countably-based spaces, has been started in [Pauly, de Brecht 2015]. We study the spaces $O(N^N)$, $C(N^N, 2)$ and the Kleene-Kreisel spaces $N\langle α\rangle$. We show that there is a $Σ^0_2$-subset of $O(N^N)$ which is not Borel. We show that the open subsets of $N^{N^N}$ cannot be continuously indexed by elements of $N^N$ or even $N^{N^N}$, and more generally that the open subsets of $N\langle α\rangle$ cannot be continuously indexed by elements of $N\langle α\rangle$. We also derive effective versions of these results. These results give answers to recent open questions on the classification of spaces in terms of their base-complexity, introduced in [de Brecht, Schröder, Selivanov 2016]. In order to obtain these results, we develop general techniques which are refinements of Cantor's diagonal argument involving multi-valued fixed-point free functions and that are interesting in their own right.

Document type: Preprint / working paper, 2017.
Submitted on: Thursday, December 7, 2017. Last modified: Friday, December 8, 2017.
Contributor: Mathieu Hoyrup.
Citation: Mathieu Hoyrup. Results in descriptive set theory on some represented spaces. 2017. ⟨hal-01657883⟩ (HAL Id: hal-01657883, version 1.)
2017-12-14 22:33:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7364786863327026, "perplexity": 2707.3358669586287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948551162.54/warc/CC-MAIN-20171214222204-20171215002204-00092.warc.gz"}
https://math.stackexchange.com/questions/1261062/how-to-prove-that-a-zp-for-some-z-in-mathbbz
# How to prove that $a=z^{p}$ for some $z \in \mathbb{Z_{+}}$?

Claim: If for a positive, composite integer $a$ and an odd prime $p$, such that $\gcd(a,p)=1$, we are given $$a^{p^{n-2}(p-1)} \equiv 1 \pmod {p^n} \ \forall \ n \geq 2, \ n \in \mathbb{Z_{+}},$$ then $a=z^{p}$ for some $z \in \mathbb {Z_{+}}$.

Also note that the congruence holds for all positive integers $n$, not just an arbitrary one.

My Approach: At first glance, it seems trivial to me, because by Euler's Theorem $a^{\phi(p^{n})} \equiv 1 \pmod {p^n}$, and since $\phi(p^n) = p^{n-1}(p-1)$, I think that $a$ should be a perfect $p$-th power to account for the missing factor of $p$ in the exponent. But I don't have a formal proof for this. Is my reasoning enough, or do I need a more rigorous argument?

On observing more carefully, I found the following. Let the prime factorization of $a$ be $\displaystyle \prod_{i=1}^{m} {q_{i}}^{k_{i}}$, where the $q_{i}$ are the prime factors of $a$. If we can prove that whenever the modular equation holds true there exists at least one $k_{i} \geq p$, then we are done. This is because we can then write $a=j h^{p}$, and plugging it into the modular equation gives the same condition for $j$. This sets up an infinite descent and proves that $j=1$, so the claim holds.

Query: As Mr. Greg Martin stated, the claim is false in general. However, if the modular equation is true for a fixed $n$, then $a \equiv z^p \pmod {p^{n}}$:

$\therefore a \equiv {k_{1}}^p \pmod {p}$
$a \equiv {k_{2}}^p \pmod {p^{2}}$
$\vdots$
$a \equiv {k_{n}}^{p} \pmod{p^n}$
$\implies a = {k_{n}}^{p} + j p^{n}$

If we choose $n$ to be sufficiently large, then $j=0$, so $a={k_{n}}^{p}$.

• If you think $a$ is a $p$-th power in the integers - certainly not. You're right that there is good reason to expect that $a$ might be a $p$-th power, but everything is occurring mod $p^n$ and you are unjustified in believing it carries up to the integers. That is a wrong feeling to have. – anon May 1 '15 at 19:42
• @anon The congruence equation I've written can be simplified, according to Mr. quid's answer, into the fact that $a \equiv z^{p} \pmod{p^n}$, or $p^n \mid (a-z^{p}) \ \forall \ n\geq 2, \ n \in \mathbb{Z}$. If, in case my claim is not true, then $a-z^p$ will be divisible by all the numbers $p, p^2, p^3, \dots$ ad infinitum, and this would lead to a contradiction. What do you think? – MathGod May 1 '15 at 19:52
• MathGod, it is not exactly this, since the $z$ could depend on $n$; but I agree that @anon likely read the quantifiers as I did initially. – quid May 1 '15 at 19:55
• I am very sorry but there is still a flaw in my argument. I retract it. At least for now. Sorry. – quid May 1 '15 at 20:34
• @quid I was just reading the wikipedia article to better understand your answer; in the meanwhile you retracted it... Btw, can you tell me what was the flaw in your argument? – MathGod May 1 '15 at 20:37

Let $a$ be any integer such that $a^{p-1}\equiv1\pmod{p^2}$. Then $$a^{p(p-1)} - 1 = (a^{p-1}-1)\big( (a^{p-1})^{p-1} + (a^{p-1})^{p-2} + \cdots + (a^{p-1})^1 + 1 \big);$$ the first factor is divisible by $p^2$ by assumption, and the second factor (a sum of $p$ terms, each congruent to $1$) is congruent to $p\pmod {p^2}$, hence is divisible by $p$. We conclude that $a^{p(p-1)}\equiv1\pmod{p^3}$ automatically. Similarly, $$a^{p^2(p-1)} - 1 = (a^{p(p-1)}-1)\big( (a^{p(p-1)})^{p-1} + (a^{p(p-1)})^{p-2} + \cdots + (a^{p(p-1)})^1 + 1 \big)$$ is then divisible by $p^4$, etc.
In short, one can prove by induction that if $a^{p-1}\equiv1\pmod{p^2}$, then automatically $a^{p^{n-2}(p-1)}\equiv1\pmod{p^n}$ for every $n\ge2$. And $a^{p-1}\equiv1\pmod{p^2}$ certainly does not imply that $a$ must be the $p$-th power of an integer. (Examine the pairs $(a,p) = (7,5)$ and $(a,p)=(17,3)$, for example.) And note that adding $p^2$ to $a$ preserves the congruence, so $(32,5)$ and $(57,5)$ and $(82,5)$ ... are all counterexamples as well; in particular, there are plentiful counterexamples where $a$ is not prime.

• Both the examples you have given have $a$ as a prime, whereas in my question I want $a$ to be composite (which, unfortunately, I'd forgotten to add). If your reasoning is correct, can you give an example where $a$ is composite? Also, what's the flaw in Mr. Dosidis's answer? His conclusion seems to contradict yours. – MathGod May 2 '15 at 8:25
• I could answer your query - but it will be more illuminating for you if you answer it yourself! Take one of the specific examples in my answer (or $(10,3)$, say) and actually calculate your $k_1,k_2,\dots$ and the corresponding values of $j$. – Greg Martin May 3 '15 at 18:58
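The counterexamples are easy to check by machine. Below is a small sketch (not from the original thread; `powmod` is a helper written just for this check) verifying that $a^{p-1} \equiv 1 \pmod{p^2}$ for the pairs mentioned in the answer and comments, even though none of these $a$ are $p$-th powers:

```c
#include <stdio.h>

/* Modular exponentiation by repeated squaring: returns a^e mod m.
   The values here are tiny, so long arithmetic never overflows. */
static long powmod(long a, long e, long m)
{
    long r = 1 % m;
    a %= m;
    while (e > 0) {
        if (e & 1) r = r * a % m;
        a = a * a % m;
        e >>= 1;
    }
    return r;
}

int main(void)
{
    /* Pairs (a, p) from the answer and comments: a^(p-1) == 1 (mod p^2),
       yet a is not a p-th power of an integer. */
    long pairs[][2] = { {7, 5}, {17, 3}, {10, 3} };
    for (int i = 0; i < 3; i++) {
        long a = pairs[i][0], p = pairs[i][1];
        printf("a = %2ld, p = %ld: a^(p-1) mod p^2 = %ld\n",
               a, p, powmod(a, p - 1, p * p));   /* prints 1 each time */
    }
    return 0;
}
```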
2021-07-25 17:03:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.863892138004303, "perplexity": 155.7512726834874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151699.95/warc/CC-MAIN-20210725143345-20210725173345-00319.warc.gz"}
http://www.wking-china.com/article/468073
# Problem

## Problem 585: Nested square roots

Consider the term $\sqrt{x+\sqrt{y}+\sqrt{z}}$, which represents a nested square root. x, y and z are positive integers, and y and z are not allowed to be perfect squares, so the number below the outer square root is irrational. Still, it can be shown that for some combinations of x, y and z the given term can be simplified into a sum and/or difference of simple square roots of integers, actually denesting the square roots in the initial expression.

Here are some examples of this denesting:

$\sqrt{3+\sqrt{2}+\sqrt{2}}=\sqrt{2}+\sqrt{1}=\sqrt{2}+1$
$\sqrt{8+\sqrt{15}+\sqrt{15}}=\sqrt{5}+\sqrt{3}$
$\sqrt{20+\sqrt{96}+\sqrt{12}}=\sqrt{9}+\sqrt{6}+\sqrt{3}-\sqrt{2}=3+\sqrt{6}+\sqrt{3}-\sqrt{2}$
$\sqrt{28+\sqrt{160}+\sqrt{108}}=\sqrt{15}+\sqrt{6}+\sqrt{5}-\sqrt{2}$

As you can see, the integers used in the denested expression may also be perfect squares, resulting in further simplification.

Let F(n) be the number of different terms $\sqrt{x+\sqrt{y}+\sqrt{z}}$ that can be denested into the sum and/or difference of a finite number of square roots, given the additional condition that $0 < x \le n$. That is, $\sqrt{x+\sqrt{y}+\sqrt{z}}=\sum_{i=1}^ks_i\sqrt{a_i}$ with k, x, y, z and all $a_i$ being positive integers, all $s_i = \pm 1$ and $x \le n$. Furthermore, y and z are not allowed to be perfect squares.

Nested roots with the same value are not considered different; for example $\sqrt{7+\sqrt{3}+\sqrt{27}}$, $\sqrt{7+\sqrt{12}+\sqrt{12}}$ and $\sqrt{7+\sqrt{27}+\sqrt{3}}$, which can all three be denested into $2+\sqrt{3}$, would only be counted once.

You are given that F(10) = 17, F(15) = 46, F(20) = 86, F(30) = 213, F(100) = 2918 and F(5000) = 11134074.

Find F(5000000).

# Analysis

1. $i+\sqrt{b}=\sqrt{(i^2+b)+\sqrt{i^2b}+\sqrt{i^2b}},\hspace{0.5em}0<i,\ 1<b\ne{k^2}$
   Taking i = 1, b = 2 gives the 1st example in the problem.
2. $\sqrt{a}+\sqrt{b}=\sqrt{(a+b)+\sqrt{ab}+\sqrt{ab}},\hspace{0.5em}1<a<b,\ a,b,b/a\ne{k^2}$
   Taking a = 3, b = 5 gives the 2nd example in the problem.
3. $-1+\sqrt{c}+\sqrt{a}+\sqrt{ac}=\sqrt{(a+1)(c+1)+\sqrt{4c(a-1)^2}+\sqrt{4a(c-1)^2}},$ $c>a>1,\hspace{0.5em}a,c,c/a\ne{k^2}$
4. $-i+\sqrt{c}+i\sqrt{a}+\sqrt{ac}=\sqrt{(a+1)(c+i^2)+\sqrt{4i^2c(a-1)^2}+\sqrt{4a(c-i^2)^2}},$ $a>1,c>i^2>1,a\ne{c},\hspace{0.5em}a,c,c/a,a/c\ne{k^2}$
   Update: i = 2, c = 6, a = 2.5 also works.
5. $-\sqrt{b}+\sqrt{c}+\sqrt{ab}+\sqrt{ac}=\sqrt{(a+1)(c+b)+\sqrt{4bc(a-1)^2}+\sqrt{4a(c-b)^2}},$ $a>1,c>b>1,\hspace{0.5em}a,b,c,c/b,ac/b\ne{k^2}$
   Taking a = 3, b = 2, c = 3 gives the 3rd example in the problem.
   Taking a = 3, b = 2, c = 5 gives the 4th example in the problem.
   Update: a = 1.5, b = 6, c = 8 also works.

Update: Using SymPy, the following examples were found that do not belong to any of the five cases above:

$-2+\sqrt{6}+\sqrt{10}+\sqrt{15}=\sqrt{35+\sqrt{40}+\sqrt{216}}$
$-\sqrt{6}+\sqrt{8}+3+\sqrt{12}=\sqrt{35+\sqrt{24}+\sqrt{48}}$
$-\sqrt{6}+3+\sqrt{10}+\sqrt{15}=\sqrt{40+\sqrt{60}+\sqrt{96}}$
$-\sqrt{8}+\sqrt{10}+\sqrt{12}+\sqrt{15}=\sqrt{45+\sqrt{24}+\sqrt{80}}$
$-\sqrt{10}+\sqrt{12}+\sqrt{15}+\sqrt{18}=\sqrt{55+\sqrt{24}+\sqrt{120}}$

# Unsuccessful Solution

#include <stdio.h>
#include <math.h>

int isSquare(int n) { double x = sqrt(n); return x == (int)x; }
int max(int a, int b) { return (a > b) ? a : b; }

int count(int b, int c0, int c9)
{   // count: c in (c0, c9]; c,c/b != k^2
    return c9 - (int)sqrt(c9) - (int)sqrt(c9/b)
         - c0 + (int)sqrt(c0) + (int)sqrt(c0/b);
}

int count4(int a, int b, int c0, int c9)
{   // count: c in (c0, c9]; c,c/b,ac/b != k^2
    int z = 0;
    for (int d, c = c0 + 1; c <= c9; c++) {
        if (isSquare(c)) continue;
        if (c % b == 0 && isSquare(c / b)) continue;
        if ((d = a * c) % b == 0 && isSquare(d / b)) continue;
        z++;
    }
    return z;
}

long f2a(int n)
{   // i + sqrt(b): 0 < i, 1 < b != k^2
    long z = 0; // i^2 + b <= n
    for (int b, i = 1; 1 < (b = n - i * i); i++)
        z += b - (int)sqrt(b);
    return z;
}

long f2b(int n)
{   // sqrt(a) + sqrt(b): 1 < a < b; a,b,b/a != k^2
    long z = 0; // a + b <= n
    for (int b, a = 2; a < (b = n - a); a++)
        if (!isSquare(a)) z += count(a, a, b);
    return z;
}

long f4a(int n)
{   // -1 + sqrt(c) + sqrt(a) + sqrt(ac): c > a > 1; a,c,c/a != k^2
    long z = 0; // (a + 1)(c + 1) <= n
    for (int c, a = 2; a < (c = n / (a + 1) - 1); a++)
        if (!isSquare(a)) z += count(a, a, c);
    return z;
}

long f4b(int n)
{   // -i + sqrt(c) + i*sqrt(a) + sqrt(ac): c > i^2 > 1; a,c != k^2
    long z = 0; // a != c, a > 1, (a + 1)(c + i^2) <= n
    for (int i2, i = 2; (i2 = i * i) * 2 * 3 <= n; i++) {
        for (int b, c, a = 2; (b = max(a,i2)) < (c = n/(a+1)-i2); a++)
            if (!isSquare(a)) z += count(a, b, c); // c > a; c/a != k^2
        for (int a, c = i2 + 1; c < (a = n / (i2 + c) - 1); c++)
            if (!isSquare(c)) z += count(c, c, a); // a > c; a/c != k^2
    }
    return z;
}

long f4c(int n)
{   // -sqrt(b) + sqrt(c) + sqrt(ab) + sqrt(ac): a,b,c,c/b,ac/b != k^2
    long z = 0; // a > 1, c > b > 1, (a + 1)(b + c) <= n
    for (int a = 2; 5 * (a + 1) <= n; a++)
        if (!isSquare(a)) // a == (b or c) is allowed
            for (int c, b = 2; b < (c = n / (a + 1) - b); b++)
                if (!isSquare(b)) z += count4(a, b, b, c);
    return z;
}

long f(int n)
{
    return f2a(n) + f2b(n) + f4a(n) + f4b(n) + f4c(n);
}

int main()
{
    static int a[] = { 10, 15, 20, 30, 100, 5000 };
    for (int i = 0; i < sizeof(a) / sizeof(a[0]); i++)
        printf("%ld ", f(a[i])); // 2813 8237158
    printf("\n17 46 86 213 2918 11134074\n");
}

Output (computed values on the first line, expected values on the second; the mismatch at F(100) and F(5000) is why this attempt is unsuccessful):

17 46 86 213 2813 8237158
17 46 86 213 2918 11134074
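The SymPy-found identities can at least be sanity-checked numerically. A minimal check of the first one, in C to match the solution above (floating point only, so this supports rather than proves the identity):

```c
#include <stdio.h>
#include <math.h>

/* Numeric sanity check of one identity found via SymPy:
   -2 + sqrt(6) + sqrt(10) + sqrt(15) == sqrt(35 + sqrt(40) + sqrt(216)) */
int main(void)
{
    double lhs = -2.0 + sqrt(6.0) + sqrt(10.0) + sqrt(15.0);
    double rhs = sqrt(35.0 + sqrt(40.0) + sqrt(216.0));
    printf("lhs  = %.15f\n", lhs);
    printf("rhs  = %.15f\n", rhs);
    printf("diff = %.3e\n", lhs - rhs);   /* on the order of 1e-15 */
    return 0;
}
```

In fact the identity is exact: squaring the left side gives $35 + 6\sqrt{6} + 2\sqrt{10} = 35 + \sqrt{216} + \sqrt{40}$, since the cross terms $-2\sqrt{15}$ and $+\sqrt{60}\cdot\ldots$ cancel pairwise.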
2018-02-23 12:51:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26756975054740906, "perplexity": 4280.692559432385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814700.55/warc/CC-MAIN-20180223115053-20180223135053-00539.warc.gz"}
https://codegolf.stackexchange.com/questions/64555/a-mapping-of-primes
# A Mapping of Primes

Recently, I have found a bijective mapping f from positive integers to finite, nested sequences. The purpose of this challenge is to implement it in the language of your choice.

## The Mapping

Consider a number $n$ with the prime factorization $n = 2^{a_1} \cdot 3^{a_2} \cdot 5^{a_3} \cdots p_k^{a_k}$, where $p_k$ is the largest prime factor of $n$ (so $a_k > 0$, while the other exponents may be zero). Then:

$$f(1) = \{\}, \qquad f(n) = \{\, f(a_2+1),\ f(a_3+1),\ \dots,\ f(a_k+1),\ \underbrace{\{\},\dots,\{\}}_{a_1 \text{ copies}} \,\}.$$

For example: $f(10) = f(2^1 \cdot 3^0 \cdot 5^1) = \{f(1), f(2), \{\}\} = \{\{\},\{\{\}\},\{\}\}$.

## Rules

• You may write a full program or a function to do this task.
• Output can be in any format recognisable as a sequence.
• Built-ins for prime factorization, primality testing, etc. are allowed.
• Standard loopholes are disallowed.
• Your program must complete the last test case in under 10 minutes on my machine.
• This is code-golf, so the shortest code wins!

## Test Cases

• 10: {{},{{}},{}}
• 21: {{{}},{},{{}}}
• 42: {{{}},{},{{}},{}}
• 30030: {{{}},{{}},{{}},{{}},{{}},{}}
• 44100: {{{{}}},{{{}}},{{{}}},{},{}}
• 16777215: {{{{}}},{{}},{{}},{},{{}},{{}},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{},{{}}}
• 16777213: pastebin

• Is the same output, without the commas, still recognisable as a sequence? – Dennis Nov 23 '15 at 13:44
• @Dennis Yes, you can tell by the brackets. – LegionMammal978 Nov 23 '15 at 13:44
• How about the number 1? – Akangka Nov 23 '15 at 13:56
• Ooh, that is {}. – Akangka Nov 23 '15 at 13:56
• Would this be an acceptable output format? CJam doesn't distinguish between empty lists and empty strings, so this is the natural way of representing a nested array. – Dennis Nov 23 '15 at 16:10

# Pyth, 29 bytes

L+'MhMtbmYhbL&JPby/LJf}TPTSeJ

Demonstration

This defines a function, ', which performs the desired mapping. A helper function, y, performs the mapping recursively given a prime decomposition. The base case and the prime decomposition are performed in '.

# CJam, 51 48 44 42 41 39 34 33 31 bytes

{mf_W=)1|{mp},\fe=(0a*+{)J}%}:J

Try it online in the CJam interpreter. Thanks to @MartinBüttner for golfing off 3 bytes! Thanks to @PeterTaylor for golfing off 3 bytes and paving the way for 1 more!

### I/O

This is a named function that pops an integer from STDIN and pushes an array in return. Since CJam does not distinguish between empty arrays and empty strings (a string is simply a list that contains only characters), the string representation will look like this:

[[""] "" [""] ""]

referring to the following, nested array:

[[[]] [] [[]] []]

### Verification

$ wget -q pastebin.com/raw.php?i=28MmezyT -O test.ver
$ cat prime-mapping.cjam
ri {mf_W=)1|{mp},\fe=(0a*+{)J}%}:J ~
$ time cjam prime-mapping.cjam <<< 16777213 > test.out

real 0m25.116s
user 0m23.217s
sys 0m4.922s

$ diff -s <(sed 's/ //g;s/""/{}/g;y/[]/{}/' < test.out) <(tr -d , < test.ver)
Files /dev/fd/63 and /dev/fd/62 are identical

### How it works

{ }:J    Define a function (named block) J.
mf       Push the array of prime factors, with repeats.
_W=      Push a copy and extract the last, highest prime.
)1|      Increment and OR with 1.
{mp},    Push the array of primes below that integer.

If 1 is the highest prime factor, this pushes [2], since (1 + 1) | 1 = 2 | 1 = 3.
If 2 is the highest prime factor, this pushes [2], since (2 + 1) | 1 = 3 | 1 = 3.
If p > 2 is the highest prime factor, it pushes [2 ... p], since (p + 1) | 1 = p + 2, where p + 1 is even and, therefore, not a prime.

\fe=     Count the number of occurrences of each prime in the factorization. This pushes [0] for input 1.
(        Shift out the first count.
0a*      Push an array of that many 0's.
+        Append it to the exponents. This pushes [] for input 1.
{ }%     Map; for each element in the resulting array:
)J       Increment and call J.
• Blame Pastebin :P – LegionMammal978 Nov 23 '15 at 14:22
• mf e= is much better than what I'd found when I knocked up a sanity test while the question was in the sandbox, but one improvement I found which you haven't used is to do the mapping for the twos as (0a*+ - i.e. ri{}sa2*{mf_W=){mp},\fe=(0a*+0j\{)j}%*}j. And there's a much bigger improvement as well which I'll give you a few hours' headstart on... – Peter Taylor Nov 23 '15 at 14:36
• @PeterTaylor Thanks for the golf and the hint. – Dennis Nov 23 '15 at 15:04
• Yep, changing the output representation was indeed the bigger improvement. There's a better way of handling the base case too, which I've only just found, but to beat your solution I have to use two of your ideas, so: {mf_W=)1|{mp},\fe=(0a*+{)J}%}:J – Peter Taylor Nov 23 '15 at 18:59
• @PeterTaylor That one magical 1|. Thanks again! – Dennis Nov 23 '15 at 19:10

# Mathematica, 88 bytes

f@1={};f@n_:=f/@Join[1+{##2},1&~Array~#]&@@SparseArray[PrimePi@#->#2&@@@FactorInteger@n]

• The magic of undocumented internals... – LegionMammal978 Nov 23 '15 at 20:48
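For readers who don't speak CJam or Pyth, here is a reference implementation in C. It is my own sketch, not one of the posted answers; it follows the recursion described in the CJam walkthrough above (one entry $f(a_i+1)$ per odd prime up to the largest prime factor, then $a_1$ trailing empty sequences) and reproduces the listed test cases:

```c
#include <stdio.h>
#include <stdbool.h>

static bool is_prime(long p)            /* only odd p >= 3 is passed in */
{
    for (long d = 3; d * d <= p; d += 2)
        if (p % d == 0) return false;
    return true;
}

/* Prints f(n) as nested braces: f(1) = {} and, for
   n = 2^a1 * 3^a2 * ... * pk^ak (pk = largest prime factor),
   f(n) = { f(a2+1), ..., f(ak+1), then a1 copies of {} }. */
static void f(long n)
{
    putchar('{');
    if (n > 1) {
        long a1 = 0;
        while (n % 2 == 0) { n /= 2; a1++; }   /* exponent of 2 */
        bool first = true;
        for (long p = 3; n > 1; p += 2) {      /* odd primes up to pk */
            if (!is_prime(p)) continue;
            long e = 0;
            while (n % p == 0) { n /= p; e++; }
            if (!first) putchar(',');
            first = false;
            f(e + 1);                          /* one entry per odd prime */
        }
        while (a1-- > 0) {                     /* a1 trailing copies of {} */
            if (!first) putchar(',');
            first = false;
            fputs("{}", stdout);
        }
    }
    putchar('}');
}

int main(void)
{
    long tests[] = { 10, 21, 42, 30030, 44100 };
    for (int i = 0; i < 5; i++) { f(tests[i]); putchar('\n'); }
    return 0;
}
```

Compiled with `cc -O2`, this prints exactly the first five test-case sequences. (Trial division is far too slow for F-of-huge inputs, but it is fine for checking the mapping itself.)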
2019-06-18 15:33:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28541433811187744, "perplexity": 3251.4797794321626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998755.95/warc/CC-MAIN-20190618143417-20190618165417-00213.warc.gz"}
http://forums.freebsd.org/archive/index.php/t-7530.html
How do FreeBSD users LaTeX? [Archive] - The FreeBSD Forums

**Allamgir**, October 5th, 2009, 16:17

I'm considering leaving Arch Linux for FreeBSD once 8 comes out, but I need a good LaTeX setup. How do the other TeX-ies around here do it? I really shy away from many GUI solutions like Kile or Texmaker. The CLI is where it's at. However, if there's some super awesome GUI solution, I'll check it out as long as it works in a tiling wm (I like XMonad). Some things I would really like:

- Powerful syntax highlighting
- Easy compilation to PDF and side by side preview (not necessarily live, but I would like it to update whenever I compile)
- Time savers like autocompletion to end environments, placeholders, macros, etc.
- Look nice. If the GUI looks totally out of place on my system, I'll be hesitant to use it. I like the CLI because text is so consistent.
- I'll post more if I think of any.

Vim? Vim-latexsuite? Emacs and AUCTeX? Gedit with a plugin? What is the best LaTeX solution around here?

**Oxyd**, October 5th, 2009, 17:38

What is "the best" is highly subjective. For me, editors/vim, print/latex for writing, with KPDF and KDVI for viewing (both from graphics/kdegraphics3) do the job. What setup do you use on Arch? You can most likely use exactly the same setup on FreeBSD as well.

**RandomSF**, October 5th, 2009, 17:48

texlive (http://www.tug.org/texlive/) is not hard to download and install, and is one of the better, more complete variants you'll find.

**Allamgir**, October 5th, 2009, 21:59

Thanks for the replies! Well, on Arch I haven't had a complete setup in a while (since I've been OS hopping for a while and FreeBSD has just piqued my interest). I've tried emacs + AUCTeX, but I found it complicated to set up; it's complicated to learn, but I don't mind that so much. At one point I just edited the tex source with vim and compiled separately, although I found it a slight hassle to type everything longhand and then open a new terminal to compile, then constantly update my PDF viewer by reopening the file.* Vim-latexsuite sounds interesting, but I couldn't find it in ports. Is it there/available somewhere else?

*What's a PDF viewer that updates automatically upon changes to the file?

**RandomSF**, October 5th, 2009, 22:03

I use the AUCTeX plugin for vim and xpdf for viewing. Just press R to refresh after compiling a new file. No need to close and re-open.

**Allamgir**, October 5th, 2009, 22:09

There's AUCTeX for vim? I definitely need to look this up! This (http://www.vim.org/scripts/script.php?script_id=162) is what you're talking about, right? It doesn't seem to have been updated in a while, so either that means it's dead or it's pretty much perfect (I'm hoping for the latter, since I haven't found vim-latex for FreeBSD in a way I can easily keep up to date). And I didn't know xpdf had that feature!

**Oxyd**, October 5th, 2009, 22:12

Also in vim, you can just type :!latex source.tex to compile; later on it's just :! and the Up arrow, which will recall the latex command from history. Or you can write a Makefile and use the :make command. :e As for completion, there are completion plugins for vim, although I'm not a big fan of them. You may want to look for some, though.

**RandomSF**, October 5th, 2009, 22:15

Yes, Allamgir, that's the one. It may not be perfect, but most everything you would want is there and it's easy to modify.

**vivek**, October 5th, 2009, 23:44

gedit user here.
http://sourceforge.net/projects/gedit-latex/

**Allamgir**, October 6th, 2009, 00:44

Gedit seems tempting, but doesn't it pull in a bunch of GNOME dependencies?

**Allamgir**, October 6th, 2009, 02:59

Hey oxyd, I found this trick: when entering an external command in vim, % means the file name. So I can just run :!pdflatex % - just thought you might like to know that :)

October 6th, 2009, 06:24

@Allamgir I usually add the print/tetex and print/makeindex packages and type the whole document with plain vi/vim. I also used geany in the past.

> Powerful syntax highlighting

Works with both vi/vim and geany.

> Easy compilation to PDF and side by side preview (not necessarily live, but I would like it to update whenever I compile)

I use pdflatex for that, which is part of the tetex port. I also launch the new PDF every time I generate it, like that:

#! /bin/sh
TARGET=document
LATEX="pdflatex -halt-on-error"
[ $? -eq 0 ] && ${LATEX} ${TARGET}
[ $? -eq 0 ] && bibtex ${TARGET} &
[ $? -eq 0 ] && makeindex ${TARGET}.idx &
[ $? -eq 0 ] && ${LATEX} ${TARGET} | tail
[ $? -eq 0 ] && ${LATEX} ${TARGET} | tail
[ $? -eq 0 ] && ${READER} ${TARGET}.pdf &

I use it with a simple Makefile, so every time I want to create a new PDF I just type make; same for cleaning - a script that removes all generated files is wired to make clean.

> Time savers like autocompletion to end environments, placeholders, macros, etc.

If I remember correctly, geany autocompletes stuff, but vi/vim do not, at least not stock vim. Also, about the macros you mentioned, I just use my own functions in LaTeX: for code listings, for putting images, and so on...

**vivek**, October 6th, 2009, 14:21

Actually, I've Gnome installed and I liked the simplicity of gedit. YMMV.

**dennylin93**, October 6th, 2009, 14:26

I use Vim to do the editing, although I occasionally open up Texmaker. It's possible to set a key combination for building the PDF files in Vim.

**graudeejs**, October 6th, 2009, 15:04

have you tried lyx? I like it

**RandomSF**, October 6th, 2009, 15:07

Lyx, while nice for a beginner, makes LaTeX that is hard to maintain. If you are working alone, it may work very well, but when several people are working on a doc and some of them work directly with the TeX doc, as experienced LaTeXers generally do, they will not be happy.

**Oko**, October 6th, 2009, 18:56

> I'm considering leaving Arch Linux for FreeBSD once 8 comes out, but I need a good LaTeX setup. How do the other TeX-ies around here do it?

We don't. We have been waiting for Hiroki Sato to port TeXLive to FreeBSD since 2001 or something like that. In the meantime, if you really need to use TeX with BSD, you have two options. One is to switch to OpenBSD or, more recently, to NetBSD. The other one is to use the unofficial port of Romain Tartière, which is not allowed into the ports tree because we do not want to hurt Mr. Sato's feelings.

**Allamgir**, October 6th, 2009, 19:16

What's wrong with teTeX? Slackware uses that too.

October 6th, 2009, 19:20

> What's wrong with teTeX? Slackware uses that too.

I also use tetex without any problems; dunno what's the case generally.

**Oko**, October 6th, 2009, 19:30

> What's wrong with teTeX? Slackware uses that too.

It has been dead since 2005! TeXLive is the only official distribution of TeX and friends for *nix. If you were a serious TeX user you would know why teTeX is useless for advanced work.

**Allamgir**, October 6th, 2009, 21:12

Normally I just write some essays and reports, etc., occasionally with some mathematics and Greek letters. I use the default font or Latin Modern, and I set the margins with the geometry package.
As long as tetex can handle that and maybe some more, I'm OK. But how have we not yet ported texlive? I would think the FreeBSD community could do it within 4 years!

October 6th, 2009, 23:28

> It has been dead since 2005! TeXLive is the only official distribution of TeX and friends for *nix. If you were a serious TeX user you would know why teTeX is useless for advanced work.

I wrote my whole master's thesis in tetex and find it fully usable. But maybe I do not use some real advanced features; I do not feel like a \textit{LaTeX} expert by any means.

**Allamgir**, October 6th, 2009, 23:35

It looks like vermaden has plenty of graphics, non-English characters (which I normally don't need, but I hate when WYSIWYG or some other document/word processors have strange boxes and alignment issues with them), a well-organized table of contents and index, etc. I didn't really see any mathematics, but I would assume it works fine since the original TeX was designed to typeset math well way back in the 20th century ;)

So far what I'm planning to do now is use vim to edit .tex files by hand, maybe without any macros or helpers (if it really is a pain then I'll look at auctex.vim or something else), and use normal CLI commands to do my compiling. I'll view with xpdf since it has that awesome R for refresh feature.

**jrick**, October 7th, 2009, 04:42

For me it's vim and Texlive (I installed it manually). I also prefer to use xelatex because of its excellent support for OpenType fonts.

**mix_room**, October 7th, 2009, 08:33

> But maybe I do not use some real advanced features

It is hard to say if you use advanced features or not. To me it looks like you have a long document with some images included, where the images were taken from other sources: \includegraphics[file.eps]

More complicated stuff would include drawing the images directly in LaTeX, tikz for example: http://www.texample.net/tikz/examples/timing-diagram. Basically anything where MS Office (or OpenOffice.org) would suffice is simple in my mind. TOC, a well-organized bibliography, etc. are the basis on which LaTeX is built.

October 7th, 2009, 09:38

> It is hard to say if you use advanced features or not. To me it looks like you have a long document with some images included, where the images were taken from other sources: \includegraphics[file.eps]

Most of the graphics are SVG images exported as PDF in Inkscape (because I haven't found a way to import SVG images into LaTeX); there are also some PNG/JPG.
mavio% cat functions.tex

% --< FUNCTIONS >--
%
% \code{df.output}{df -h}
% \cmd{cmd_xm_list.output}{Wynik polecenia xm list.}
% \nicequote{Stan Lee}{With great power comes great responsibility.}
% \notion{AMD-V}
% \todo{co to ja mialem tu zrobic?}
% \logo{150mm}{drawing.pdf}{opis}
% \imagequiet{150mm}{drawing.pdf}{opis}
% \image{drawing.pdf}{opis}
% \imagewidth{80mm}{drawing.pdf}{opis}
% \imageborder{drawing.pdf}{opis}
% \imageborderscale{drawing.pdf}{opis}
% \imageborderwidth{150mm}{drawing.pdf}{opis}

\definecolor{gray0}{rgb}{0.8, 0.8, 0.8}
\definecolor{gray1}{rgb}{0.4, 0.4, 0.4}

\newcommand{\nicequote}[2] % usage: \nicequote{Stan Lee}{With great power comes great responsibility.}
{
  \begin{quotation}
    \small \textit{"#2"}
  \end{quotation}
  \begin{flushright}
    \textbf{\textit{#1}}
  \end{flushright}
}

\newcommand{\code}[2] % usage: \code{df.output}{df -h}
{
  \begin{figure}[!h]
    \centering
    \fvset{frame=leftline}
    \fvset{framerule=1mm}
    \fvset{framesep=2mm}
    \fvset{rulecolor=\color{gray0}}
    \fvset{formatcom=\color{gray1}}
    \fvset{numbers=left}
    \fvset{numbersep=2mm}
    \VerbatimInput{#1}
    \caption{#2}
    \label{#1}
  \end{figure}
}

\newcommand{\cmd}[2] % usage: \cmd{cmd_xm_list.output}{Wynik polecenia xm list.}
{
  \begin{figure}[!h]
    \centering
    \fvset{frame=topline}
    \fvset{framerule=1mm}
    \fvset{framesep=2mm}
    \fvset{rulecolor=\color{gray0}}
    \fvset{formatcom=\color{gray1}}
    \VerbatimInput{#1}
    \caption{#2}
    \label{#1}
  \end{figure}
}

\newcommand{\logo}[3] % usage: \logo{150mm}{drawing.pdf}{opis}
{
  \begin{figure}[!h]
    \makebox[\textwidth][r]
    {
      \includegraphics[width=#1]{#2}
    }
    \makebox[\textwidth][r]
    {
      #3
    }
  \end{figure}
}

\newcommand{\imagequiet}[3] % usage: \imagequiet{80mm}{drawing.pdf}{opis}
{
  \begin{figure}[!h]
    \centering
    \includegraphics[width=#1]{#2} \\
    \small{#3}
  \end{figure}
}

\newcommand{\image}[2] % usage: \image{drawing.pdf}{opis}
{
  \begin{figure}[!h]
    \centering
    \includegraphics[scale=0.75]{#1}
    \caption{#2}
    \label{#1}
  \end{figure}
}

\newcommand{\imagewidth}[3] % usage: \imagewidth{80mm}{drawing.pdf}{opis}
{
  \begin{figure}[!h]
    \centering
    \includegraphics[width=#1]{#2}
    \caption{#3}
    \label{#2}
  \end{figure}
}

\newcommand{\imageborder}[2] % usage: \imageborder{drawing.pdf}{opis}
{
  \begin{figure}[!h]
    \centering
    \setlength \fboxsep{2.0pt}
    \setlength \fboxrule{2.0pt}
    \fcolorbox{gray0}{white}{\includegraphics[scale=0.75]{#1}}
    \caption{#2}
    \label{#1}
  \end{figure}
}

\newcommand{\imageborderscale}[2] % usage: \imageborderscale{drawing.pdf}{opis}
{
  \begin{figure}[!h]
    \centering
    \setlength \fboxsep{2.0pt}
    \setlength \fboxrule{2.0pt}
    \fcolorbox{gray0}{white}{\includegraphics[width=150mm]{#1}}
    \caption{#2}
    \label{#1}
  \end{figure}
}

\newcommand{\imageborderwidth}[3] % usage: \imageborderwidth{150mm}{drawing.pdf}{opis}
{
  \begin{figure}[!h]
    \centering
    \setlength \fboxsep{2.0pt}
    \setlength \fboxrule{2.0pt}
    \fcolorbox{gray0}{white}{\includegraphics[width=#1]{#2}}
    \caption{#3}
    \label{#2}
  \end{figure}
}

\newcommand{\notion}[1] % usage: \notion{AMD-V}
{
  \index{#1}
  \textit{#1}\xspace
}

\newcommand{\todo}[1] % usage: \todo{co to ja mialem tu zrobic?}
{
  \textbf{TODO:} #1 \xspace
}

mavio% cat thesis.skel

\documentclass[a4paper,11pt]{report}

\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[polish]{babel}
\selectlanguage{polish}
\usepackage{indentfirst} % for: (indent first paragraph after \section)
\usepackage{mathpazo}    % for: (nice font)
\usepackage{multicol}    % for: columns
\usepackage{color}       % for: \color
\usepackage{url}         % for: \url
\usepackage{fancyvrb}    % for: \VerbatimInput
\usepackage{natbib}      % for: \cite
\usepackage{graphicx}    % for: \includegraphics
\usepackage{xspace}      % for: \xspace (remove stupid spaces)
\usepackage{hyperref}    % for: \hypersetup
\usepackage{makeidx}     % for: \makeindex
\usepackage{setspace}    % for: \onehalfspacing (1.5 spacing)
\usepackage[font=small,labelfont=bf,up]{caption} % for: \caption

\makeindex
\onehalfspacing

\hoffset = 0pt
\voffset = 0pt
\oddsidemargin = 0pt
\topmargin = 0pt
\textheight = 675pt
\textwidth = 455pt
\marginparsep = 0pt
\marginparwidth = 0pt
\footskip = 15pt
\marginparpush = 0pt

\hypersetup
{
  pdftitle={WIRTUALIZACJA SYSTEMÓW OPERACYJNYCH - Sławomir Wojciech Wojtczak},
  pdfauthor={Sławomir Wojciech Wojtczak},
  pdfsubject={WIRTUALIZACJA SYSTEMÓW OPERACYJNYCH},
  pdfkeywords={virtualization}, % keywords
  pdffitwindow=true,  % fit to window
  pdfnewwindow=true,  % links in new window
  unicode=true,       % use unicode
  urlcolor=blue,      % color of external links
  citecolor=blue,     % color of bibliography links
  filecolor=blue      % color of file links
}

\input{functions.tex}

\begin{document}

\input{ch_cover/content.tex}
\small
\tableofcontents
\normalsize

\chapter*{Wstęp}
\input{ch_intro/content.tex}
\input{ch_01/content.tex}
\chapter{Dostępne maszyny wirtualne}
\input{ch_02/content.tex}
\chapter{Wydajność maszyn wirtualnych}
\input{ch_03/content.tex}
\chapter{Przyszłość wirtualizacji}
\input{ch_04/content.tex}
\chapter{Podsumowanie}
\input{ch_05/content.tex}
\chapter*{Technikalia}
\input{ch_tech/content.tex}

\bibliographystyle{plainnat} % other: unsrtnat/cell/jas99/abbrvnat
\bibliography{thesis.bib}
\input{SKEL}
\listoffigures
\printindex

\end{document}

Example snippet from a content.tex file; generally no magic there:

\vspace{20mm}
\section{Emulacja}

Na koniec wyjaśnijmy jeszcze czym jest \notion{emulacja} systemu operacyjnego. Polega ona na stworzeniu wirtualnego środowiska, które w praktyce dla systemu guest jest postrzegane jako kompletny komputer. Wszystkie elementy wirtualnego komputera są programowo emulowane, między innymi takie jak CPU, RAM, HDD, GPU, BIOS i CD. Emulator, to więc nic innego jak kolejna aplikacja działająca w trybie ring 3, co przedstawia \textbf{rysunek \ref{ch_01/emulator.pdf}}. Potrzebuje on jednak o wiele więcej zasobów niż typowa aplikacja aby realizować swoje \\

\imageborder{ch_01/emulator.pdf}{\textit{Emulator} jest po prostu kolejną aplikacją działającą w systemie.}

Wielką zaletą emulacji jest możliwość emulowania systemów, które wymagają innej architektury sprzętowej niż architektura systemu host, na przykład emulacja architektury PowerPC na najpopularniejszej aktualnie architekturze i386, było to swego czasu popularne dzięki emulatorowi \notion{PearPC}, który był wykorzystywany do uruchamiania systemu \notion{Mac OS X} na systemach Windows. Aktualnie najpopularniejszymi aplikacjami, zapewniającymi emulację są na \notion{QEMU} \footnote{QEMU wraz z modułem \textit{kqemu} może również służyć jako maszyna wirtualna} oraz \notion{Bochs}. \\

(...)

\begin{itemize}
  \item \textbf{Equivalence (równoważność)} - Program działający pod kontrolą wirtualizacji powinien zastać środowisko identyczne jak w przypadku bezpośredniego działania na tym sprzęcie.
  \item \textbf{Resource Control (kontrola zasobów)} - Monitor
  \item \textbf{Efficiency (wydajność)} - Instrukcje w większości muszą być wykonywane bez interwencji monitora.
\end{itemize}

> More complicated stuff would include drawing the images directly in LaTeX, tikz for example: http://www.texample.net/tikz/examples/timing-diagram.
> Basically anything where MS Office (or OpenOffice.org) would suffice is simple in my mind. TOC, a well-organized bibliography, etc. are the basis on which LaTeX is built.

Ok, thanks for an example. Generally it will be far easier (and faster) for me to draw something similar in Inkscape as SVG and then export it to PDF and include it into a LaTeX document, but maybe if you know LaTeX very well it's faster.

October 7th, 2009, 09:48 (I would do that in 1 post, but the 10 000 character limit per post forces me to double post.)

> We don't. We have been waiting for Hiroki Sato to port TeXLive to FreeBSD since 2001 or something like that. In the meantime, if you really need to use TeX with BSD, you have two options. One is to switch to OpenBSD or, more recently, to NetBSD. The other one is to use the unofficial port of Romain Tartière, which is not allowed into the ports tree because we do not want to hurt Mr. Sato's feelings.

There is no texlive in ports but ... I have just downloaded texlive2008-20080822.iso from torrents (search for the texlive2008 keyword), mounted it as usual, started the installer install-tl, and everything installed without any problems at /usr/local/texlive/2008:

# cd /mnt
# ls bin | grep freebsd
amd64-freebsd
i386-freebsd
# ./install-tl

======================> TeX Live installation procedure <=====================
=======> Note: Letters/digits in <angle brackets> indicate menu items <=======
=======> for commands or configurable options <=======

Detected platform: Intel x86 with FreeBSD

<B> binary systems: 1 out of 15
<S> Installation scheme (scheme-full)
    83 collections out of 84, disk space required: 1720 MB
    Customizing installation scheme:
    <C> standard collections
    <L> language collections
<D> directories:
    TEXDIR (the main TeX directory): /usr/local/texlive/2008
    TEXMFLOCAL (directory for site-wide local files): /usr/local/texlive/texmf-local
    TEXMFSYSVAR (directory for variable and automatically generated data): /usr/local/texlive/2008/texmf-var
    TEXMFSYSCONFIG (directory for local config): /usr/local/texlive/2008/texmf-config
    TEXMFHOME (directory for user-specific files): ~/texmf
<O> options:
    [ ] use letter size instead of A4 by default
    [X] create all format files
    [X] install macro/font doc tree
    [X] install macro/font source tree
    [ ] create symlinks in standard directories
<V> set up for running from DVD
Other actions:
<I> start installation to hard disk
<H> help
<Q> quit

Enter command:

After reading the post-install message and modifying my PATH:

See /usr/local/texlive/2008/index.html for links to documentation.
The TeX Live web site (http://tug.org/texlive/)
TeX Live is a joint project of the TeX user groups around the world;
please consider supporting it by joining the group best for you.
The list of groups is available on the web at http://tug.org/usergroups.html.
Add /usr/local/texlive/2008/bin/i386-freebsd
to your PATH for current and future sessions.
Welcome to TeX Live!

My master's thesis built with texlive without any problems and looks the same as the one built using the tetex package.

mavio% echo $PATH
/usr/local/texlive/2008/bin/i386-freebsd:/sbin:/bin:(...)
mavio% make
This is pdfTeXk, Version 3.1415926-1.40.9 (Web2C 7.5.7)
(...)
LaTeX2e <2005/12/01>
Babel <v3.8l>
(...)
This is makeindex, version 2.15 [20-Nov-2007] (kpathsea + Thai support).
This is BibTeX, Version 0.99c (Web2C 7.5.7)
Output written on thesis.pdf (89 pages, 2016890 bytes).

So there is an almost instant and easy way to have a fully working TeX Live 2008 on FreeBSD.
July 17th, 2010, 13:16

> The other one is to use the unofficial port of Romain Tartière, which is not allowed into the ports tree because we do not want to hurt Mr. Sato's feelings.

... so 'great' news to all LaTeX/TeXLive users here, since Hiroki Sato has been elected to FreeBSD's core team for the next 2 years. If he sucks so much (as others here say), why did he get elected?

source: http://docs.freebsd.org/cgi/getmsg.cgi?fetch=11461+0+current/freebsd-announce

**fronclynne**, July 17th, 2010, 20:07

> ... so 'great' news to all LaTeX/TeXLive users here, since Hiroki Sato has been elected to FreeBSD's core team for the next 2 years. If he sucks so much (as others here say), why did he get elected?

I'm dubious about Oko's generous & factual statements (running openBSD regularly can lead to some unexpected social side-effects, or so I hear (http://mail-index.netbsd.org/current-users/1996/10/20/0004.html)), but instead of gum-flapping about political rumours, why don't you try e-mailing the gentleman in question and asking him directly?

July 18th, 2010, 10:25

> but instead of gum-flapping about political rumours, why don't you try e-mailing the gentleman in question and asking him directly?

I generally do not care about all that LaTeX stuff; my works build with the TeTeX package, and I have TeXLive 2008 just in case.

**fronclynne**, July 18th, 2010, 20:48

> I generally do not care about all that LaTeX stuff; my works build with the TeTeX package, and I have TeXLive 2008 just in case.

Ah, I just realised that you were probably arguing against the slander. My apologies.
2013-06-19 16:11:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8575131297111511, "perplexity": 13883.302865804579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708882773/warc/CC-MAIN-20130516125442-00048-ip-10-60-113-184.ec2.internal.warc.gz"}
https://jrm.jrias.or.jp/10.3769/radioisotopes.66.259/index.html
Online ISSN: 1884-4111
Print ISSN: 0033-8303

## Original Article

### Verification of the Assumption on Contribution Ratio to the Reference Level from Each Radionuclide in Seafood to Derive Criteria for Radionuclide Activity Concentrations for Food in the Existing Exposure Situation Regarding the Fukushima Dai-ichi Nuclear Power Plant Accident

Department of Environmental Health, National Institute of Public Health

The current limits for radioactive materials in food (the sum of ¹³⁴Cs and ¹³⁷Cs) were set taking into account the radiation dose from ¹³⁴Cs, ¹³⁷Cs, ⁹⁰Sr, ¹⁰⁶Ru and Pu. Although the limits were based on the concentration ratios of radionuclides in soil etc., it was assumed for seafood that the radiation dose from radioactive caesium (¹³⁴Cs + ¹³⁷Cs) was equal to that from the other radionuclides. In this study, we evaluate the contributions to the radiation dose from ¹³⁴Cs + ¹³⁷Cs and from the other radionuclides. The contribution of radionuclides other than radioactive caesium was less than was assumed when the limits were set.
2020-07-08 07:03:34
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8433263301849365, "perplexity": 4159.092779544988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896905.46/warc/CC-MAIN-20200708062424-20200708092424-00230.warc.gz"}
http://gmatclub.com/forum/m17-88519.html?fl=similar
# M17

**study** (Senior Manager), 02 Oct 2009, 23:31

What is $\frac{1}{2} + \left(\frac{1}{2}\right)^2 + \left(\frac{1}{2}\right)^3 + \dots + \left(\frac{1}{2}\right)^{20}$ between?

* $\frac{1}{2}$ and $\frac{2}{3}$
* $\frac{2}{3}$ and $\frac{3}{4}$
* $\frac{3}{4}$ and $\frac{9}{10}$
* $\frac{9}{10}$ and $\frac{10}{9}$
* $\frac{10}{9}$ and $\frac{3}{2}$

**Math Expert**, 03 Oct 2009, 06:39

study wrote:
> What is $\frac{1}{2} + \left(\frac{1}{2}\right)^2 + \dots + \left(\frac{1}{2}\right)^{20}$ between? (answer choices as above)

We have a geometric progression with $b_1=\frac{1}{2}$, $q=\frac{1}{2}$ and $n=20$:

$S_n=\frac{b_1(1-q^n)}{1-q}$; $S_{20}=\frac{\frac{1}{2}(1-\frac{1}{2^{20}})}{1-\frac{1}{2}}=1-\frac{1}{2^{20}}$.

Clearly the value of $1-\frac{1}{2^{20}}$ is less than 1, and $\frac{1}{2^{20}}$ is less than $\frac{1}{10}$, so $1-\frac{1}{2^{20}}$ will be between $\frac{9}{10}$ and $\frac{10}{9}$.

Generally speaking, when we have a geometric progression with common ratio in the range $-1<q<1$ and $n$ is infinite, the sum of the terms is given by $Sum=\frac{b_1}{1-q}$. So, in our case the sum tends to $\frac{\frac{1}{2}}{1-\frac{1}{2}}=1$ as $n$ increases. This means that the sum of this sequence will never exceed 1; also, since we have a big enough number of terms (20), the sum will be very close to 1, so we can safely choose answer choice D.

**melissawlim** (Intern), 18 Dec 2009, 21:25

How did you get the part "$1/2^{20}$ is less than $1/10$"? And how do you end up with "$\frac{9}{10}$ and $\frac{10}{9}$"? I understand that $1-1/2^{20}$ is less than 1 for sure, hence $\frac{9}{10}$, but how did you get $\frac{10}{9}$? A detailed explanation will be appreciated if you have time. Thanks!

**Math Expert**, 19 Dec 2009, 14:22

First of all: when we have a geometric progression with common ratio $q$ in the range $-1<q<1$ (i.e. $|q|<1$), the sum of the progression $b_1, b_2, \dots, b_n, \dots$ is $Sum=\frac{b_1}{1-q}$. In our case $b_1=\frac{1}{2}$ and $q=\frac{1}{2}<1$.
The sum of the sequence $\frac{1}{2}+\frac{1}{2^2}+\dots+\frac{1}{2^n}+\dots = \frac{\frac{1}{2}}{1-\frac{1}{2}}=1$. This means that the sum of the given sequence, even if it continued endlessly, would NEVER be more than 1 (actually it would equal 1, as we calculated). We have $n=20$, which is big enough to conclude that the sum will be very close to 1, but again never more than 1.

Another way: we have a geometric progression with $b_1=\frac{1}{2}$, $q=\frac{1}{2}$, $n=20$:

$Sum=\frac{b_1(1-q^n)}{1-q}$, so $S_{20}=\frac{\frac{1}{2}(1-\frac{1}{2^{20}})}{1-\frac{1}{2}}=1-\frac{1}{2^{20}}$.

Now, $\frac{1}{2^{20}}$ is less than $\frac{1}{10}$. Why? $\frac{1}{2^4}=\frac{1}{16}<\frac{1}{10}$, so if $\frac{1}{2^4}$ is less than $\frac{1}{10}$, then $\frac{1}{2^{20}}$ is much less than $\frac{1}{10}$. Next, if we subtract a value less than $\frac{1}{10}$ from 1, we get a value more than $\frac{9}{10}$, as $1-\frac{1}{10}=\frac{9}{10}$. Hence $1-\frac{1}{2^{20}}$ is more than $\frac{9}{10}$ and clearly less than 1. The sum is between $\frac{9}{10}$ and 1, and the only answer choice covering this range is D.

Hope it's clear.

**Manager**, 19 Dec 2009, 16:53

Same approach, got D.

**melissawlim** (Intern), 19 Dec 2009, 18:42

(Quoting the explanation above.) Thanks, that is very clear. Kudos!

**Senior Manager**, 29 Dec 2009, 04:33, Re: Progression

study wrote:
> How do we use the arithmetic or geometric progression in this particular problem? Anyone?
> What is $\frac{1}{2} + \left(\frac{1}{2}\right)^2 + \dots + \left(\frac{1}{2}\right)^{20}$ between? (answer choices as above)

This will be a GP series with $a$ (1st term) $= 1/2$ and $r$ (common ratio) $= 1/2$, so the sum of this series will be $\frac{a(1-r^n)}{1-r} = \frac{\frac{1}{2}\left(1-\left(\frac{1}{2}\right)^{20}\right)}{1-\frac{1}{2}} = 1 - \frac{1}{2^{20}}$. From the given options, option 4 is the best fit.

**Retired Moderator**, 03 Nov 2010, 00:24, Re: s03#2, Didn't get the explanation, anyone?

AtifS wrote:
> (the same question, from the GMAT Club Tests) This is from GMAT Club Tests and I didn't get the explanation. Can any expert help with the explanation of this question?

Fact 1: The sum of the infinite series $(1/2) + (1/2)^2 + (1/2)^3 + \dots$ is 1 (using the infinite GP formula). So option E is out.

Next thing is, how does this sum look?

1 term: 0.5
2 terms: 1/2 + 1/4 = 0.75
3 terms: 1/2 + 1/4 + 1/8 = 7/8 = 0.875
... and so on

Basically, with every term, the distance to 1 is halved.

Option A can't be right, as it is bounded by 2/3, which is lower than the sum of the first 2 terms.
Option B can't be right either, as it is bounded by 3/4, which is lower than the sum of the first 3 terms.

Choosing between C & D is where it gets interesting. As I mentioned, with each term, the distance to 1 is halved:

After 2 terms we are 1/4 away
After 3 terms we are 1/8 away
...
After 20 terms we will be $1/2^{20}$ away

This is a very small number, much smaller than 0.1 (which is the bound implied by C). Hence, the answer must be D.

Direct approach: you can always use the sum-of-a-GP formula, which immediately gives this sum as $1-\frac{1}{2^{20}}$, which then makes it easy to pick D.

**Papperlapub** (Intern), 03 Nov 2010, 00:34

You need to know the formula for a geometric series: $\sum_{k=1}^{n} ar^k = \frac{a(r-r^{n+1})}{1-r}$. For this series $a = 1$, $r = \frac{1}{2}$ and $n=20$, so you'll get $\frac{\frac{1}{2}-\frac{1}{2^{21}}}{\frac{1}{2}}$; approximate $\frac{1}{2^{20}}$ with zero and you'll get 1. So you know that the series converges towards 1. Add the first 4 terms to get 15/16, which is bigger than 9/10.

**AtifS** (Senior Manager), 03 Nov 2010, 01:14, Re: Progression

Oops! My bad for posting the already-asked question. I searched for it but didn't find it in the results. I think I was searching for the word s03, which showed only s03#1 (and s03#3), and I didn't think of searching for "progression" (silly me). I think introducing a new tag, "Source: GMAT Club Tests", would be great.

Also, I did apply GP the way you guys mentioned, but I think I didn't get the question very well. Is this question asking where the sum of the terms lies among the given answer options? Am I getting it right?
**Retired Moderator**, 04 Nov 2010, 00:22, Re: Progression

Yes, it is asking for the range in which the sum sits amongst the given ranges.

**Senior Manager**, 24 Jun 2013, 08:51, Re: M17

Is there any other way (such as a worst/best-case scenario) to solve this question?
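The "distance to 1 halves with every term" argument is easy to confirm by direct computation. A quick check in C (not part of the original thread, just an illustration):

```c
#include <stdio.h>

/* Sums (1/2) + (1/2)^2 + ... + (1/2)^20 and compares with 1 - 1/2^20. */
int main(void)
{
    double sum = 0.0, term = 1.0;
    for (int k = 1; k <= 20; k++) {
        term *= 0.5;
        sum  += term;
    }
    printf("sum       = %.10f\n", sum);                   /* 0.9999990463 */
    printf("1 - 2^-20 = %.10f\n", 1.0 - 1.0 / 1048576.0); /* identical    */
    printf("in (9/10, 10/9)? %s\n",
           (sum > 0.9 && sum < 10.0 / 9.0) ? "yes" : "no");
    return 0;
}
```

The sum lands at $1 - 2^{-20} \approx 0.999999$, squarely inside $(\frac{9}{10}, \frac{10}{9})$, confirming choice D.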
2014-09-17 20:00:35
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8008272051811218, "perplexity": 4470.133887215286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657124356.76/warc/CC-MAIN-20140914011204-00156-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://acc.digital/the-essence-of-quantum-computing-3/10/
## The Essence of Quantum Computing

Part 2 of 3 Part Series

To be continued in Part III – The measurement conundrum

## Appendix A Linear Algebra

### A.1 Introduction

The basic objects in the study of linear algebra are vector spaces, in particular, the space Cn, of all n-tuples of complex numbers, (z1,…, zn). The elements of a vector space are called vectors. Physicists sometimes refer to complex numbers as c-numbers. The most familiar notation for a state vector in describing a quantum system is |Ψ〉. (Physicists can be rather flippant in their choice of symbols other than Ψ, e.g., +, -, ↑, ↓, etc.) The standard vector-matrix notation is used when the state vector is expressed in terms of its components. For example, a state |Ψ〉 with amplitudes a1,…, an in a chosen basis is written as the column vector [a1 a2 … an]T.

Like ordinary vectors, state vectors are specified by a particular choice of basis vectors (eigenstates) and a particular set of complex numbers, corresponding to the amplitudes with which each eigenstate contributes to the complete state vector. Once the state vector |Ψ〉 of a quantum system is known, the expected value of any observable attribute of the system can be calculated, since |Ψ〉 contains the complete information about the system. This is similar to descriptions of systems in classical physics, in which the complete state of the system is known once the time-dependent functions for position and momentum are determined.

In what follows, it is advisable to pay attention to minute details of notation and syntax. Heed von Neumann's observation and get used to them! Pay particular attention to all those mathematical properties which remain invariant under a given transformation, especially unitary transformations. Under unitary transformations, state vectors only rotate; they do not change length.

We have generally stayed with the notations and manner of presentation of results provided in the excellent textbook by Nielsen and Chuang [39]. This will also facilitate readers in going back and forth between this paper and the book by Nielsen and Chuang. The notations have a certain elegance.

### A.2 Various representations of a state vector

A vector |v〉 having n vector components |v1〉,…, |vn〉 is generally written as the linear summation

|v〉 = a1|v1〉 + a2|v2〉 + … + an|vn〉,

where a1, a2, …, an are n complex constants. It is customary to represent |v〉 in the following alternative matrix form, if it is apparent from the context that |v〉 has the components |v1〉,…, |vn〉:

|v〉 ≡ [a1 a2 … an]T.

Note that in this notation, the matrix representation of |vi〉 will have all ak = 0 for k = 1,…, n except for k = i, for which ai = 1. For example, |v3〉 has the matrix representation

[0 0 1 0 … 0]T.

Note that when we use matrix notation to describe state transformations, the ordering of the basis vectors in the matrix representation must be settled a priori so that we can keep track of how each basis vector is transforming when it undergoes a transformation. Finally, when the context is clear in this paper, the abstract index form is also used for an abstract linear transformation or a set of basis vectors. For example, |i〉 may either stand for itself or for the basis set of which it is a member.

### A.3 Bases and linear independence

A set of vectors |v1〉,…, |vn〉 is said to be a spanning set for a vector space V if any vector |v〉 in V can be written as a linear combination

|v〉 = Σi ai|vi〉,

where, for the given |v〉, the complex coefficients ai are unique. Such a vector space V is said to have n dimensions, a fact symbolically stated by Cn.
An example of a spanning set for the vector space C2 is the set

|v1〉 ≡ [1 0]T, |v2〉 ≡ [0 1]T,

since any vector |v〉 ≡ [a1 a2]T in C2 can be written as the following linear combination:

|v〉 = a1|v1〉 + a2|v2〉.

A vector space may have many different spanning sets. For example, a second spanning set for the vector space C2 is the set

|u1〉 ≡ (1/√2)[1 1]T, |u2〉 ≡ (1/√2)[1 -1]T,

as once again, any vector |v〉 ≡ [a1 a2]T in C2 can be written as the linear combination

|v〉 = ((a1 + a2)/√2)|u1〉 + ((a1 - a2)/√2)|u2〉.

A set of non-zero vectors |v1〉,…, |vn〉 is said to be linearly dependent if there exists a set of complex numbers a1,…, an, with ai ≠ 0 for at least one value of i, such that

a1|v1〉 + a2|v2〉 + … + an|vn〉 = 0;

otherwise it is linearly independent. A linearly independent spanning set is called a basis for V, and such a basis set always exists. The number of elements n in the basis is defined to be the dimension of V. Any two sets of linearly independent vectors which span a vector space V contain the same number n of elements.

### A.4 Linear operators and matrices

A linear operator between vector spaces V and W, where |v1〉,…, |vm〉 is a basis for V and |w1〉,…, |wn〉 is a basis for W (note that m and n may be different), is defined to be any function A: V → W which is linear in its inputs [40]:

A(Σi ai|vi〉) = Σi ai A(|vi〉).

A linear operator A is said to be defined on a vector space V if A is a linear operator from V to V. The identity operator IV on a vector space V is defined by the equation IV|v〉 ≡ |v〉 for all vectors |v〉. If the context is clear, IV is often abbreviated to I. In addition, there is a zero operator, denoted by 0, which maps all vectors to the zero vector, i.e., 0|v〉 ≡ 0. Note that the ket notation for the zero vector is not used as, by convention, it is reserved for the zero vector |0〉 in quantum computing, where it means something entirely different.

Sometimes it is easier to see linear operators in terms of their equivalent matrix representation. The claim that the matrix A is a linear operator simply means that

A(Σi ai|vi〉) = Σi ai A|vi〉

is true as an equation, where the operation is matrix multiplication of A by column vectors. The linear operator's matrix representation is given by

A|vj〉 = Σi Aij|wi〉

for each j in the range 1,…, m, where Aij is an element of A when represented in matrix form. The matrix representation of A is completely equivalent to the operator A. However, to make the connection between matrices and linear operators we must specify a set of input and output basis states for the input and output vector spaces, namely, V and W, respectively, of the linear operator A.

[40] A: V → W means that A is a mapping from V to W, i.e., the input to A is V and the output of A is W. The space V is called the domain of A, and W the codomain of A. The range of A is the space Y = {y | y ∈ W and y = Ax for some x ∈ V}.
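The column-vector representation and the rule A|vj〉 = Σi Aij|wi〉 are easy to check numerically. The NumPy sketch below is my own illustration (not from the article), assuming the standard basis of C2 for both the input and output spaces:

```python
import numpy as np

# An arbitrary linear operator on C^2, written as a complex matrix.
A = np.array([[1, 2j],
              [3, 4 + 1j]])

# Standard basis |v1>, |v2> of C^2, stored as columns of amplitudes.
basis = [np.array([1, 0], dtype=complex),
         np.array([0, 1], dtype=complex)]

# Column j of the matrix is the image of the j-th basis vector:
# A|v_j> = sum_i A_ij |w_i>.
M = np.column_stack([A @ v for v in basis])
assert np.allclose(M, A)

# Linearity: A(a1|v1> + a2|v2>) = a1 A|v1> + a2 A|v2>.
a1, a2 = 2 - 1j, 0.5j
lhs = A @ (a1 * basis[0] + a2 * basis[1])
rhs = a1 * (A @ basis[0]) + a2 * (A @ basis[1])
assert np.allclose(lhs, rhs)
print("matrix representation and linearity both check out")
```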
2019-01-22 05:50:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.82764732837677, "perplexity": 420.9150651849963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583829665.84/warc/CC-MAIN-20190122054634-20190122080634-00448.warc.gz"}
https://math.stackexchange.com/questions/2864995/generalizing-a-trigonometric-infinite-product-of-vieta
Generalizing a Trigonometric Infinite Product of Vieta

The second exercise in "Statistical Independence in Probability, Analysis and Number Theory," by Mark Kac is to prove that $${\sin x\over x}=\prod_{k=1}^{\infty}\frac13\left(1+2\cos{2x\over3^k}\right)\tag{1}$$ and generalize it. This is a generalization of Vieta's formula $${\sin x\over x}=\prod_{k=1}^{\infty}\cos{x\over 2^k}\tag{2}$$ which is proved in the text.

It's not hard to prove $(1)$. You just write $\sin x = \sin({x\over3}+{2x\over3})$ and plug away, but I'm having trouble seeing what the generalization is supposed to be. The only thing I've been able to come up with for the next case is $${\sin x\over x}=\prod_{k=1}^{\infty} \frac12\left(\cos{x\over4^k}+\cos{3x\over4^k}\right)\tag{3}$$ Again, this isn't hard to prove, but I'm having trouble seeing a pattern in $(1),\ (2),\ \text{and } (3).$ To derive $(3)$ I used the triple angle formula for cosine, and it seems to me that as you go forward you'll need multiple-angle formulas for increasingly large multiples, so I foresee a lot of complication. Can you see a formula for the $n=4$ case that is more clearly a generalization of $(1)$ than $(3)$ is? Or do you know what generalization Kac had in mind? I'm assuming that he intends for you to come up with a formula for general $n$.

• I see (1) as an average of three evenly spaced cosines, of $-2x/3^k$, $0$, and $+2x/3^k$. And I see (3) as an average of four evenly spaced cosines, of $-3x/4^k$, $-x/4^k$, $+x/4^k$, and $+3x/4^k$. Does that help? Jul 28 '18 at 5:07
• @mjqxxxx I think you're onto something. I remember many years ago, when I first saw Vieta's formula for $2/\pi,$ I proved it by converting to a Riemann sum. Thanks. Jul 28 '18 at 5:13
• I think it is a wonderful monograph. For me, it connected many threads together in a very succinct way. Jul 28 '18 at 5:36

You want to find an expression for $a_n(x) := \sin(nx)/\sin(x)$ written in terms of cosines of multiple angles. The first examples are: $$a_2(x) = 2\cos(x),$$ $$a_3(x) = 1 + 2\cos(2x),$$ $$a_4(x) = 2\cos(x) + 2\cos(3x),$$ $$a_5(x) = 1 + 2\cos(2x) + 2\cos(4x),$$ $$a_6(x) = 2\cos(x) + 2\cos(3x) + 2\cos(5x).$$ The pattern is now obvious. The general infinite product is: $$\frac{\sin x}{x} = \prod_{k=1}^{\infty} \frac{1}{n} a_n\Big(\frac{x}{n^k}\Big).$$
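A quick numerical check of the general product (my own sketch, not part of the original question or answer): since $a_n(x) = \sin(nx)/\sin(x)$, the defining ratio suffices for the check, and the truncated product should converge to $\sin x / x$ for every $n$:

```python
import math

def a_n(n, x):
    # a_n(x) = sin(n x) / sin(x); the cosine expansions in the answer
    # equal this ratio, so the ratio is enough for a numeric check.
    return math.sin(n * x) / math.sin(x)

def truncated_product(n, x, depth=30):
    # prod_{k=1}^{depth} (1/n) * a_n(x / n^k)
    p = 1.0
    for k in range(1, depth + 1):
        p *= a_n(n, x / n**k) / n
    return p

x = 1.3
target = math.sin(x) / x
for n in (2, 3, 4, 5):
    print(n, truncated_product(n, x), target)  # every row matches target
```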
2021-09-28 12:57:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9129945635795593, "perplexity": 180.14355214831474}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060803.2/warc/CC-MAIN-20210928122846-20210928152846-00094.warc.gz"}
https://zbmath.org/?q=an:0626.32019
On the reflection principle in several complex variables. (English) Zbl 0626.32019

The edge-of-the-wedge theorem is used to extend a biholomorphic map across a nondegenerate real analytic boundary in $\mathbb{C}^n$ under some differentiability assumption at the boundary.

##### MSC:

32D15 Continuation of analytic objects in several complex variables

##### References:

[1] Charles Fefferman, The Bergman kernel and biholomorphic mappings of pseudoconvex domains, Invent. Math. 26 (1974), 1-65. Zbl 0289.32012
[2] H. Lewy, On the boundary behavior of holomorphic mappings, Accad. Naz. dei Lincei, no. 35, 1977.
[3] Walter Rudin, Lectures on the edge-of-the-wedge theorem, CBMS Regional Conference Series in Mathematics, No. 6, American Mathematical Society, Providence, R.I., 1971. Zbl 0214.09001
[4] S. I. Pinčuk, On the analytic continuation of holomorphic mappings, Math. Sb. 27 (1975), 375-392.
2021-11-29 11:31:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7216145396232605, "perplexity": 1056.5153582531025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358705.61/warc/CC-MAIN-20211129104236-20211129134236-00592.warc.gz"}
https://math.stackexchange.com/questions/1066650/number-of-ways-to-fill-a-2-times-n-grid-with-1-times-2-and-2-times-2-tiles
# Number of ways to fill a $2\times n$ grid with $1\times 2$ and $2\times 2$ tiles

How many ways are there to fill a $2\times n$ grid with $1\times 2$ and $2\times 2$ tiles? Rotating is allowed.

### Progress

Let $T_n$ be the number of ways; then $T_n = T_{n-1} + 2T_{n-2}$ (based on removing the last tiles, as in quid's answer).

• Usually, in problems like this, you would try a few cases (say for $n$ from $1$ to $4$ or $6$ or something), and see if you can spot some pattern. Next, you would use that pattern to come up with a guess at a formula. Then comes the induction part of the proof, showing that the formula you have is indeed correct. You should at least try the cases yourself before coming here. – Arthur Dec 13 '14 at 19:15
• I guess the formula is this: T(k) = T(k-1) + T(k-2) + 1 (the parentheses are indices). – ms95 Dec 13 '14 at 19:17
• Good. Together with $T(1)=1$ and $T(2)=3$, does that formula work for $k=3$ or $4$? – Arthur Dec 13 '14 at 19:21
• I guess it does. – ms95 Dec 13 '14 at 19:22
• OK thanks, I did the rest. Thank you so much. – ms95 Dec 13 '14 at 20:02

Consider the end (or start); there are three possibilities:

• a $2 \times 1$ tile.
• a $2 \times 2$ tile.
• two $1 \times 2$ tiles.

Removing these last tiles yields:

• a tiling of a $2 \times (n-1)$ grid.
• a tiling of a $2 \times (n-2)$ grid.
• a tiling of a $2 \times (n-2)$ grid.

From this, and the first values, you get the recursive description, which you can solve if you want something more explicit.

• The recursion is $T_n = T_{n-1} + 2T_{n-2}$ with $T_1 = 1$ and $T_2 = 3$.
• The closed form is $$\frac{2^{n+1} - (-1)^{n+1}}{3}.$$

• Should be $T_n=T_{n-1}+2T_{n-2}$. – paw88789 Dec 13 '14 at 20:50
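A brute-force check of the recurrence and the closed form; the sketch below is my own addition, not from the thread:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T_n = T_{n-1} + 2*T_{n-2}: the tiling ends in a vertical 1x2 tile,
    # a 2x2 tile, or two stacked horizontal 1x2 tiles.
    if n <= 1:
        return 1  # T(0) = 1 (empty tiling), T(1) = 1
    return T(n - 1) + 2 * T(n - 2)

def closed_form(n):
    # (2^{n+1} - (-1)^{n+1}) / 3, always an integer.
    return (2 ** (n + 1) - (-1) ** (n + 1)) // 3

assert all(T(n) == closed_form(n) for n in range(1, 20))
print([T(n) for n in range(1, 8)])  # [1, 3, 5, 11, 21, 43, 85]
```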
2019-10-16 05:15:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7732716798782349, "perplexity": 487.15570488237853}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986664662.15/warc/CC-MAIN-20191016041344-20191016064844-00174.warc.gz"}
http://math.stackexchange.com/questions/167113/the-role-of-maps-in-algebraic-structures
The role of maps in algebraic structures? In group theory we have encountered the concept of homomorphisms as maps from one group to another, and the same thing in ring and field theory, where we have called them, respectively, homomorphisms of groups (of rings, and of fields). Also in linear algebra we have linear maps. So my question is this: What is the role of such maps in these algebraic structures?

- Congratulations! You have just discovered category theory! – MJD Jul 5 '12 at 16:29
- Relevant: en.wikipedia.org/wiki/Morphism – user2468 Jul 5 '12 at 16:31

One of my professors once said this about ring theory: To study rings, you have two choices. You can take a particular ring, and stare at it intently for a long period of time until you discover interesting things about that ring. Then you write them down. Or you can study how the ring "acts" on other structures, by studying the collection of all modules over that ring, and thus deduce interesting things about the ring. It turns out that the latter method is more fruitful, easier to generalize, and allows one to go from one ring to another ring more easily. Studying the modules of a ring can be seen as a special case of studying homomorphisms (an $R$-module structure on an abelian group $A$ is equivalent to a ring homomorphism from $R$ to the endomorphisms of $A$, which form a ring under pointwise addition and composition; this is why I mention it).

The example of vector spaces is very apt. Vector spaces are nice objects, no doubt; but if all you do is stare at a particular vector space, you can say some interesting things (dimensions, subspaces, and so on), but it's not until you bring in linear transformations that the full power of vector spaces really becomes apparent. It is through linear transformations that we can actually do things with vector spaces: use them to solve systems of equations (linear and differential), to study Markov systems, to solve least squares problems (which amount to constructing certain kinds of projections, which are types of linear transformations), etc.

There is a whole philosophy of mathematics that says that the best way to study objects is to study the maps that "respect the structure", and not only for algebraic objects: topology is not just the study of topological spaces, it's the study of topological spaces and continuous maps between them. Analysis is not just the study of the real numbers, it is the study of the real numbers and real-valued functions. Differential geometry concerns itself with differentiable maps. Etc. It turns out that maps give you a fruitful way of studying the structure.

In algebra, an important role is played by "congruences", which are equivalence relations on the underlying set that are compatible with the operations. For example, in the ring of integers, the usual modular congruences, $a\equiv b\pmod{n}$, give an equivalence relation that "respects" the ring structure on $\mathbb{Z}$, in the sense that if $a\equiv b\pmod{n}$ and $x\equiv y\pmod{n}$, then $a+x\equiv b+y\pmod{n}$ and $ax\equiv by\pmod{n}$ (the sum of classes is the same as the class of the sum; the product of classes is the same as the class of the product). Congruences correspond, via the isomorphism theorems, to surjective morphisms.
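The compatibility of the congruence with the ring operations can be checked exhaustively for a small modulus; the Python sketch below is my own illustration (not part of the original answer):

```python
# Verify: a = b (mod n) and x = y (mod n) imply
# a + x = b + y (mod n) and a*x = b*y (mod n).
n = 6
for a in range(-9, 10):
    for x in range(-9, 10):
        for s in range(-2, 3):       # b ranges over numbers congruent to a
            for t in range(-2, 3):   # y ranges over numbers congruent to x
                b, y = a + s * n, x + t * n
                assert (a + x) % n == (b + y) % n
                assert (a * x) % n == (b * y) % n
print("the congruence mod", n, "respects + and *")
```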
Morphisms let you take one particular structure and study it from other points of view, in other contexts; they allow you (via "quotients") to focus on particular details while ignoring other parts of the structure that may be "in the way" (e.g., the study of "parity" is made simpler by considering the quotient $\mathbb{Z}/2\mathbb{Z}$, rather than $\mathbb{Z}$ itself). Morphisms let you understand inherent symmetry in an object; Klein proposed studying geometry almost exclusively in terms of certain kinds of symmetries and certain kinds of morphisms (projective transformations). Galois theory is founded on the idea of understanding the "symmetries" that may exist between different roots of the same polynomial (in the form of automorphisms). All in all, morphisms are just a very rich way of studying objects. You can think of them as a way of letting you poke, examine, cut open, x-ray, fold, and generally manipulate your objects of interest so that you can get a better sense, not only of what they are, but of how they interact with other objects and other contexts.

Homomorphisms do a lot of things:

1. They can be used to show two structures are identical (isomorphism)
2. They can be used to show one structure is a substructure of another (monomorphism)
3. They can be used to show one structure is a quotient of another (epimorphism)
4. General homomorphisms are kind of a mixture of the above, carrying a certain amount of data about the relationship between the domain and codomain.

This list is not exhaustive; it's just meant to sample some of what homomorphisms do.

- Any homomorphism of [groups, rings, modules] is an epimorphism followed by a monomorphism, so it is a precise mixture of the above. In general, I think any morphism in an abelian category is an epi followed by a mono, but I don't know if this condition on the category is necessary or not. – M Turgeon Jul 5 '12 at 17:39

Groups, rings (especially fields) and many other objects in mathematics form so-called categories. That is, the collection of all rings (whatever this is) forms a category. To 'compare' the objects in a category, one needs morphisms between them. (This is not the only need for morphisms, but it is one of them.) To say that one object is 'bigger' or that two objects are isomorphic, one needs the notion of morphism. Morphisms are generally maps (though not all morphisms need to be maps) that respect the structure of the objects. Of course, morphisms have other useful applications. Matrices, for example (i.e. morphisms of vector spaces [or modules]), are connected to linear equations and so on... Morphisms play very many very important roles in mathematics :)

Another function of morphisms is as a way of taking an unfamiliar or apparently complex algebraic object and placing it in a more familiar context, or one where there are known analytical tools and methods - as in representation theory. The point of view you take depends rather on whether you are more interested in studying particular objects, or alternatively classes of objects.

Let's suppose for the moment that you really only care about isomorphisms. That is, you really only care about telling whether two groups are isomorphic, or something. Great! But how do you write down isomorphisms? It's hard. Often you will be able to write down a map and you'll want to show it's an isomorphism. Before you show it's an isomorphism, what is it? It's probably a homomorphism!
So even if you only care about isomorphisms, in the course of proving things about isomorphisms you will very likely end up having to talk about homomorphisms which are not necessarily isomorphisms (at least you don't know that they are), so you might as well talk about them from the outset. Moreover,

• you can use homomorphisms to construct isomorphisms,
• even homomorphisms which are not isomorphisms can be used to prove useful statements, e.g. that there do not exist certain isomorphisms,
• etc.

But really, to understand this you should do a lot of mathematics. It will become clearer to you as you gain experience with working with maps.

- +1 for "even homomorphisms which are not isomorphisms can be used to prove useful statements" – user3533 Jul 5 '12 at 21:31

I think the use of homomorphisms is to identify one algebraic structure with another. With that identification we can say that two algebraic structures are the same structurally, i.e., they are the same when we view them with respect to the algebraic operations involved, although they may seem different to the naked eye.
2015-05-24 20:05:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8267824053764343, "perplexity": 348.89680372549947}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928076.40/warc/CC-MAIN-20150521113208-00081-ip-10-180-206-219.ec2.internal.warc.gz"}
https://labs.tib.eu/arxiv/?author=J.%20Wolcott
• ### Cross sections for neutrino and antineutrino induced pion production on hydrocarbon in the few-GeV region using MINERvA(1606.07127) Oct. 8, 2018 hep-ex Separate samples of charged-current pion production events representing two semi-inclusive channels $\nu_\mu$-CC($\pi^{+}$) and $\bar{\nu}_{\mu}$-CC($\pi^{0}$) have been obtained using neutrino and antineutrino exposures of the MINERvA detector. Distributions in kinematic variables based upon $\mu^{\pm}$-track reconstructions are analyzed and compared for the two samples. The differential cross sections for muon production angle, muon momentum, and four-momentum transfer $Q^2$ are reported, and cross sections versus neutrino energy are obtained. Comparisons with predictions of current neutrino event generators are used to clarify the role of the $\Delta(1232)$ and higher-mass baryon resonances in CC pion production and to show the importance of pion final-state interactions. For the $\nu_\mu$-CC($\pi^{+}$) ($\bar{\nu}_{\mu}$-CC($\pi^{0}$)) sample, the absolute data rate is observed to lie below (above) the predictions of some of the event generators by amounts that are typically 1-to-2 $\sigma$. However, the generators are able to reproduce the shapes of the differential cross sections for all kinematic variables of either data set. • ### Measurement of the muon anti-neutrino double-differential cross section for quasi-elastic scattering on hydrocarbon at $E_\nu \sim 3.5$ GeV(1801.01197) May 9, 2018 hep-ex We present double-differential measurements of anti-neutrino quasi-elastic scattering in the MINERvA detector. This study improves on a previous single-differential measurement by using updated reconstruction algorithms and interaction models, and provides a complete description of observed muon kinematics in the form of a double-differential cross section with respect to muon transverse and longitudinal momentum. We include in our signal definition zero-meson final states arising from multi-nucleon interactions and from resonant pion production followed by pion absorption in the primary nucleus. We find that model agreement is considerably improved by a model tuned to MINERvA inclusive neutrino scattering data that incorporates nuclear effects such as weak nuclear screening and two-particle, two-hole enhancements. • ### Anti-neutrino charged-current reactions on scintillator with low momentum transfer(1803.09377) March 26, 2018 hep-ex We report on multi-nucleon effects in low momentum transfer ($< 0.8$ GeV/c) anti-neutrino interactions on scintillator. These data are from the 2010-11 anti-neutrino phase of the MINERvA experiment at Fermilab. The hadronic energy spectrum of this inclusive sample is well-described when a screening effect at low energy transfer and a two-nucleon knockout process are added to a relativistic Fermi gas model of quasi-elastic, $\Delta$ resonance, and higher resonance processes. In this analysis, model elements introduced to describe previously published neutrino results have quantitatively similar benefits for this anti-neutrino sample. We present the results as a double-differential cross section to accelerate investigation of alternate models for anti-neutrino scattering off nuclei. • ### Measurement of the antineutrino to neutrino charged-current interaction cross section ratio in MINERvA(1701.04857) Jan. 1, 2018 hep-ex We present measurements of the neutrino and antineutrino total charged-current cross sections on carbon and their ratio using the MINERvA scintillator-tracker.
The measurements span the energy range 2-22 GeV and were performed using forward and reversed horn focusing modes of the Fermilab low-energy NuMI beam to obtain large neutrino and antineutrino samples. The flux is obtained using a sub-sample of charged-current events at low hadronic energy transfer along with precise higher energy external neutrino cross section data overlapping with our energy range between 12-22 GeV. We also report on the antineutrino-neutrino cross section ratio, Rcc, which does not rely on external normalization information. Our ratio measurement, obtained within the same experiment using the same technique, benefits from the cancellation of common sample systematic uncertainties and reaches a precision of 5% at low energy. Our results for the antineutrino-nucleus scattering cross section and for Rcc are the most precise to date in the energy range $E_{\nu} < 6$ GeV. • ### Measurement of $\nu_{\mu}$ charged-current single $\pi^{0}$ production on hydrocarbon in the few-GeV region using MINERvA(1708.03723) Oct. 1, 2019 hep-ex The semi-exclusive channel $\nu_{\mu}+\textrm{CH}\rightarrow\mu^{-}\pi^{0}+\textrm{nucleon(s)}$ is analyzed using MINERvA exposed to the low-energy NuMI $\nu_{\mu}$ beam with spectral peak at $E_{\nu} \simeq 3$ GeV. Differential cross sections for muon momentum and production angle, $\pi^{0}$ kinetic energy and production angle, and for squared four-momentum transfer are reported, and the cross section $\sigma(E_{\nu})$ is obtained over the range 1.5 GeV $\leq E_{\nu} <$ 20 GeV. Results are compared to GENIE and NuWro predictions and to published MINERvA cross sections for $\nu_{\mu}\textrm{-CC}(\pi^{+})$ and $\bar{\nu}_{\mu}\textrm{-CC}(\pi^{0})$. Disagreements between data and simulation are observed at very low and relatively high values for muon angle and for $Q^2$ that may reflect shortfalls in modeling of interactions on carbon. For $\pi^{0}$ kinematic distributions, however, the data are consistent with the simulation and provide support for generator treatments of pion intranuclear scattering. Using signal-event subsamples that have reconstructed protons as well as $\pi^{0}$ mesons, the $p\pi^{0}$ invariant mass distribution is obtained, and the decay polar and azimuthal angle distributions in the rest frame of the $p\pi^{0}$ system are measured in the region of $\Delta(1232)^+$ production, $W < 1.4$ GeV. • ProtoDUNE-SP is the single-phase DUNE Far Detector prototype that is under construction and will be operated at the CERN Neutrino Platform (NP) starting in 2018. ProtoDUNE-SP, a crucial part of the DUNE effort towards the construction of the first DUNE 10-kt fiducial mass far detector module (17 kt total LAr mass), is a significant experiment in its own right. With a total liquid argon (LAr) mass of 0.77 kt, it represents the largest monolithic single-phase LArTPC detector to be built to date. Its technical design is given in this report. • ### Search for active-sterile neutrino mixing using neutral-current interactions in NOvA(1706.04592) June 14, 2017 hep-ex We report results from the first search for sterile neutrinos mixing with active neutrinos through a reduction in the rate of neutral-current interactions over a baseline of 810 km between the NOvA detectors.
Analyzing a 14-kton detector equivalent exposure of $6.05\times10^{20}$ protons-on-target in the NuMI beam at Fermilab, we observe 95 neutral-current candidates at the Far Detector compared with $83.5 \pm 9.7\,\text{(stat.)} \pm 9.4\,\text{(syst.)}$ events predicted assuming mixing only occurs between active neutrino species. No evidence for $\nu_{\mu} \rightarrow \nu_{s}$ transitions is found. Interpreting these results within a 3+1 model, we place constraints on the mixing angles $\theta_{24}<20.8^{\circ}$ and $\theta_{34}<31.2^{\circ}$ at the 90% C.L. for $0.05~eV^2\leq \Delta m^2_{41}\leq 0.5~eV^2$, the range of mass splittings that produce no significant oscillations over the Near Detector baseline. • ### Measurement of neutral-current $K^+$ production by neutrinos using MINERvA(1611.02224) June 2, 2017 hep-ex Neutral-current production of $K^{+}$ by atmospheric neutrinos is a background in searches for the proton decay $p \rightarrow K^{+} \bar{\nu}$. Reactions such as $\nu p \rightarrow \nu K^{+} \Lambda$ are indistinguishable from proton decays when the decay products of the $\Lambda$ are below detection threshold. Events with $K^{+}$ are identified in MINERvA by reconstructing the timing signature of a $K^{+}$ decay at rest. A sample of 201 neutrino-induced neutral-current $K^{+}$ events is used to measure differential cross sections with respect to the $K^{+}$ kinetic energy, and the non-$K^{+}$ hadronic visible energy. An excess of events at low hadronic visible energy is observed relative to the prediction of the NEUT event generator. Good agreement is observed with the cross section prediction of the GENIE generator. A search for photons from $\pi^{0}$ decay, which would veto a neutral-current $K^{+}$ event in a proton decay search, is performed, and a $2\sigma$ deficit of detached photons is observed relative to the GENIE prediction. • ### Constraints on Oscillation Parameters from $\nu_e$ Appearance and $\nu_\mu$ Disappearance in NOvA(1703.03328) May 24, 2017 hep-ex Results are reported from an improved measurement of $\nu_\mu \rightarrow \nu_e$ transitions by the NOvA experiment. Using an exposure equivalent to $6.05\times10^{20}$ protons-on-target, 33 $\nu_e$ candidates were observed with a background of $8.2\pm0.8$ (syst.). Combined with the latest NOvA $\nu_\mu$ disappearance data and external constraints from reactor experiments on $\sin^22\theta_{13}$, the hypothesis of inverted mass hierarchy with $\theta_{23}$ in the lower octant is disfavored at greater than $93\%$ C.L. for all values of $\delta_{CP}$. • ### Direct Measurement of Nuclear Dependence of Charged Current Quasielastic-like Neutrino Interactions using MINERvA(1705.03791) May 10, 2017 hep-ex Charged-current $\nu_{\mu}$ interactions on carbon, iron, and lead with a final state hadronic system of one or more protons with zero mesons are used to investigate the influence of the nuclear environment on quasielastic-like interactions. The transferred four-momentum squared to the target nucleus, $Q^2$, is reconstructed based on the kinematics of the leading proton, and differential cross sections versus $Q^2$ and the cross-section ratios of iron, lead and carbon to scintillator are measured for the first time in a single experiment. The measurements show a dependence on atomic number.
While the quasielastic-like scattering on carbon is compatible with predictions, the trends exhibited by scattering on iron and lead favor a prediction with intranuclear rescattering of hadrons accounted for by a conventional particle cascade treatment. These measurements help discriminate between different models of both initial state nucleons and final state interactions used in neutrino oscillation experiments. • ### Measurement of the neutrino mixing angle $\theta_{23}$ in NOvA(1701.05891) April 17, 2017 hep-ex This Letter reports new results on muon neutrino disappearance from NOvA, using a 14 kton detector equivalent exposure of $6.05\times10^{20}$ protons-on-target from the NuMI beam at the Fermi National Accelerator Laboratory. The measurement probes the muon-tau symmetry hypothesis that requires maximal mixing ($\theta_{23} = \pi/4$). Assuming the normal mass hierarchy, we find $\Delta m^2 = (2.67 \pm 0.11)\times 10^{-3}$ eV$^2$ and $\sin^2 \theta_{23}$ at the two statistically degenerate values $0.404^{+0.030}_{-0.022}$ and $0.624^{+0.022}_{-0.030}$, both at the 68% confidence level. Our data disfavor the maximal mixing scenario with 2.6 $\sigma$ significance. • ### Recent Cross Section Work From NOvA(1611.02600) Nov. 21, 2016 hep-ex, physics.ins-det The NOvA experiment is an off-axis long-baseline neutrino oscillation experiment seeking to measure $\nu_{\mu}$ disappearance and $\nu_{e}$ appearance in a $\nu_{\mu}$ beam originating at Fermilab. In addition to measuring the unoscillated neutrino spectra for the purposes of predicting the oscillated neutrino spectrum in the far detector, the 293-ton near detector also enables high-statistics investigation into neutrino scattering in numerous reaction channels. We discuss the various near detector analyses currently in progress, including inclusive measurements of both electron and muon neutrino charged-current interactions and efforts to constrain the off-axis NuMI flux using the elastic scattering of neutrinos from atomic electrons. • ### Measurements of the Inclusive Neutrino and Antineutrino Charged Current Cross Sections in MINERvA Using the Low-$\nu$ Flux Method(1610.04746) Oct. 15, 2016 hep-ex The total cross sections are important ingredients for current and future neutrino oscillation experiments. We present measurements of the total charged-current neutrino and antineutrino cross sections on scintillator (CH) in the NuMI low-energy beamline using an in situ prediction of the shape of the flux as a function of neutrino energy from 2-50 GeV. This flux prediction takes advantage of the fact that neutrino and antineutrino interactions with low nuclear recoil energy ($\nu$) have a nearly constant cross section as a function of incident neutrino energy. This measurement is the lowest energy application of the low-$\nu$ flux technique, the first time it has been used in the NuMI antineutrino beam configuration, and demonstrates that the technique is applicable to future neutrino beams operating at multi-GeV energies. The cross section measurements presented are the most precise measurements to date below 5 GeV. • ### Measurement of Partonic Nuclear Effects in Deep-Inelastic Neutrino Scattering using MINERvA(1601.06313) Sept. 30, 2016 hep-ex The MINERvA collaboration reports a novel study of neutrino-nucleus charged-current deep inelastic scattering (DIS) using the same neutrino beam incident on targets of polystyrene, graphite, iron, and lead. Results are presented as ratios of C, Fe, and Pb to CH.
The ratios of total DIS cross sections as a function of neutrino energy and flux-integrated differential cross sections as a function of the Bjorken scaling variable x are presented in the neutrino-energy range of 5-50 GeV. Good agreement is found between the data and predicted ratios, based on charged-lepton nucleus scattering, at medium x and low neutrino energies. However, the data rate appears depleted in the vicinity of the nuclear shadowing region, x < 0.1. This apparent deficit, reflected in the DIS cross-section ratio at high neutrino energy, is consistent with previous MINERvA observations and with the predicted onset of nuclear shadowing with the axial-vector current in neutrino scattering. • ### Measurement of $K^{+}$ production in charged-current $\nu_{\mu}$ interactions(1604.03920) July 25, 2016 hep-ex, physics.ins-det Production of $K^{+}$ mesons in charged-current $\nu_{\mu}$ interactions on plastic scintillator (CH) is measured using MINERvA exposed to the low-energy NuMI beam at Fermilab. Timing information is used to isolate a sample of 885 charged-current events containing a stopping $K^{+}$ which decays at rest. The differential cross section in $K^{+}$ kinetic energy, $d\sigma/dT_{K}$, is observed to be relatively flat between 0 and 500 MeV. Its shape is in good agreement with the prediction by the GENIE neutrino event generator when final-state interactions are included; however, the data rate is lower than the prediction by 15%. • ### First evidence of coherent $K^{+}$ meson production in neutrino-nucleus scattering(1606.08890) July 12, 2016 hep-ex Neutrino-induced charged-current coherent kaon production, $\nu_{\mu}A\rightarrow\mu^{-}K^{+}A$, is a rare, inelastic electroweak process that brings a $K^+$ on shell and leaves the target nucleus intact in its ground state. This process is significantly lower in rate than neutrino-induced charged-current coherent pion production, because of Cabibbo suppression and a kinematic suppression due to the larger kaon mass. We search for such events in the scintillator tracker of MINERvA by observing the final state $K^+$, $\mu^-$ and no other detector activity, and by using the kinematics of the final state particles to reconstruct the small momentum transfer to the nucleus, which is a model-independent characteristic of coherent scattering. We find the first experimental evidence for the process at $3\sigma$ significance. • ### Neutrino Flux Predictions for the NuMI Beam(1607.00704) July 11, 2016 hep-ex, physics.ins-det Knowledge of the neutrino flux produced by the Neutrinos at the Main Injector (NuMI) beamline is essential to the neutrino oscillation and neutrino interaction measurements of the MINERvA, MINOS+, NOvA and MicroBooNE experiments at Fermi National Accelerator Laboratory. We have produced a flux prediction which uses all available and relevant hadron production data, incorporating measurements of particle production off of thin targets as well as measurements of particle yields from a spare NuMI target exposed to a 120 GeV proton beam. The result is the most precise flux prediction achieved for a neutrino beam in the one to tens of GeV energy region. We have also compared the prediction to in situ measurements of the neutrino flux and find good agreement. • ### Identification of nuclear effects in neutrino-carbon interactions at low three-momentum transfer(1511.05944)
Sept. 20, 2019 hep-ex Two different nuclear-medium effects are isolated using a low three-momentum transfer subsample of neutrino-carbon scattering data from the MINERvA neutrino experiment. The observed hadronic energy in charged-current $\nu_\mu$ interactions is combined with muon kinematics to permit separation of the quasielastic and $\Delta$(1232) resonance processes. First, we observe a small cross section at very low energy transfer that matches the expected screening effect of long-range nucleon correlations. Second, additions to the event rate in the kinematic region between the quasielastic and $\Delta$ resonance processes are needed to describe the data. The data in this kinematic region also have an enhanced population of multi-proton final states. Contributions predicted for scattering from a nucleon pair have both properties; the model tested in this analysis is a significant improvement but does not fully describe the data. We present the results as a double-differential cross section to enable further investigation of nuclear models. Improved descriptions of the effects of the nuclear environment are required by current and future neutrino oscillation experiments. • ### Measurement of Neutrino Flux from Neutrino-Electron Elastic Scattering(1512.07699) June 15, 2016 hep-ex, physics.ins-det Muon-neutrino elastic scattering on electrons is an observable neutrino process whose cross section is precisely known. Consequently a measurement of this process in an accelerator-based $\nu_\mu$ beam can improve the knowledge of the absolute neutrino flux impinging upon the detector; typically this knowledge is limited to $\sim$ 10% due to uncertainties in hadron production and focusing. We have isolated a sample of 135 $\pm$ 17 neutrino-electron elastic scattering candidates in the segmented scintillator detector of MINERvA, after subtracting backgrounds and correcting for efficiency. We show how this sample can be used to reduce the total uncertainty on the NuMI $\nu_\mu$ flux from 9% to 6%. Our measurement provides a flux constraint that is useful to other experiments using the NuMI beam, and this technique is applicable to future neutrino beams operating at multi-GeV energies. • ### Electron Neutrino Charged-Current Quasielastic Scattering in the MINERvA Experiment(1512.09312) Dec. 31, 2015 hep-ex The electron-neutrino charged-current quasielastic (CCQE) cross section on nuclei is an important input parameter for electron neutrino appearance oscillation experiments. Current experiments typically begin with the muon neutrino cross section and apply theoretical corrections to obtain a prediction for the electron neutrino cross section. However, at present no experimental verification of the estimates for this channel at an energy scale appropriate to such experiments exists. We present the cross sections for a CCQE-like process determined using the MINERvA detector, which are the first measurements of any exclusive reaction in few-GeV electron neutrino interactions. The result is given as differential cross-sections vs. the electron energy, electron angle, and square of the four-momentum transferred to the nucleus, $Q^{2}$. We also compute the ratio to a muon neutrino cross-section in $Q^{2}$ from MINERvA. We find satisfactory agreement between these measurements and the predictions of the GENIE generator. We furthermore report on a photon-like background unpredicted by the generator which we interpret as neutral-coherent diffractive scattering from hydrogen.
• ### Single neutral pion production by charged-current $\bar{\nu}_\mu$ interactions on hydrocarbon at $\langle E_\nu \rangle =$ 3.6 GeV(1503.02107) Aug. 25, 2015 hep-ex, nucl-ex Single neutral pion production via muon antineutrino charged-current interactions in plastic scintillator (CH) is studied using the MINERvA detector exposed to the NuMI low-energy, wideband antineutrino beam at Fermilab. Measurement of this process constrains models of neutral pion production in nuclei, which is important because the neutral-current analog is a background for $\bar{\nu}_e$ appearance oscillation experiments. The differential cross sections for $\pi^0$ momentum and production angle, for events with a single observed $\pi^0$ and no charged pions, are presented and compared to model predictions. These results comprise the first measurement of the $\pi^0$ kinematics for this process. • ### Charged Pion Production in $\nu_\mu$ Interactions on Hydrocarbon at $\langle E_{\nu}\rangle = 4.0$ GeV(1406.6415) July 31, 2015 hep-ex Charged pion production via charged current $\nu_{\mu}$ interactions on plastic (CH) is studied using the MINERvA detector exposed to the NuMI wideband neutrino beam at Fermilab. Events with hadronic invariant mass $W < 1.4$ GeV are selected to isolate single pion production, which is expected to occur primarily through the $\Delta(1232)$ resonance. Cross sections as functions of pion production angle and kinetic energy are reported and compared to predictions from different theoretical calculations and generator-based models, for neutrinos ranging in energy from 1.5 GeV to 10 GeV. The data are best described by calculations which include significant contributions from pion intranuclear rescattering. These measurements constrain the primary interaction rate and the role of final state interactions in pion production, both of which need to be well understood by neutrino oscillation experiments. • ### MINERvA neutrino detector response measured with test beam data(1501.06431) April 8, 2015 hep-ex, physics.ins-det The MINERvA collaboration operated a scaled-down replica of the solid scintillator tracking and sampling calorimeter regions of the MINERvA detector in a hadron test beam at the Fermilab Test Beam Facility. This article reports measurements with samples of protons, pions, and electrons from 0.35 to 2.0 GeV/c momentum. The calorimetric responses to protons, pions, and electrons are obtained from these data. A measurement of the parameter in Birks' law and an estimate of the tracking efficiency are extracted from the proton sample. Overall the data are well described by a Geant4-based Monte Carlo simulation of the detector and particle interactions with agreements better than 4%, though some features of the data are not precisely modeled. These measurements are used to tune the MINERvA detector simulation and evaluate systematic uncertainties in support of the MINERvA neutrino cross section measurement program. • ### Measurement of muon plus proton final states in $\nu_{\mu}$ Interactions on Hydrocarbon at $\langle E_{\nu} \rangle$ = 4.2 GeV(1409.4497) April 6, 2015 hep-ex A study of charged-current muon neutrino scattering on hydrocarbon in which the final state includes a muon and a proton and no pions is presented. Although this signature has the topology of neutrino quasielastic scattering from neutrons, the event sample contains contributions from both quasielastic and inelastic processes where pions are absorbed in the nucleus.
The analysis accepts events with muon production angles up to $70^{\circ}$ and proton kinetic energies greater than 110 MeV. The extracted cross section, when based completely on hadronic kinematics, is well-described by a simple relativistic Fermi gas nuclear model including the neutrino event generator modeling for inelastic processes and particle transportation through the nucleus. This is in contrast to the quasielastic cross section based on muon kinematics, which is best described by an extended model that incorporates multi-nucleon correlations. This measurement guides the formulation of a complete description of neutrino-nucleus interactions that encompasses the hadronic as well as the leptonic aspects of this process. • ### Measurement of Ratios of $\nu_{\mu}$ Charged-Current Cross Sections on C, Fe, and Pb to CH at Neutrino Energies 2-20 GeV(1403.2103) March 10, 2015 hep-ex, nucl-ex We present measurements of $\nu_{\mu}$ charged-current cross section ratios on carbon, iron, and lead relative to a scintillator (CH) using the fine-grained MINERvA detector exposed to the NuMI neutrino beam at Fermilab. The measurements utilize events of energies $2<E_{\nu}<20$ GeV, with $\left< E_{\nu}\right> = 8$ GeV, which have a reconstructed $\mu^{-}$ scattering angle less than $17^\circ$, to extract ratios of inclusive total cross sections as a function of neutrino energy $E_{\nu}$ and flux-integrated differential cross sections with respect to the Bjorken scaling variable $x$. These results provide the first high-statistics direct measurements of nuclear effects in neutrino scattering using different targets in the same neutrino beam. Measured cross section ratios exhibit a relative depletion at low $x$ and enhancement at large $x$. Both become more pronounced as the nucleon number of the target nucleus increases. The data are not reproduced by GENIE, a conventional neutrino-nucleus scattering simulation, or by the alternative models for the nuclear dependence of inelastic scattering that are considered.
2020-11-29 08:45:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7458477020263672, "perplexity": 2019.2264666160288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141197278.54/warc/CC-MAIN-20201129063812-20201129093812-00677.warc.gz"}
https://brilliant.org/discussions/thread/martin-gardner-can-you-make-7-cylinders-all-touch/?sort=new
# Martin Gardner - "Can you make 7 cylinders all touch each other?" 50 years later...

A Tale of Touching Tubes

Using computers to solve a system of 20 variables, a challenge to make seven cigarettes touch each other is answered. Gardner himself technically answered his own challenge, but not in such an interesting way, because the ends of the cigarettes were involved. Now it can be done with 7 infinitely long cigarettes.

Note by Dan Krol, 6 years, 6 months ago

Stand a cigarette upright. Then lay 3 flat cigarettes all touching it and each other, which is possible if they're 120 degrees apart. Do the same on top of that with another 3, but the mirror image of the first. I'm not sure why we need to invoke "infinitely long cigarettes". - 6 years, 4 months ago

An equilateral triangle made of cigarettes is going to have a much larger area than the cross section of a cigarette, so I don't see how an upright cigarette can touch all three of them. Also, if you lay them "flat" as you say, that requires cigarettes which are not infinitely long, or else the sides of the triangle will run into each other. So, the length of the cigarettes is relevant. Maybe I'm not imagining your model correctly though. Staff - 6 years, 3 months ago

I was once asked the "how many" version of this question for both finite and infinite cylinders as an interview question. I'm still kind of at a loss what was expected as a "good" interview response... - 6 years, 6 months ago

Update: I just reached out to one of the people who interviewed me at that company to let them know that the problem is now solved. He said "Worst interview question ever." - 6 years, 6 months ago

2nd Try: Stand a cigarette upright. Then lay 3 flat cigarettes all touching it and each other, which is possible if they're 120 degrees apart. Do the same on top of that with another 3, but the mirror image of the first.
I'm not sure why we need to invoke "infinitely long cigarettes". - 6 years, 6 months ago I'm interested as to why you have cigarettes in the first place... Bad habit. :D - 6 years, 4 months ago What's a habit worse than cigarettes is not understanding a problem correctly the first time around. - 6 years, 4 months ago Haha, I am victim to this habit. :D - 6 years, 4 months ago Interesting, I didn't consider that making the cigarette longer could make it easier. But that's because the cigarettes are of a certain thickness. Maybe the solution from the article still wouldn't work with real cigarettes. Maybe Virginia Slims, heh. Staff - 6 years, 6 months ago In re-reading this question, I understand now that what was done recently by those mathematicians was to find a solution using 7 infinitely long cigarettes, which is much harder to do. Again, I had fallen back on my bad habit of solving a problem before I've fully understood it. - 6 years, 6 months ago
2020-10-26 10:17:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 8, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9396154880523682, "perplexity": 1839.4782344275457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107891203.69/warc/CC-MAIN-20201026090458-20201026120458-00139.warc.gz"}
https://physics.stackexchange.com/questions/258298/implication-of-breakdown-of-scale-invariance-for-problems-with-intrinsic-length
# Implication of breakdown of scale invariance for problems with intrinsic length or time scales? According to the Wikipedia article on scale invariance, the equations for the electric (and magnetic) fields: $$\nabla^2\vec{E}=\frac{1}{c^2}\frac{\partial^2\vec{E}}{\partial t^2}\hspace{0.3cm}\text{and}\hspace{0.3cm}\nabla^2\vec{B}=\frac{1}{c^2}\frac{\partial^2\vec{B}}{\partial t^2}$$ are invariant under the scale transformation $\vec{r}\rightarrow \lambda\vec{r}$ and $t\rightarrow\lambda t$. This implies that, if $\vec{E}(\vec{r},t)$ (and $\vec{B}(\vec{r},t)$) is a solution of Maxwell's equations in free space, then $\vec{E}(\lambda\vec{r},\lambda t)$ (and $\vec{B}(\lambda\vec{r},\lambda t)$), having the same functional form, are also solutions ($\lambda$ is a real number). The reason for this invariance, as I understand, is the absence of any intrinsic length scale in the problem. Whenever there is a length scale, as for fields in a conductor, the existence of a solution $\vec{E}(\vec{r},t)$ doesn't necessarily guarantee the existence of a scaled solution $\vec{E}(\lambda\vec{r},\lambda t)$ (with the same functional form) because of the breakdown of scale invariance. Is my inference correct? Does it mean that at new scales (of time and length) newer forms of solutions can emerge? Similarly for other quantum fields: if they are massless they are scale invariant; an example is the massless scalar Klein-Gordon field. As soon as a mass term $m^2$ is introduced in the equation, it is no longer scale invariant, and the field decays exponentially with $m$ setting the scale.
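The scale invariance claimed above is easy to verify symbolically. A minimal sympy sketch, assuming a 1-D reduction of the wave equation as a stand-in for the full vector equations:

```python
# Check that if E(x, t) solves the 1-D wave equation E_xx = E_tt / c^2,
# then so does the rescaled field E(lam*x, lam*t).
import sympy as sp

x, t = sp.symbols('x t', real=True)
c, lam = sp.symbols('c lam', positive=True)

def wave_residual(u):
    """Left-hand side minus right-hand side of u_xx = u_tt / c**2."""
    return sp.diff(u, x, 2) - sp.diff(u, t, 2) / c**2

E = sp.sin(x - c * t)                         # a solution of the wave equation
E_scaled = E.subs({x: lam * x, t: lam * t})   # E(lam*x, lam*t)

print(sp.simplify(wave_residual(E)))          # 0
print(sp.simplify(wave_residual(E_scaled)))   # 0  -- scale invariance
```

The chain rule pulls a factor $\lambda^2$ out of both second derivatives, so the residual vanishes for every $\lambda$; a mass term $m^2 E$ would break this, since it acquires no such factor.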
2020-06-05 16:33:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8696017265319824, "perplexity": 281.7305299609401}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348502097.77/warc/CC-MAIN-20200605143036-20200605173036-00237.warc.gz"}
https://vidifonycihagez.skayra.com/limit-infimum-of-a-sequence-for-academic-writing-38286sq.html
# Limit infimum of a sequence for academic writing

All solutions will be checked for mathematical accuracy by members of a board selected by the Chicago Mathematical Society. The multiples of 4 are 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64, 68, 72, ... For a while, the Pythagoreans treated as a secret the discovery that the square root of two is irrational, and, according to legend, Hippasus was murdered for divulging it. Likewise, if the infimum exists, it is unique. A more detailed list of topics is included in the course outline below. Other ways to denote a sequence are discussed after the examples.

MATH Teaching Methods in Mathematics I: Basic concepts of the course and the relationship between these concepts; the legal basis for education according to the Constitution and the national education law; the general objectives of mathematics teaching; the use of methods, techniques, equipment and materials.

For example, the first four odd numbers form the sequence 1, 3, 5, 7. Albert Einstein stated that as far as the laws of mathematics refer to reality, they are not certain. Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences. There are a number of ways to denote a sequence, some of which are more useful for specific types of sequences. Assume we have some arbitrary function. The numerical value of the square root of two, truncated to 65 decimal places, is 1.... That will give you the average velocity in the time interval from ... to .... The limit inferior of a sequence $x_n$. The Fibonacci numbers form the integer sequence whose elements are the sum of the previous two elements. If S contains a least element, then that element is the infimum; otherwise, the infimum does not belong to S or does not exist.

Minimal upper bounds: a partially ordered set may have many minimal upper bounds without having a least upper bound. The needs of students from diverse cultural backgrounds commonly taught in mainstream classrooms, such as students from culturally and linguistically diverse backgrounds and students with special learning needs, will be addressed, and teaching resources and inclusive strategies will be evaluated and developed. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. For example, the infinite sequence of positive odd integers can be written 1, 3, 5, 7, ... The number 1 is a lower bound, but not the greatest lower bound, and hence not the infimum. Other examples of sequences include ones made up of rational numbers, real numbers, and complex numbers. One way to specify a sequence is to list the elements. It is named after Henri Lebesgue, who introduced the integral, and it is also a central part of the axiomatic theory of probability. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both.

Computing the limit of a sequence of sets defined via indicator functions: to start writing integrals, you need first to show $1_A$ is measurable and integrable. How is the limit infimum of sets different from the limit infimum of a sequence of real numbers?

1. In fact, it is the greatest lower bound (or infimum) of the sequence, and the larger a natural number we choose, the closer the sequence element will be to it.
It's consequently not completely absurd to suggest that the sequence approaches its limit in such a way that we may bound it. For questions on suprema and infima, use together with a subject-area tag, such as (real-analysis) or (order-theory).

Oxford Academic. Google Scholar. Christopher Leininger. Babak Modami. First, the weak* limit of an infinite sequence of weighted distinct Bers curves at times $t_i \to b$ is an ending measure of the ray $r$. Observe the infimum of the function $F$ on any stratum $\mathcal{S}(v)$...

Prove that the sequence $\left\{a_{n}\right\}_{n=1}^{\infty}$ has a limit by showing that it is a Cauchy sequence.

The Least Upper Bound Principle: In this section we will discuss certain consequences of the completeness of the real numbers that we introduced as the Cauchy convergence principle in the previous section.

In mathematics, the infimum (abbreviated inf; plural infima) of a subset S of a partially ordered set T is the greatest element in T that is less than or equal to all elements of S, if such an element exists. Consequently, the term greatest lower bound (abbreviated as GLB) is also commonly used. The supremum (abbreviated sup; plural suprema) of a subset S of a partially ordered set T is the least element in T that is greater than or equal to all elements of S, if such an element exists.
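Since the limit inferior is invoked above without being stated, recall that $\liminf_n x_n = \lim_{n\to\infty} \inf_{m\ge n} x_m$. A small self-contained Python sketch, using a sequence of my own choosing (not from the source), shows how it differs from the plain infimum:

```python
# liminf a_n = lim_{n->inf} inf_{m>=n} a_m, approximated on finitely many terms.
# Example sequence (my own choice): a_n = (-1)^n * (1 + 1/n), whose infimum
# is a_1 = -2 while its liminf is -1.
N = 2000
a = [(-1) ** n * (1 + 1 / n) for n in range(1, N + 1)]

inf_a = min(a)                                # infimum of the sampled terms
tail_infima = [min(a[k:]) for k in range(N)]  # inf of each tail {a_m : m >= n}

print(inf_a)             # -2.0 (attained at n = 1, so it is also a minimum)
print(tail_infima[1000]) # about -1.001: tail infima increase toward liminf = -1
```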
2019-11-13 04:56:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6998540759086609, "perplexity": 813.6007338259593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665985.40/warc/CC-MAIN-20191113035916-20191113063916-00103.warc.gz"}
https://deepai.org/publication/hessianfr-an-efficient-hessian-based-follow-the-ridge-algorithm-for-minimax-optimization
HessianFR: An Efficient Hessian-based Follow-the-Ridge Algorithm for Minimax Optimization

Wide applications of differentiable two-player sequential games (e.g., image generation by GANs) have raised much interest and attention of researchers to study efficient and fast algorithms. Most of the existing algorithms are developed based on nice properties of simultaneous games, i.e., convex-concave payoff functions, but are not applicable to sequential games with different settings. Some conventional gradient descent ascent algorithms theoretically and numerically fail to find the local Nash equilibrium of the simultaneous game or the local minimax (i.e., local Stackelberg equilibrium) of the sequential game. In this paper, we propose HessianFR, an efficient Hessian-based Follow-the-Ridge algorithm with theoretical guarantees. Furthermore, the convergence of the stochastic algorithm and the approximation of the Hessian inverse are exploited to improve algorithm efficiency. A series of experiments of training generative adversarial networks (GANs) have been conducted on both synthetic and real-world large-scale image datasets (e.g., MNIST, CIFAR-10 and CelebA). The experimental results demonstrate that the proposed HessianFR outperforms baselines in terms of convergence and image generation quality.

1 Introduction

Two-player games, as extensions of minimization problems, achieve wide applications especially in economics [echenique2003equilibrium] and machine learning (e.g., generative adversarial networks (GANs) [goodfellow2014generative, arjovsky2017wasserstein, park2019sphere], adversarial learning [shaham2018understanding, madry2018towards] and reinforcement learning [dai2018sbeed]). We focus solely on differentiable two-player zero-sum games, which are mathematically formulated as the following min-max optimization problem:
$$\min_x \max_y f(x, y),$$
where $x$ and $y$ are the two players and $f$ is the payoff function. There are mainly two types of two-player games, i.e., simultaneous games and sequential games. Taking the words literally, the difference between the two types of games lies in the order of the actions the two players take. In two-player simultaneous games, $x$ and $y$ have the same position, in that each is blind to the other's action before they finish at each step. It is further assumed that the payoff function is convex-concave (i.e., convex in $x$ and concave in $y$) in simultaneous learning. Sequential games strictly require an order of the two players' actions. The variable $x$ plays the role of a leader who aims to reduce the loss (pay) while the follower $y$ tries to maximize his gains after observing the leader's action. Most previous works stabilized GANs training through regularization and specific modeling.
Generators and discriminators in generative adversarial networks (GANs) act as the leader and the follower respectively. Training GANs is usually equivalent to solving a sequential min-max optimization problem [goodfellow2014generative, arjovsky2017towards, arjovsky2017wasserstein, gulrajani2017improved, park2019sphere]. Optimization difficulties of GANs, especially instabilities and nonconvergence, have been emphasized and discussed for a long time, since GANs were first proposed [salimans2016improved, goodfellow2016nips, arjovsky2017wasserstein]. From the model's point of view, proper regularization terms added to the loss function numerically stabilize training [goodfellow2016nips, gulrajani2017improved, brockLRW17, miyato2018spectral, cao2018improving]. Brock et al. [brockLRW17] encouraged weights to be orthonormal to mitigate the instabilities of GANs by introducing an additional regularization term. With 1-Lipschitz constraints on discriminators, Wasserstein GAN achieves much better performance than vanilla GAN in terms of generation quality and training stability [arjovsky2017wasserstein, gulrajani2017improved, wei2018improving, adler2018banach]. However, these models still suffer from the same difficulties when trained with gradient descent ascent algorithms. To escape the optimization difficulties, some recent works on generative models aim to estimate the score of the target distribution by minimizing the Fisher divergence [song2019generative, song2020sliced]. There is no free lunch: it is hard to sample high-quality, high-dimensional data from the score, although score-based models are easier to optimize, because the score loses some information compared with the density function. Therefore, it is urgent to develop efficient algorithms for GANs training (sequential games). Gradient descent ascent (GDA) is the extension of gradient descent from minimization problems to min-max problems. Unfortunately, GDA has been shown to suffer from undesirable convergence and strong rotation around fixed points [daskalakis2018limit]. To overcome the mentioned drawbacks, several variants have been proposed. Two Time-Scale GDA [heusel2017gans] and GDA-k [goodfellow2014generative] are two variants of GDA and are widely used in training GANs. Extra gradient (EG) [korpelevich1976extragradient, gidel2018a], optimistic GDA (OGDA) [daskalakis2018training] and consensus optimization (CO) [mescheder2017numerics], extended from algorithms for minimization problems, improve the convergence of GDA. Unfortunately, they are designed for solving convex-concave problems (simultaneous problems). Recently, Jin et al. [jin2020local] defined the equilibrium (i.e., local minimax) for differentiable sequential games, which is more appropriate than the Nash equilibrium, based on the works of Evtushenko [evtushenko1974some, evtushenko1974iterative]. The local minimax takes into account the sequential structure and makes use of the Schur complement of the Hessian matrix rather than merely the block diagonal. Based on the definition of local minimax, FR [Wang2020On] and TGDA [fiez2020implicit] were proposed recently and locally converge to local minimax points. Furthermore, to accelerate convergence, Newton-type methods are proposed in [zhang2020newton]. However, some of them may not be applicable in machine learning (deep learning) tasks. We discuss this further in Section 3.
In this paper, we propose a novel algorithm, namely HessianFR, which achieves better convergence than FR [Wang2020On] by adding Hessian information to the follower's update in each step, but without significant additional computation. Mathematically, the Hessian information reduces the condition number of the Jacobian matrix and thus accelerates convergence. Overall, equipped with this perspective, here are our contributions:

• A new algorithm is proposed and is theoretically guaranteed to locally converge, and only converge, to local minimax points with proper learning rates.

• We theoretically and numerically study several fast computation methods, including diagonal methods as well as conjugate gradient for the Hessian inverse, and stochastic learning for lower computation costs per update. Both diagonal methods and conjugate gradient perform well in practice.

• Finally, we apply our algorithm to training generative adversarial networks on a synthetic dataset to show the superiority of HessianFR over other algorithms in terms of iterations and seconds to convergence. Furthermore, we test our algorithm in the stochastic setting on large-scale image datasets (e.g., MNIST, CIFAR-10 and CelebA). According to the numerical results, the proposed HessianFR outperforms other algorithms in terms of image generation quality.

2 Preliminaries

Notation. In this paper, we use $\|\cdot\|$ to represent the Euclidean norm of a vector and the corresponding spectral norm of a matrix. Concisely, we write the blocks of the Hessian matrix of $f$ as $H_{xx} = \nabla_{xx} f$, $H_{xy} = \nabla_{xy} f$, $H_{yx} = \nabla_{yx} f$ and $H_{yy} = \nabla_{yy} f$. Sometimes we write out the arguments, e.g. $H_{yy}(x_t, y_t)$, to highlight the spatial and temporal location and avoid ambiguity of notation; analogous notations hold for gradients and so on. Usually, the location is omitted if it is clear and straightforward. To highlight Hessian blocks evaluated at a given point of interest $(x^*, y^*)$, we denote them by $H^*_{xx}$, $H^*_{yy}$, etc. We denote the maximal eigenvalue, the minimal eigenvalue and the spectral radius of a matrix by $\lambda_{\max}(\cdot)$, $\lambda_{\min}(\cdot)$ and $\lambda(\cdot)$ respectively.

Differentiable two-player zero-sum sequential games are mathematically formulated as solving the following min-max optimization problem:
$$\min_x \max_y f(x, y), \tag{1}$$
where $x$ and $y$ are the two players and $f$ is the payoff function. Note that the payoff function is nonconvex-nonconcave in the sequential setting. In this paper, we mainly focus on solving the min-max problem for training generative adversarial networks.

2.1 Why Minimax Optimization

Most previous works define the local Nash equilibrium for min-max problems in training generative adversarial networks. We first review the definition and some properties of the (local) Nash equilibrium here. Then, we demonstrate that it is overly strict to use the Nash equilibrium in training generative adversarial networks.

Definition 1 (Local Nash equilibrium). A point $(x^*, y^*)$ is a local Nash equilibrium for the min-max problem Eq. 1 if there exists $\delta > 0$ such that
$$f(x^*, y) \le f(x^*, y^*) \le f(x, y^*),$$
for all $(x, y)$ satisfying $\|x - x^*\| \le \delta$ and $\|y - y^*\| \le \delta$.

Proposition 1 (Necessary conditions for local Nash equilibrium). A local Nash equilibrium $(x^*, y^*)$ of a twice differentiable payoff function must satisfy the following conditions: (1) it is a critical point (i.e., $\nabla_x f(x^*, y^*) = 0$ and $\nabla_y f(x^*, y^*) = 0$); (2) $H^*_{xx} \succeq 0$ and $H^*_{yy} \preceq 0$.

Proposition 2 (Sufficient conditions for local Nash equilibrium). For a twice differentiable min-max problem Eq. 1, if a critical point $(x^*, y^*)$ satisfies $H^*_{xx} \succ 0$ and $H^*_{yy} \prec 0$, then it is a (strict) local Nash equilibrium.

A local Nash equilibrium implies that the payoff function is locally convex-concave, even though this need not hold globally as it does for simultaneous games. In other words, a local Nash equilibrium for the min-max problem $\min_x \max_y f$ is also a local Nash equilibrium for the max-min problem $\max_y \min_x f$.
The Nash equilibrium is overly strict as a notion of equilibrium for sequential games. Moreover, it is hard to say whether a Nash equilibrium exists at all for nonconvex-nonconcave min-max problems. For example, one can exhibit a two-dimensional payoff function for which Proposition 1 easily shows that no Nash equilibrium exists. The necessary condition for a Nash equilibrium requires the Hessian blocks $H_{xx}$ and $H_{yy}$ to be semi positive definite and semi negative definite respectively, regardless of the correlation term $H_{xy}$; the Nash conditions thus ignore the correlation of the variables $x$ and $y$ through $f$ at the local Nash equilibrium.

Generative adversarial networks, first proposed by Goodfellow et al. [goodfellow2014generative], attract much attention and are widely applied in various fields and applications [bowman2015generating, odena2017conditional]. In training generative adversarial networks, $x$ and $y$ represent the trainable parameters of the neural networks for generators and discriminators respectively. In JS-GAN (vanilla GAN) [goodfellow2014generative], the discriminator parameterized by $y$ measures the JS divergence between the generated distribution and the target distribution. Wasserstein GAN [arjovsky2017wasserstein] adopts a weaker but softer metric which is numerically represented by 1-Lipschitz neural networks parameterized by $y$. The discriminator in Sphere GAN [park2019sphere] first projects data to a unit sphere and then measures the data distance on the manifold. Suppose that $y^*$ maximizes the measured distance between the generated distribution and the target distribution at $x^*$; it is not necessary for $y^*$ to be a local maximizer of $f(x, \cdot)$ for all $x$ in a neighborhood of $x^*$. Based on the above analysis, it is inappropriate to describe the solution of sequential games by the Nash equilibrium.

Jin et al. [jin2020local] defined the local minimax as the equilibrium of differentiable sequential games Eq. 1, which is intuitively and theoretically more feasible.

Definition 2 (Local minimax). A point $(x^*, y^*)$ is a local minimax for the min-max problem Eq. 1 if there exist $\delta_0 > 0$ and a continuous function $h$ satisfying $h(\delta) \to 0$ as $\delta \to 0$, such that
$$f(x^*, y) \le f(x^*, y^*) \le \max_{y': \|y' - y^*\| \le h(\delta)} f(x, y'),$$
for any $\delta \in (0, \delta_0]$ and any $(x, y)$ satisfying $\|x - x^*\| \le \delta$ and $\|y - y^*\| \le \delta$.

By the implicit function theorem, the definition can be further clarified [Wang2020On]: (1) $y^*$ is a local maximum of $f(x^*, \cdot)$; (2) $x^*$ is a local minimum of $\phi(x) := f(x, r(x))$, where $r$ is an implicit function defined by $\nabla_y f(x, r(x)) = 0$ in a neighborhood of $x^*$ with $r(x^*) = y^*$. Here, the local minimax does not require $y^*$ to be the local maximum of $f(x, \cdot)$ for all $x$ in a neighborhood of $x^*$. Note that the local maximum of $f(x, \cdot)$ is allowed to change slightly (through the function $h$) with $x$. For more details, please refer to [jin2020local].

Similar to the local Nash equilibrium, necessary and sufficient conditions for the local minimax are established in [jin2020local].

Proposition 3 (Necessary conditions for local minimax). A local minimax $(x^*, y^*)$ of a twice differentiable payoff function must satisfy the following conditions: (1) it is a critical point (i.e., $\nabla_x f(x^*, y^*) = 0$ and $\nabla_y f(x^*, y^*) = 0$); (2) $H^*_{yy} \preceq 0$ and, if $H^*_{yy} \prec 0$, the Schur complement satisfies $H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx} \succeq 0$.

Proposition 4 (Sufficient conditions for local minimax). For a twice differentiable min-max problem Eq. 1, if a critical point $(x^*, y^*)$ satisfies $H^*_{yy} \prec 0$ and $H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx} \succ 0$, then it is a (strict) local minimax.

Comparing Proposition 1 and Proposition 2 with Proposition 3 and Proposition 4, the main difference lies in the fact that the local minimax utilizes the Schur complement of the Hessian matrix while the local Nash equilibrium merely focuses on the block diagonal. Intuitively, the Schur complement carries more information than the block diagonal matrix, because the latter ignores the correlation of the two variables while the former uses the whole Hessian.
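To make the distinction concrete, here is a small Python check on a quadratic payoff of my own choosing, $f(x,y) = -x^2 + 4xy - y^2$ (not an example from the source): the origin fails the Nash conditions of Proposition 1, since $H_{xx} = -2 \not\succeq 0$, but satisfies the local-minimax conditions of Proposition 4.

```python
# Toy quadratic f(x, y) = -x^2 + 4xy - y^2 (my own example): the origin is a
# strict local minimax (Proposition 4) but not a local Nash equilibrium
# (Proposition 1), because the Nash test ignores the cross term H_xy.
H_xx, H_xy, H_yx, H_yy = -2.0, 4.0, 4.0, -2.0    # Hessian blocks at (0, 0)

nash_necessary = (H_xx >= 0) and (H_yy <= 0)     # Proposition 1, condition (2)
schur = H_xx - H_xy * (1.0 / H_yy) * H_yx        # H_xx - H_xy H_yy^{-1} H_yx
minimax_sufficient = (H_yy < 0) and (schur > 0)  # Proposition 4

print(nash_necessary)      # False: H_xx = -2 is not >= 0
print(schur)               # -2 - 4 * (-1/2) * 4 = 6 > 0
print(minimax_sufficient)  # True: the origin is a strict local minimax
```

Indeed, along the ridge $r(x) = 2x$ one gets $\phi(x) = f(x, 2x) = 3x^2$, which has a strict minimum at $x = 0$, confirming the Schur-complement test.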
2.2 Why not GDA and its Variants

Two Time-Scale GDA (i.e., TTUR in [heusel2017gans]) and GDA-k (adopted in most GANs training [goodfellow2014generative, arjovsky2017wasserstein, gulrajani2017improved]) are the two most popular variants of GDA. The global convergence of Two Time-Scale GDA and GDA-k is less than satisfactory. Two Time-Scale GDA may converge to an undesired point which is neither a Nash equilibrium nor a local minimax. And GDA-k is convergent only if it satisfies a Max-Oracle condition which is extremely strict in practice [jin2020local]. Fortunately, GDA has local convergence properties [zhang2020newton] for local minimax, which may explain the success of GANs trained by GDA in various tasks. We further find that Extra Gradient (EG) [korpelevich1976extragradient], which is derived for solving convex-concave problems (simultaneous games), is not suitable for solving the sequential min-max problem. For more details on the (local) convergence of GDA (and its variants) to local minimax, please see Appendix A.

2.3 Follow-the-Ridge (FR) Algorithm

The Follow-the-Ridge (FR) algorithm [Wang2020On] is the first work studying the min-max problem Eq. 1 based on the local minimax. We briefly introduce the main idea of the FR algorithm here. Suppose that $(x_t, y_t)$ is on the ridge, i.e., $\nabla_y f(x_t, y_t) = 0$ and $y_t = r(x_t)$, where $r$ is the implicit function defined in Definition 2. In each step $t$, $x$ is updated by gradient descent, i.e., $x_{t+1} = x_t - \eta_x \nabla_x f(x_t, y_t)$. The leader can never foresee the follower's action, which is why simple gradient descent is adopted for updating $x$. However, the follower witnesses the update of $x$ and hopes to take advantage of this additional information to stay on the ridge (i.e., to satisfy $\nabla_y f(x_{t+1}, y_{t+1}) = 0$). Given the current state $\nabla_y f(x_t, y_t) = 0$ and the known update $x_{t+1} - x_t$, a Taylor expansion of $\nabla_y f$ implies that
$$0 = \nabla_y f(x_{t+1}, y_{t+1}) \approx \nabla_y f(x_t, y_t) + H_{yx}(x_{t+1} - x_t) + H_{yy}(y_{t+1} - y_t) = H_{yx}(-\eta_x \nabla_x f(x_t, y_t)) + H_{yy}(y_{t+1} - y_t). \tag{2}$$
Therefore, the correction term $\eta_x H_{yy}^{-1} H_{yx} \nabla_x f(x_t, y_t)$ brings $y_{t+1}$ to a point such that $(x_{t+1}, y_{t+1})$ stays on the ridge. If $(x_t, y_t)$ is not on the ridge, then a gradient ascent update for $y$ is necessary for moving closer to the ridge. In conclusion, the FR algorithm has an additional correction term compared with GDA to keep the iterates parallel to, or on, the ridge. The algorithm is as follows:
$$x_{t+1} \leftarrow x_t - \eta_x \nabla_x f(x_t, y_t), \qquad y_{t+1} \leftarrow y_t + \eta_y \nabla_y f(x_t, y_t) + \eta_x H_{yy}^{-1} H_{yx} \nabla_x f(x_t, y_t),$$
where $\eta_x$ and $\eta_y$ are the learning rates for $x$ and $y$ respectively.
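For concreteness, a minimal numpy sketch of one FR step follows. The gradient and Hessian callables are illustrative placeholders of my own (the paper does not prescribe this interface), and the Hessian block is formed and factored explicitly, which is only sensible at toy scale.

```python
# One Follow-the-Ridge step. grad_x, grad_y, hess_yy, hess_yx are placeholder
# callables returning numpy arrays; at scale one would use Hessian-vector
# products instead of dense blocks.
import numpy as np

def fr_step(x, y, grad_x, grad_y, hess_yy, hess_yx, eta_x, eta_y):
    gx, gy = grad_x(x, y), grad_y(x, y)
    # Correction: solve H_yy c = H_yx gx rather than inverting H_yy explicitly.
    c = np.linalg.solve(hess_yy(x, y), hess_yx(x, y) @ gx)
    x_next = x - eta_x * gx
    y_next = y + eta_y * gy + eta_x * c
    return x_next, y_next
```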
3 HessianFR

In this section, we first develop the HessianFR algorithm based on the FR algorithm and the Newton method. Theoretically, we show that the proposed HessianFR algorithm has a greater convergence rate to strict local minimax points than the FR algorithm. To reduce the computational costs in large-scale problems, we extend the deterministic HessianFR to a stochastic HessianFR with theoretical convergence guarantees. Furthermore, we discuss several computation methods for the Hessian inverse required by the proposed HessianFR.

3.1 Deterministic Algorithm

In this part, we introduce the motivation for the proposed algorithm and then develop its local convergence. The theoretical analysis conveys that the proposed HessianFR is better than FR [Wang2020On] when $H_{yy}$ is ill-conditioned, and is not worse than GDN [zhang2020newton]. Moreover, HessianFR is more computationally friendly than GDN in implementation.

3.1.1 Motivation

According to the convergence analysis of the FR algorithm in [Wang2020On], the theoretical convergence rate is related to the condition numbers of the Schur complement and of $H_{yy}$: the Jacobian matrix of FR at a local minimax is similar to
$$I - \eta_x \begin{bmatrix} H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx} & H^*_{xy} \\ 0 & -c\,H^*_{yy} \end{bmatrix}, \tag{3}$$
which implies that the maximal eigenvalue is
$$1 - \eta_x \min\{\lambda_{\min}(H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx}),\ \lambda_{\min}(-c\,H^*_{yy})\},$$
where
$$\eta_x < \frac{2}{\max\{\lambda_{\max}(H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx}),\ \lambda_{\max}(-c\,H^*_{yy})\}}$$
and $c = \eta_y / \eta_x$.

How can we improve the convergence of FR without extra computation (or with acceptable computation costs)? A direct way is to decrease the condition numbers of the two diagonal blocks of the Jacobian matrix, i.e., of $H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx}$ and $-c\,H^*_{yy}$. Furthermore, we hope the follower to be close to the optimal one, i.e., $y_t \approx r(x_t)$. The action of $y$ is essential for the min-max problem, and fast convergence of the follower is expected. In training GANs, weak discriminators lead to disasters and failures. Therefore, instead of improving the updates for $x$, accelerating $y$ seems more reasonable.

Observe that the correction term for $y$ involves the inverse of the Hessian $H_{yy}$. This reminds us of the Newton method, which outperforms gradient-based (first-order) methods theoretically and numerically under some regularity conditions. Taking advantage of the Newton method, an extra correction term (Newton step) can accelerate the convergence of $y$, i.e.,
$$y_{t+1} \leftarrow y_t + \eta_{y_1} \nabla_y f(x_t, y_t) - \eta_{y_2} H_{yy}^{-1} \nabla_y f(x_t, y_t) + \eta_x H_{yy}^{-1} H_{yx} \nabla_x f(x_t, y_t), \tag{4}$$
where $\eta_{y_1}$ and $\eta_{y_2}$ are the learning rates for the gradient ascent term and the Newton correction term respectively. Moreover, compared with FR, HessianFR merely requires the additional computation of $H_{yy}^{-1} \nabla_y f(x_t, y_t)$. The extra computation costs are acceptable and can be ignored in practice. The algorithm is named "HessianFR" because it adds a Hessian (Newton) step in the context of the FR algorithm. The pseudocode for HessianFR is shown in Algorithm 1. Here, we use a finite difference to compute $H_{yx} \nabla_x f(x_t, y_t)$, i.e.,
$$H_{yx} \nabla_x f(x_t, y_t) \approx \frac{\nabla_y f(x_t + \alpha \nabla_x f(x_t, y_t), y_t) - \nabla_y f(x_t, y_t)}{\alpha}, \tag{5}$$
when $\alpha$ is small. As for the computation of $H_{yy}^{-1}$, we will discuss it in Section 3.3.

3.1.2 Relation to other algorithms

HessianFR is actually a generalization of FR [Wang2020On] and GDN [zhang2020newton]. It is equivalent to FR if we set $\eta_{y_2} = 0$. The update rule for GDN is as follows:
$$x_{t+1} \leftarrow x_t - \eta_x \nabla_x f(x_t, y_t), \qquad y_{t+1} \leftarrow y_t - (\nabla_{yy} f)^{-1} \nabla_y f(x_{t+1}, y_t).$$
Note that by Taylor expansion, we have
$$y_{t+1} = y_t - (\nabla_{yy} f)^{-1} \nabla_y f(x_{t+1}, y_t) = y_t - (\nabla_{yy} f)^{-1} \nabla_y f(x_t - \eta_x \nabla_x f(x_t, y_t), y_t) \approx y_t - (\nabla_{yy} f)^{-1} \nabla_y f(x_t, y_t) + \eta_x (\nabla_{yy} f)^{-1} \nabla_{yx} f \, \nabla_x f(x_t, y_t), \tag{6}$$
which is a special case of (equivalent to) HessianFR with $\eta_{y_1} = 0$ and $\eta_{y_2} = 1$. Theoretical and numerical comparisons of these three related algorithms will be discussed later.
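Putting Eqs. (4) and (5) together, a minimal numpy sketch of one HessianFR step is shown below. As in the FR sketch above, the gradient callables are placeholders of my own, and $H_{yy}$ is formed and solved densely, which is only sensible at toy scale.

```python
# One HessianFR step following Eqs. (4)-(5): FR plus a Newton correction on y,
# with H_yx grad_x f approximated by the finite difference of Eq. (5).
import numpy as np

def hessianfr_step(x, y, grad_x, grad_y, hess_yy,
                   eta_x, eta_y1, eta_y2, alpha=1e-5):
    gx, gy = grad_x(x, y), grad_y(x, y)
    # Eq. (5): H_yx gx ~ (grad_y f(x + alpha*gx, y) - grad_y f(x, y)) / alpha
    hyx_gx = (grad_y(x + alpha * gx, y) - gy) / alpha
    Hyy = hess_yy(x, y)
    newton = np.linalg.solve(Hyy, gy)      # H_yy^{-1} grad_y f
    ridge = np.linalg.solve(Hyy, hyx_gx)   # H_yy^{-1} H_yx grad_x f
    x_next = x - eta_x * gx
    y_next = y + eta_y1 * gy - eta_y2 * newton + eta_x * ridge
    return x_next, y_next
```

Setting eta_y2 = 0 recovers the FR step, consistent with Section 3.1.2.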
3.1.3 Convergence Analysis

We first prove the local convergence properties of HessianFR: it converges, and only converges, to a local minimax. Then, the theoretical convergence rates are compared for HessianFR, FR and GDN. With a proper choice of learning rates, our HessianFR is better than FR and is comparable with GDN in theory. Here, we first introduce the definition of strict stable points of an algorithm, which is useful for convergence analysis in optimization.

Definition 3 (Strict stable point of an algorithm). For an algorithm defined as $z_{t+1} = g(z_t)$, we call a point $z^*$ a strict stable point of the algorithm if
$$\lambda(J^*) < 1,$$
where $J^*$ is the Jacobian matrix of $g$ at $z^*$ and $\lambda(\cdot)$ denotes the spectral radius.

Theorem 1 (Local convergence of HessianFR). With a proper choice of learning rates, all strict local minimax points are strict stable fixed points of HessianFR. Moreover, any strict stable fixed point is a strict local minimax.

Proof. Let $c_1 = \eta_{y_1}/\eta_x$ and $c_2 = \eta_{y_2}/\eta_x$. The Jacobian matrix of HessianFR at a local minimax is
$$J^*_{\mathrm{HFR}} = I - \eta_x \begin{bmatrix} I & 0 \\ -H^{*-1}_{yy} H^*_{yx} & -c_1 I + c_2 H^{*-1}_{yy} \end{bmatrix} \begin{bmatrix} H^*_{xx} & H^*_{xy} \\ H^*_{yx} & H^*_{yy} \end{bmatrix}. \tag{7}$$
Observe that the first matrix in the product is invertible; $J^*_{\mathrm{HFR}}$ is then similar to
$$M^*_{\mathrm{HFR}} = I - \eta_x \begin{bmatrix} H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx} & H^*_{xy} \\ 0 & -c_1 H^*_{yy} + c_2 I \end{bmatrix}.$$
Therefore, $J^*_{\mathrm{HFR}}$ and $M^*_{\mathrm{HFR}}$ share the same eigenvalues. By the conditions (Proposition 4) for a strict local minimax, we have $H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx} \succ 0$ and $-c_1 H^*_{yy} + c_2 I \succ 0$. To guarantee the convergence of HessianFR, we require $\lambda(M^*_{\mathrm{HFR}}) < 1$, i.e.,
$$-I \prec I - \eta_x \begin{bmatrix} H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx} & H^*_{xy} \\ 0 & -c_1 H^*_{yy} + c_2 I \end{bmatrix} \prec I$$
and
$$0 \prec \eta_x \begin{bmatrix} H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx} & H^*_{xy} \\ 0 & -c_1 H^*_{yy} + c_2 I \end{bmatrix} \prec 2I,$$
which implies that
$$\eta_x < \frac{2}{\max\{\lambda_{\max}(H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx}),\ \lambda_{\max}(-c_1 H^*_{yy} + c_2 I)\}}$$
and $\lambda(J^*_{\mathrm{HFR}}) < 1$. Note that the spectral radius is the infimum over all matrix norms; according to [horn2012matrix], there exist a matrix norm $\|\cdot\|$ and a constant $\tilde{\lambda} > 0$ such that the Jacobian matrix satisfies
$$\|J^*_{\mathrm{HFR}}\| = 1 - \tilde{\lambda} < 1. \tag{8}$$
Consider the Taylor expansion of HessianFR, defined as $z_{t+1} = g(z_t)$ with $z = (x, y)$: we have
$$g(z_t) = g(z^*) + J^*_{\mathrm{HFR}}(z_t - z^*) + o(\|z_t - z^*\|).$$
There exists a small enough neighborhood of $z^*$ with radius $\epsilon > 0$ such that if $\|z_t - z^*\| \le \epsilon$, we have $\|z_{t+1} - z^*\| \le (1 - \tilde{\lambda}/2)\|z_t - z^*\|$. Therefore, the iterates converge linearly to $z^*$, which completes the proof of the local convergence of HessianFR.

Conversely, if a point $z^* = (x^*, y^*)$ is a strict stable fixed point of HessianFR, then $z^*$ is a critical point and the Jacobian at $z^*$ satisfies $\lambda(J^*_{\mathrm{HFR}}) < 1$. It further implies that $H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx} \succ 0$ and $-c_1 H^*_{yy} + c_2 I \succ 0$ for any admissible choice of $c_1 > 0$ and $c_2 \ge 0$. Therefore, $H^*_{yy} \prec 0$ and the Schur complement is positive definite, which concludes that $z^*$ is a strict local minimax.

Note that the convergence rate of HessianFR can be roughly estimated by the eigenvalues of $M^*_{\mathrm{HFR}}$:
$$\lambda(J^*_{\mathrm{HFR}}) \le 1 - 2\kappa_{\mathrm{HFR}}, \tag{9}$$
where
$$\kappa_{\mathrm{HFR}} = \frac{\min\{\lambda_{\min}(H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx}),\ \lambda_{\min}(-c_1 H^*_{yy} + c_2 I)\}}{\max\{\lambda_{\max}(H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx}),\ \lambda_{\max}(-c_1 H^*_{yy} + c_2 I)\}}.$$
Let $c_1 = \eta_y / \eta_x$; the Jacobian matrix of FR at the local minimax is similar to
$$M^*_{\mathrm{FR}} = I - \eta_x \begin{bmatrix} H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx} & H^*_{xy} \\ 0 & -c_1 H^*_{yy} \end{bmatrix} \tag{10}$$
and the corresponding spectral radius satisfies
$$\lambda(J^*_{\mathrm{FR}}) \le 1 - 2\kappa_{\mathrm{FR}}, \tag{11}$$
with
$$\kappa_{\mathrm{FR}} = \frac{\min\{\lambda_{\min}(H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx}),\ \lambda_{\min}(-c_1 H^*_{yy})\}}{\max\{\lambda_{\max}(H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx}),\ \lambda_{\max}(-c_1 H^*_{yy})\}}.$$
If $H^*_{yy}$ is ill-conditioned, so that $\lambda_{\max}(-c_1 H^*_{yy})$ is large but $\lambda_{\min}(-c_1 H^*_{yy})$ is small, then $\kappa_{\mathrm{FR}}$ is small. With a proper choice of $c_2$ such that
$$\frac{\lambda_{\min}(-c_1 H^*_{yy} + c_2 I)}{\lambda_{\max}(-c_1 H^*_{yy} + c_2 I)} > \frac{\lambda_{\min}(-c_1 H^*_{yy})}{\lambda_{\max}(-c_1 H^*_{yy})}, \tag{12}$$
we get $\kappa_{\mathrm{HFR}} > \kappa_{\mathrm{FR}}$ and thus a smaller bound $1 - 2\kappa_{\mathrm{HFR}} < 1 - 2\kappa_{\mathrm{FR}}$, which implies the improved theoretical convergence of HessianFR over FR.

Let $\eta_{y_1} = 0$ and $\eta_{y_2} = 1$; then the Jacobian matrix of GDN at the local minimax is similar to
$$M^*_{\mathrm{GDN}} = I - \eta_x \begin{bmatrix} H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx} & H^*_{xy} \\ 0 & \frac{1}{\eta_x} I \end{bmatrix}, \tag{13}$$
which implies that
$$\lambda(J^*_{\mathrm{GDN}}) \le 1 - 2\kappa_{\mathrm{GDN}}, \tag{14}$$
where
$$\kappa_{\mathrm{GDN}} = \frac{\lambda_{\min}(H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx})}{\lambda_{\max}(H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx})}.$$
With a proper choice of $c_1$ and $c_2$ (which always exists) such that
$$\lambda_{\min}(-c_1 H^*_{yy} + c_2 I) \ge \lambda_{\min}(H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx})$$
and
$$\lambda_{\max}(-c_1 H^*_{yy} + c_2 I) \le \lambda_{\max}(H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx}),$$
the theoretical convergence rate of HessianFR is equal to that of GDN. In conclusion, HessianFR is better than FR when $H^*_{yy}$ is ill-conditioned, and not worse than GDN theoretically. Note that setting the learning rate to $\eta_{y_2} = 1$ (as in GDN) may be infeasible in deep learning applications. Although GDN outperforms FR in the sense of theoretical local convergence, it requires strict pretraining, which is hard to achieve in practice. We will discuss this in the numerical part.

3.1.4 Preconditioning

Preconditioning is a popular method to accelerate convergence in machine learning and numerical linear algebra. Suppose that in each step of HessianFR, the gradient is preconditioned by a pair of diagonal matrices that are positive definite and bounded; then the convergence properties of HessianFR in Theorem 1 still hold. In the numerical part of this paper, we adopt the same preconditioning strategy as Adam [kingma2014adam].

Proposition 5 (Convergence of HessianFR with preconditioning). Suppose that the gradients in HessianFR are preconditioned by a pair of symmetric, bounded, positive definite matrices $(P_1, P_2)$, i.e., $\nabla_x f$ and $\nabla_y f$ are replaced by $P_1 \nabla_x f$ and $P_2 \nabla_y f$ respectively. Then, Theorem 1 still holds for HessianFR with preconditioning.

Proof. The update rule for HessianFR with preconditioning (HFR-P) is as follows:
$$x_{t+1} = x_t - \eta_x P_1 \nabla_x f(x_t, y_t),$$
$$y_{t+1} = y_t + \eta_{y_1} P_2 \nabla_y f(x_t, y_t) - \eta_{y_2} H_{yy}^{-1} P_2 \nabla_y f(x_t, y_t) + \eta_x H_{yy}^{-1} H_{yx} P_1 \nabla_x f(x_t, y_t).$$
The Jacobian of HessianFR with preconditioning is
$$J^*_{\mathrm{HFR\text{-}P}} = I - \eta_x \begin{bmatrix} I & 0 \\ -H^{*-1}_{yy} H^*_{yx} & -c_1 I + c_2 H^{*-1}_{yy} \end{bmatrix} \begin{bmatrix} P_1 & 0 \\ 0 & P_2 \end{bmatrix} \begin{bmatrix} H^*_{xx} & H^*_{xy} \\ H^*_{yx} & H^*_{yy} \end{bmatrix},$$
which is similar to
$$I - \eta_x \begin{bmatrix} P_1 (H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx}) & \ast \\ 0 & (-c_1 I + c_2 H^{*-1}_{yy}) P_2 H^*_{yy} \end{bmatrix}.$$
Note that the matrix
$$P_1 (H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx})$$
is similar to
$$P_1^{1/2} (H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx}) P_1^{1/2},$$
which is positive definite since $P_1$ is symmetric positive definite. Moreover, the matrix $(-c_1 I + c_2 H^{*-1}_{yy}) P_2 H^*_{yy}$ is similar to $P_2^{1/2} H^*_{yy} (-c_1 I + c_2 H^{*-1}_{yy}) P_2^{1/2} = P_2^{1/2} (-c_1 H^*_{yy} + c_2 I) P_2^{1/2}$, where both $P_2$ and $-c_1 H^*_{yy} + c_2 I$ are symmetric positive definite. We deduce that all eigenvalues of $(-c_1 H^*_{yy} + c_2 I) P_2$, which is similar to $P_2^{1/2} (-c_1 H^*_{yy} + c_2 I) P_2^{1/2}$, are positive and real. Therefore, the eigenvalues of the two matrices $P_1 (H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx})$ and $(-c_1 I + c_2 H^{*-1}_{yy}) P_2 H^*_{yy}$ are all real and positive. Furthermore, Theorem 1 holds for HessianFR with preconditioning if
$$\eta_x < \frac{2}{\max\{\lambda_{\max}(P_1 (H^*_{xx} - H^*_{xy} H^{*-1}_{yy} H^*_{yx})),\ \lambda_{\max}((-c_1 I + c_2 H^{*-1}_{yy}) P_2 H^*_{yy})\}}. \tag{15}$$

3.2 Stochastic Learning

Large-scale datasets and parameter sets appear in deep learning. In generative adversarial networks, we usually need to estimate millions of parameters, and the size of the dataset can be in the thousands (e.g., MNIST and CIFAR-10) or millions (e.g., CelebA). Instead of solving a deterministic optimization problem, stochastic (mini-batch) learning is adopted to lower the computational costs and the storage, while still approximating the exact solution. In this section, we mainly analyze the convergence properties of HessianFR in the stochastic setting; the analysis can also be extended to FR.

3.2.1 Motivation

We first derive the stochastic algorithm for training generative adversarial networks. For simplicity, the GANs model is formulated as
$$\min_{G \in \mathcal{G}} \max_{F \in \mathcal{F}} \frac{1}{m} \sum_{j=1}^{m} F(G(Z_j)) - \frac{1}{n} \sum_{k=1}^{n} F(X_k),$$
where $\mathcal{G}$ and $\mathcal{F}$ are the sets of generators and discriminators respectively; $\{Z_j\}_{j=1}^{m}$ is the set of noises (usually Gaussian or uniform) while $\{X_k\}_{k=1}^{n}$ is the observed data from the target distribution. Setting $f_i(G, F) = F(G(Z_j)) - F(X_k)$ with $i = (j-1)n + k$, the objective function of the GANs model is rewritten as
$$\min_{G \in \mathcal{G}} \max_{F \in \mathcal{F}} \frac{1}{mn} \sum_{i=1}^{mn} f_i(G, F).$$
Similarly, we can easily check that JS-GAN [goodfellow2014generative], WGAN (WGAN-clip [arjovsky2017wasserstein], WGAN-GP [gulrajani2017improved], WGAN-spectral [miyato2018spectral], etc.) and other GAN variants satisfy the above property. Therefore, with finite training data, the optimization problem Eq. 1 is rewritten as
$$\min_x \max_y f(x, y) = \frac{1}{n} \sum_{i=1}^{n} f_i(x, y), \tag{16}$$
where $f_i$ is the payoff function for the $i$-th training datum. The stochastic payoff function in each step is
$$\hat{f}(x, y) = \frac{1}{|S_t|} \sum_{i \in S_t} f_i(x, y), \tag{17}$$
where $S_t$ is the set of sampled indices at time $t$. The pseudocode of stochastic HessianFR for solving Eq. 16 is shown in Algorithm 2.

3.2.2 Convergence Analysis

We have proved the convergence of HessianFR for solving the deterministic min-max problem Eq. 1. In this part, we derive a similar convergence property for stochastic HessianFR in solving Eq. 16 under some mild conditions (Section 3.2.2). Suppose that $(x^*, y^*)$ is a strict local minimax of the min-max problem Eq. 16 and $\mathcal{U}$ is a neighborhood of $(x^*, y^*)$.

Assumption (Standard assumptions for smoothness). The gradient and Hessian of each objective function $f_i$ are bounded in $\mathcal{U}$, i.e., we assume that the following inequalities hold:
$$\|\nabla_x f_i(x, y)\|_2 \le \rho_x, \quad \|\nabla_y f_i(x, y)\|_2 \le \rho_y, \quad \|\nabla_{xy} f_i(x, y)\|_2 = \|\nabla_{yx} f_i(x, y)\|_2 \le \rho_{xy}, \quad \|\nabla_{yy} f_i(x, y)\|_2 \le \rho_{yy}, \tag{18}$$
for all $(x, y) \in \mathcal{U}$ and $i = 1, \ldots, n$. Furthermore, by the definition $f = \frac{1}{n}\sum_{i=1}^{n} f_i$, the function $f$ also satisfies the above inequalities, i.e., $\|\nabla_x f(x, y)\|_2 \le \rho_x$,
2022-10-05 02:42:54
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8549941182136536, "perplexity": 991.9828014134152}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337531.3/warc/CC-MAIN-20221005011205-20221005041205-00367.warc.gz"}
https://mca2021.dm.uba.ar/en/tools/view-abstract?code=3206
No date set.

## Improved regularity for a time-dependent Isaacs equation

### Giane Casari Rampasso

#### University of Campinas, Brazil - gianecr@unicamp.br

The purpose of this work is to discuss a regularity theory for viscosity solutions to a parabolic problem driven by the Isaacs operator. Under distinct smallness regimes imposed on the coefficients, our findings are three-fold; first, we produce estimates in Sobolev spaces. This includes operators with dependence on the gradient. Then we examine the regularity in Hölder spaces. Here we deal with the borderline case and, if we refine the smallness regime, estimates in $\mathcal{C}^{2+\gamma,\frac{2+\gamma}{2}}$ are produced. This is done through geometric and approximation techniques with preliminary compactness and localization arguments. Joint work with Pêdra D. S. Andrade (Centro de Investigación en Matemáticas, Mexico) and Makson S. Santos (Centro de Investigación en Matemáticas, Mexico).
2021-12-08 23:38:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21177081763744354, "perplexity": 7895.906125367281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363598.57/warc/CC-MAIN-20211208205849-20211208235849-00027.warc.gz"}
http://onelab.info/wiki/Tutorial/Laplace_equation_with_Neumann_boundary_condition
# Tutorial/Laplace equation with Neumann boundary condition

## The considered problem

We propose here to solve a first, very simple academic problem with the help of GMSH and GetDP. We consider the unit square $\Omega = [0,1]\times[0,1]$ and we seek $u$, solution of the following problem
$$\begin{cases}\label{eq:problemU} -\Delta u + u = f & \text{in } \Omega,\\ \displaystyle{\frac{\partial u}{\partial \mathbf{n}} = 0} & \text{on }\partial\Omega, \end{cases}$$
where $\mathbf{n}$ is the unit outward normal of $\Omega$ and $\displaystyle{\Delta = \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} }$ is the Laplace operator. To simplify, we suppose that the function $f$ is defined by
$$\forall (x,y)\in [0,1]^2,\qquad f(x,y) = (1+2\pi^2)\cos(\pi x)\cos(\pi y).$$
With this choice of $f$, one can easily show that the unique solution of the problem \eqref{eq:problemU} reads as
$$\forall (x,y)\in[0,1]^2, \qquad u(x,y) = \cos(\pi x)\cos(\pi y).$$
In order to solve problem \eqref{eq:problemU} with the finite element method, we write the weak formulation of problem \eqref{eq:problemU}:
$$\label{eq:WeakFormulation} \left\{\begin{array}{l} \text{Find } u\in H^1(\Omega) \text{ such that, }\\ \displaystyle{\forall v\in H^1(\Omega), \qquad \int_{\Omega} \nabla u\cdot\nabla v \;{\rm d}\Omega + \int_{\Omega}uv \;{\rm d}\Omega - \int_{\Omega}fv\;{\rm d}\Omega = 0}, \end{array}\right.$$
where $H^1(\Omega)$ is the classical Sobolev space and the functions $v$ are the test functions.

## Outline of the program

We give here a (very) detailed solution which is composed of 3 different files:

param.geo This (auxiliary) file contains the index numbers associated with the geometry. This ensures that GMSH and GetDP use the same numbering of the domains.

LaplacianNeumann.geo GMSH file, used to build the domain (the square $\Omega$). The extension ".geo" is mainly used to designate a GMSH file.

LaplacianNeumann.pro GetDP file, contains the weak formulation \eqref{eq:WeakFormulation} of the problem \eqref{eq:problemU}. The extension ".pro" is associated with GetDP files.

## param.geo: the auxiliary file

// File "param.geo"
// Numbers that characterise the interior of the square (Omega) and its boundary (Gama):
Omega = 1000;
// Three remarks on these numbers:
// - They are arbitrarily chosen.
// - They are placed in a separate file to be readable by both GMSH and GetDP.
// - "Gamma" is a special word used by GMSH/GetDP, that is why the boundary is named "Gama", with one "m"...
// Do not forget to leave a blank line at the end, this could make GMSH crash...

Direct link to file: LaplacianNeumann/GMSH_GETDP/param.geo

## LaplacianNeumann.geo: creation of the geometry with GMSH

// File "LaplacianNeumann.geo".
// We include the file containing the numbering of the geometry.
// This is useful at the end of this file, and used to "synchronise" GMSH and GetDP
Include "param.geo";

// Characteristic length of the finite elements (refinement is also possible after the mesh is built):
lc = 0.05;
// This parameter could be placed for instance in "param.geo", to separate more easily the geometry
// and the discretization parameters.
// The parameters of the border of the domain:
x_max = 1; x_min = 0;
y_max = 1; y_min = 0;

// Creation of the 4 corner points of the domain Omega (= square)
p1 = newp; Point(p1) = {x_min,y_min,0,lc};
p2 = newp; Point(p2) = {x_min,y_max,0,lc};
p3 = newp; Point(p3) = {x_max,y_max,0,lc};
p4 = newp; Point(p4) = {x_max,y_min,0,lc};
// Remarks:
// - "newp" is a GMSH function that gives the first available number for describing a point.
//   For any other entity, like Line, Surface, etc., we recommend the use of "newreg" (see below).
// - By default, GMSH creates a 3D domain. The z-coordinate must always be specified.

// The four edges of the square
L1 = newreg; Line(L1) = {p1,p2};
L2 = newreg; Line(L2) = {p2,p3};
L3 = newreg; Line(L3) = {p3,p4};
L4 = newreg; Line(L4) = {p4,p1};

// Line Loop (= boundary of the square)
Bound = newreg; Line Loop(Bound) = {L1,L2,L3,L4};

// Surface of the square
SurfaceOmega = newreg; Plane Surface(SurfaceOmega) = {Bound};

// To conclude, we define the physical entities, that is "what GetDP can see/use".
// "Omega" is a number imported from the file "param.geo".
Physical Surface(Omega) = {SurfaceOmega};
// Do not forget to leave a blank line at the end, this could make GMSH crash...

## LaplacianNeumann.pro: weak formulation

// File LaplacianNeumann.pro
// As in the .geo file, we include the file containing the numbering of the geometry.
Include "param.geo";

// Group
//======
// We now build the "Groups", that is, the geometrical entities, the different domains of computation.
// Here we only need the interior of the open domain Omega
Group{
  Omega = Region[{Omega}];
}

// Function
//=========
Function{
  // Pi is a special value in GetDP (= 3.1415...)
  Coef = Pi;
  // Definition of the source function f(x,y)
  f[] = (1+2*Coef*Coef)*Cos[Coef*X[]]*Cos[Coef*Y[]];
}
/* Remark:
- the arguments (such as "x" and "y") are not written between the brackets []. Indeed, between the brackets is written the domain of definition of the function.
- The arguments "x" and "y" are here obtained through the GetDP built-in functions X[] and Y[], which give respectively the x-coordinate and the y-coordinate of the considered point.
- To define a function globally (i.e. not only on a subdomain), we write: f[] = ...
- In our example, we could also have written f[Omega] = (1+2*Coef*Coef)*Cos[Coef*X[]]*Cos[Coef*Y[]];
*/

// Jacobian
//=========
Jacobian {
  { Name JVol ; Case { { Region All ; Jacobian Vol ; } } }
  { Name JSur ; Case { { Region All ; Jacobian Sur ; } } }
  { Name JLin ; Case { { Region All ; Jacobian Lin ; } } }
}
/* Remark: roughly speaking, we make use of...:
- Jacobian "Vol" when the integration domain is of the same dimension as the problem (e.g. a 3D domain in a 3D problem, a 2D domain (surface) in a 2D problem, a 1D domain (line) in a 1D problem)
- Jacobian "Sur" when the domain of integration has one dimension less than the global problem (e.g. a surface (2D) in a 3D problem, a line (1D) in a 2D problem).
- Jacobian "Lin" when the domain of integration has 2 dimensions less than the problem. That is, for example, a calculation on a line (1D) in a 3D problem.
- Here, we have just defined some Jacobians; you will see later that we only use the Jacobian "JVol"
*/

// Integration (parameters)
//=========================
Integration {
  { Name I1 ;
    Case {
      { Type Gauss ;
        Case {
          { GeoElement Point ; NumberOfPoints 1 ; }
          { GeoElement Line ; NumberOfPoints 4 ; }
          { GeoElement Triangle ; NumberOfPoints 6 ; }
          { GeoElement Quadrangle ; NumberOfPoints 7 ; }
          { GeoElement Tetrahedron ; NumberOfPoints 15 ; }
          { GeoElement Hexahedron ; NumberOfPoints 34 ; }
        }
      }
    }
  }
}

// There is no Constraint because of the Neumann boundary condition
// We go directly to the FunctionSpace

// FunctionSpace
//==============
FunctionSpace{
  { Name Vh; Type Form0;
    BasisFunction{
      {Name wn; NameOfCoef vn; Function BF_Node; Support Omega; Entity NodesOf[All];}
    }
  }
}
/* Explanation:
The space of approximation is called "Vh". Its type is "Form0", which means "scalar".
The basis functions are called "wn", and the associated coefficients "vn". In other words, a function "v" of "Vh" can be written as
  v(x,y) = sum_{n} vn.wn(x,y)
The functions "wn" are P1 nodal functions ("BF_Node"), supported on the domain Omega ("Support Omega").
*/

// (Weak) Formulation
//===================
Formulation{
  {Name LaplacianNeumann; Type FemEquation;
  // We decided to call the formulation "LaplacianNeumann".
  // Its type is "FemEquation" ("Finite Element Method")
    Quantity{
      {Name u; Type Local; NameOfSpace Vh;}
    }
    // Here, we introduce a quantity "u", which belongs to the function space Vh, defined above.
    Equation{
      Galerkin{ [ Grad Dof{u}, Grad{u} ]; In Omega; Jacobian JVol; Integration I1;}
      Galerkin{ [ Dof{u}, {u} ]; In Omega; Jacobian JVol; Integration I1;}
      Galerkin{ [ -f[], {u} ]; In Omega; Jacobian JVol; Integration I1;}
    }
  }
}
/* The variational formulation is written between the braces {}, after "Equation". Let us first give some vocabulary:
- "Galerkin": GetDP syntactic word. This can be translated mathematically by "integration" (see below)
- Dof{u}: "Degree Of Freedom". This is used to specify that the quantity is the unknown. If "Dof" is not written, then "u" is seen as a test function and not as the unknown. (Be careful, the unknown and the test functions have the same name in GetDP! The "Dof" is there to distinguish the unknown from the test functions)
Now, let us detail the term written between the braces {}:
  Galerkin{ [ Grad Dof{u}, Grad{u} ]; In Omega; Jacobian JVol; Integration I1;}
This can be translated mathematically by:
  \int_{\Omega} Grad( Dof{u}) . Grad( {u}) d\Omega
where "Dof{u}" is the unknown and "{u}" a test function. Note the use of the Jacobian JVol (2D problem and integration on a 2D domain). Moreover, the number of integration points is given in "I1" (see above, "Integration").
The total variational formulation can be read as: search "Dof{u}" in V_h such that, for every "{u}" in V_h,
  \int_{\Omega} Grad( Dof{u}) . Grad( {u}) d\Omega + \int_{\Omega} Dof{u}.{u} d\Omega - \int_{\Omega} f.{u} d\Omega = 0
(this is exactly the weak formulation of our problem!)
Remarks:
- Between two "Galerkin", a positive sign "+" is implicitly written
- The sum of all the "Galerkin" integrals is equal to 0 (do not forget the "minus" sign of the right hand side)
- Why do we use a volumic Jacobian even in a 2D problem? The problem is a 2 dimensional problem and the integral is defined on a 2D domain (Omega). If the integral were written on, e.g., the boundary, then the Jacobian "JSur" would have been used.
*/

// Resolution (of the problem)
//============================
Resolution{
  {Name LaplacianNeumann;
  // We chose the name LaplacianNeumann for the resolution
  // Remark: in GetDP, every entity has a name.
  // The same name can be used for different entities of different kinds, of course.
  // Here we chose the same name as the formulation, but this is just a choice, no obligation!
    System{
      {Name Syst; NameOfFormulation LaplacianNeumann;}
    }
    // A system is linked to a weak formulation.
    // Here, we only have one weak formulation, which is "LaplacianNeumann"
    Operation{
      Generate[Syst]; Solve[Syst]; SaveSolution[Syst];
    }
    /* When we launch GetDP, the program will ask the user to choose a Resolution. When calling the Resolution "LaplacianNeumann", GetDP will...
    - Generate the system
    - Solve the system
    - Save the solution
    Note that GetDP respects the order of the operations!
    */
  }
}

// Post Processing
//================
PostProcessing{
  {Name LaplacianNeumann; NameOfFormulation LaplacianNeumann;
    Quantity{
      {Name u; Value {Local{[{u}];In Omega;Jacobian JVol;}}}
    }
  }
}
/* The name of the PostProcessing is LaplacianNeumann. It calls the weak formulation LaplacianNeumann, then takes the solution "u" and computes it on all of the domain Omega, by the operation:
  Value {Local{[{u}];In Omega; Jacobian JVol;}}
This means that "u" is the interpolation of the solution "Dof{u}" computed in the weak formulation "LaplacianNeumann".
Remark: again, we chose the same name, but this is just a choice; it has no influence on the GetDP resolution.
*/

// Post Operation
//===============
PostOperation{
  {Name Map_u; NameOfPostProcessing LaplacianNeumann;
    Operation{
      Print[u, OnElementsOf Omega, File "u_Neumann.pos"];
    }
  }
}
/* The only PostOperation we write is to display "u", introduced in the PostProcessing. This PostOperation is called "Map_u". When launching GetDP, it will ask the user to choose:
- the Resolution
- the PostOperation
*/

## How to launch

All the files (.geo and .pro) must be located in the same directory.

Meshing the domain

• Graphical way: launch GMSH and open "LaplacianNeumann.geo". Then choose the "Mesh" menu and click on "2D". Finally, do not forget to save the mesh by clicking on the "Save" button.

• With a terminal: go to the directory and then type:

gmsh LaplacianNeumann.geo -algo iso -2

Note that here we chose the "iso" algorithm. One can also choose the following algorithms: del2d, frontal.

After the mesh is built, a file LaplacianNeumann.msh should have been created in the directory.

Solving the problem with GetDP

In a terminal, type (in the right directory)

getdp LaplacianNeumann.pro -solve -pos

GetDP will then propose the Resolution ("-solve") and the PostOperation ("-pos"). This should build a file called "u_Neumann.pos".

Remark: the choice can be pre-selected by typing:

getdp LaplacianNeumann.pro -solve#1 -pos#1

There exist other options, like the choice of the linear solver, etc.

Showing the result

Finally, open the file "u_Neumann.pos" with GMSH, either the graphical way ("Open...") or with a terminal (by typing "gmsh u_Neumann.pos" in a terminal in the right directory).

## Result

Here is the result obtained with GMSH/GetDP. On the left is an example of the mesh of $\Omega$ and on the right, the modulus of the solution $u$.

## All files

A zipfile containing all files (.geo, .pro, .mesh and .pos) can be downloaded here.
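As a quick cross-check on the manufactured solution above (independent of GMSH/GetDP), one can verify symbolically that $u(x,y)=\cos(\pi x)\cos(\pi y)$ satisfies both the PDE and the homogeneous Neumann condition; a small Python/sympy sketch:

```python
# Symbolic check that u = cos(pi x) cos(pi y) solves -Laplace(u) + u = f with
# f = (1 + 2 pi^2) cos(pi x) cos(pi y), and has zero normal derivative on the
# boundary of the unit square.
import sympy as sp

x, y = sp.symbols('x y')
u = sp.cos(sp.pi * x) * sp.cos(sp.pi * y)
f = (1 + 2 * sp.pi**2) * u

pde_residual = -sp.diff(u, x, 2) - sp.diff(u, y, 2) + u - f
print(sp.simplify(pde_residual))                   # 0

# Normal derivative on the four sides: du/dx at x = 0, 1 and du/dy at y = 0, 1
print([sp.diff(u, x).subs(x, v) for v in (0, 1)])  # [0, 0]
print([sp.diff(u, y).subs(y, v) for v in (0, 1)])  # [0, 0]
```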
2018-01-19 09:24:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.7704687714576721, "perplexity": 4683.538963737635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887849.3/warc/CC-MAIN-20180119085553-20180119105553-00643.warc.gz"}
http://mathoverflow.net/feeds/question/66613
A toy model for the t-section problem - MathOverflow

Question by fedja, 2011-06-01:

Let $S(x)$ be the area of the yellow curvilinear triangle. I'd like to find a graph for which $S(x)=H(x)$ where $H$ is some prescribed function (small, smooth, vanishing near the endpoints to any order you wish, etc.). Is it always possible, or are there some non-obvious hidden restrictions?

The question comes from the infamous t-section problem (if you know the areas of all sections of a symmetric convex body by the hyperplanes at some fixed small distance $t$ from the origin (so small that all sections are non-empty), can you recover the body?). The problem is open even on the plane. I do not say that this toy question is directly relevant here, but an answer to it will certainly make a few things clearer for me.

Answer by Douglas Zare, 2011-06-05:

There are restrictions.

At most points, $S'(x)$ is the length of the right leg minus the length of the left leg of the curvilinear triangle, perhaps with exceptions on a null set where there are tangencies. If $S(0)=S(1)=0$ then the lengths of these legs are at most $\sqrt{2}(1-x)$ and $\sqrt{2}x$. For almost all $0\le x \le 1$, $S'(x)$ satisfies $-\sqrt{2}\le -\sqrt{2} x \lt S'(x) \lt \sqrt{2} (1-x) \le \sqrt{2}$. This is an extra condition on $H$ which rules out some smooth small functions which have large derivatives near some points, such as $10^6 \exp(-1/(x (1-x))^2)$ for $0\lt x \lt 1$, which has a derivative of $1.132$ at $x=0.436$ although the value of the function is small.
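A quick numerical check of the answer's concrete numbers (my own verification, not part of the original thread):

```python
# Check Zare's example: g(x) = 1e6 * exp(-1/(x(1-x))^2) is small at x = 0.436,
# yet its derivative exceeds the admissible bound S'(x) < sqrt(2)*(1 - x).
import math

def g(x):
    return 1e6 * math.exp(-1.0 / (x * (1 - x)) ** 2)

def g_prime(x):
    t = x * (1 - x)
    return g(x) * 2 * (1 - 2 * x) / t**3  # chain rule: d/dx[-t^-2] = 2(1-2x)/t^3

x = 0.436
print(g(x))                    # about 0.066 -- the function value is small
print(g_prime(x))              # about 1.13  -- the derivative is not
print(math.sqrt(2) * (1 - x))  # about 0.80  -- the bound it violates
```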
2013-06-18 07:14:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8330957889556885, "perplexity": 578.9898986785911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707184996/warc/CC-MAIN-20130516122624-00026-ip-10-60-113-184.ec2.internal.warc.gz"}
https://asmedigitalcollection.asme.org/appliedmechanics/article-abstract/75/3/031012/465641/The-Effects-of-Vibrations-on-Particle-Motion-in-a?redirectedFrom=fulltext
Abstract

The effects of small vibrations on particle motion in a viscous fluid cell have been investigated experimentally and theoretically. A steel particle was suspended by a thin wire at the center of a fluid cell, and the cell was vibrated horizontally using an electromagnetic actuator and an air bearing stage. The vibration-induced particle amplitude measurements were performed for different fluid viscosities (58.0 cP and 945 cP) and different cell vibration amplitudes and frequencies. A viscous fluid model was also developed to predict the vibration-induced particle motion. This model shows the effect of fluid viscosity compared to the inviscid model, which was presented earlier by Hassan et al. (2004, "The Effects of Vibrations on Particle Motion in an Infinite Fluid Cell," ASME J. Appl. Mech., 73(1), pp. 72–78) and validated using data obtained for water. The viscous model with modified drag coefficients is shown to predict well the particle amplitude data for the fluid viscosities of 58.5 cP and 945 cP. While there is a resonance frequency corresponding to the particle peak amplitude for oil (58.0 cP), this phenomenon disappeared for glycerol (945 cP). This disappearance of the resonance phenomenon is explained by referring to the theory of mechanical vibrations of a mass-spring-damper system. For the sinusoidal particle motion in a viscous fluid, the effective drag force has been obtained, which includes the virtual mass force, the drag force proportional to the velocity, and the Basset or history force terms.
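For orientation, the named force terms correspond to those of the classical Basset-Boussinesq-Oseen equation for a small sphere in unsteady Stokes flow. A textbook one-dimensional form (added here for context; not necessarily the exact model of the paper) is

$$m_p \frac{dv}{dt} = \underbrace{6\pi\mu a\,(u-v)}_{\text{Stokes drag}} + \underbrace{\tfrac{1}{2}m_f\left(\frac{du}{dt}-\frac{dv}{dt}\right)}_{\text{virtual mass}} + \underbrace{6a^2\sqrt{\pi\rho_f\mu}\int_0^t \frac{\dot u(\tau)-\dot v(\tau)}{\sqrt{t-\tau}}\,d\tau}_{\text{Basset history force}} + \underbrace{m_f\frac{du}{dt}}_{\text{pressure gradient}},$$

where $v$ is the particle velocity, $u$ the undisturbed fluid velocity, $a$ the sphere radius, $\mu$ the dynamic viscosity, $\rho_f$ the fluid density, and $m_p$, $m_f$ the masses of the particle and of the displaced fluid.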
References

1. Gamache, O., Nakamura, H., and Kawaji, M., 2005, "Experimental Investigation of Marangoni Convection and Vibration-Induced Crystal Motion During Protein Crystal Growth," Microgravity Sci. Technol., 16(1), pp. 342–347.
2. Stokes, G. G., 1851, Mathematical and Physical Papers, Vol. 3, Johnson Reprint Corp., New York, pp. 25–35.
3. Basset, A. B., 1888, A Treatise on Hydrodynamics, Vol. 2, Deighton Bell and Co., Cambridge, UK, Chap. 21.
4. Boussinesq, J. V., 1885, "Sur la Résistance qu'oppose un Liquide Indéfini au Repos, sans Pesanteur, au Mouvement d'une Sphère Solide qu'il Mouille sur toute sa Surface," 100, pp. 935–937.
5. Oseen, C. W., 1910, "Über die Stokes'sche Formel und über eine verwandte Aufgabe in der Hydrodynamik," Ark. Mat., Astron. Fys., 6, pp. 29–45.
6. Proudman, I., and Pearson, J. R. A., 1957, "Expansion at Small Reynolds Numbers for the Flow Past a Sphere and a Cylinder," J. Fluid Mech., 2, pp. 237–262.
7. Odar, F., 1963, "Forces on a Sphere Accelerating in a Viscous Fluid," J. Fluid Mech., 18, pp. 302–314.
8. Baird, M. H. I., Senior, M. G., and Thompson, R. J., 1967, "Terminal Velocities of Spherical Particles in a Vertically Oscillating Liquid," Chem. Eng. Sci., 22, pp. 551–558.
9. Ikeda, S., 1989, "Fall Velocity of Single Spheres in Vertically Oscillating Fluids," Fluid Dyn. Res., 5, pp. 203–216.
10. Jameson, G. J., and Davidson, J. F., 1966, "The Motion of a Bubble in a Vertically Oscillating Liquid: Theory for an Inviscid Liquid and Experimental Results," Chem. Eng. Sci., 21, pp. 29–33.
11. Tunstall, E. B., and Houghton, G., 1968, "Retardation of Falling Spheres by Hydrodynamic Oscillations," Chem. Eng. Sci., 23, pp. 1067–1081.
12. Molinier, J., Kuychoukov, G., and Angelino, H., 1971, "Étude du Mouvement d'une Sphère dans un Liquide pulsé," Chem. Eng. Sci., 26, pp. 1401–1412.
13. Clift, R. J., Grace, R., and Weber, M. E., 1978, Bubbles, Drops and Particles, London.
14. Feinman, J., 1964, "An Experimental Study of the Behavior of Solid Spheres in Oscillating Liquids," Ph.D. thesis, University of Pittsburgh, Pittsburgh, PA.
15. Houghton, G., 1961, "The Behavior of Particles in a Sinusoidal Vector Field," Proc. R. Soc. London, Ser. A, 272, pp. 33–43.
16. Mei, R., Lawrence, J., and Adrian, R. J., 1991, "Unsteady Drag on a Sphere at Finite Reynolds Number With Small Fluctuations in the Free-Stream Velocity," J. Fluid Mech., 233, pp. 613–631.
17. Maxey, M., and Riley, J., 1982, "Equation for a Small Rigid Sphere in a Nonuniform Flow," Phys. Fluids, 26, pp. 883–889.
18. Tchen, C. M., 1947, "Mean Value and Correlation Problems Connected With the Motion of Small Particles Suspended in a Turbulent Fluid," Ph.D. thesis, Delft University, Delft, The Netherlands.
19. Lovalenti, M., and Brady, J., 1993, "The Hydrodynamic Force on a Rigid Particle Undergoing Arbitrary Time-Dependent Motion at Small Reynolds Number," J. Fluid Mech., 256, pp. 561–605.
20. Abbad, M., and Souhar, M., 2004, "Experimental Investigation of the History Force Acting on Oscillating Fluid Spheres at Low Reynolds Number," Phys. Fluids, 16, pp. 3808–3817.
21. Coimbra, C. F. M., and Rangel, R. H., 2001, "Spherical Particle Motion in Harmonic Stokes Flows," AIAA J., 39(9), pp. 1673–1682.
22. Hassan, S., Lyubimova, T. P., Lyubimov, D. V., and Kawaji, M., 2006, "The Effects of Vibrations on Particle Motion in an Infinite Fluid Cell," ASME J. Appl. Mech., 73(1), pp. 72–78.
23. Hassan, S., Lyubimova, T. P., Lyubimov, D. V., and Kawaji, M., 2006, "The Effects of Vibrations on Particle Motion in a Semi-Infinite Fluid Cell," ASME J. Appl. Mech., 73(4), pp. 610–621.
24. Hassan, S., Kawaji, M., Lyubimova, T. P., and Lyubimov, D. V., 2006, "Effects of Vibrations on Particle Motion Near a Wall: Existence of Attraction Force," Int. J. Multiphase Flow, 32, pp. 1027–1054.
25. Rao, S. S., 1995, Mechanical Vibrations, 3rd ed.
2022-10-05 23:04:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.470780611038208, "perplexity": 10205.444538655338}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337668.62/warc/CC-MAIN-20221005203530-20221005233530-00277.warc.gz"}
https://www.oknoname.com/EPFL/CS-439/topic/1987/exercise-10-handnotes-open-function-example/
### Exercise 10, handnotes, open function example

Hi,

On the third page of the hand notes of exercise 10, there is an example of an open convex function $$f(x)$$ and the conjugate of its conjugate. I'm wondering how the conjugate of $$f$$ is computed for $$y>0$$ in this case. I see that $$x^Ty$$ is always below $$f(x)$$ for positive $$y$$, and never intersects with $$f(x)$$, so the gap is always negative. But for values of $$x$$ slightly smaller than zero, $$x^Ty - f(x)$$ is very close to, but not equal to, zero. So it's not clear to me how $$f^*(y)$$ is computed here.

Top comment

Hello,

Contrary to the lecture notes, the handwritten notes use 'sup' instead of 'max'. This is exactly to take care of the issue you raise. Indeed, for $$f(x)=0 \text{ if } x < a$$, if $$u > 0$$, $$\max_x\, ux - f(x)$$ is something like $$u(a - \epsilon)$$ for any small positive $$\epsilon$$, so the maximum is never attained. 'sup' resolves this limit, so the conjugate value $$f^*(u)$$ would be $$ua$$. Does that help?

Hi, where do these handwritten notes come from? I never saw them and cannot find them on GitHub.
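Spelling out the computation discussed above (my own sketch, assuming the example in the notes is $$f(x)=0$$ for $$x<a$$ and $$f(x)=+\infty$$ otherwise):

$$f^*(u) = \sup_{x}\,\big(ux - f(x)\big) = \sup_{x<a} ux = \begin{cases} ua, & u \ge 0,\\ +\infty, & u < 0,\end{cases}$$

since for $$u>0$$ the supremum is approached as $$x \uparrow a$$ but never attained on the open domain, and for $$u<0$$ letting $$x \to -\infty$$ makes $$ux$$ arbitrarily large.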
2021-11-30 05:26:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8769790530204773, "perplexity": 285.1457874731677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358953.29/warc/CC-MAIN-20211130050047-20211130080047-00466.warc.gz"}
https://irzu.org/research/autodesk-forge-error-while-loading-model/
I have an IFC file that I can open with Forge Viewer version 7.71 and earlier, but not with versions 7.72 to 7.79. The viewer remains stuck on the loading spinner. I tried with both the svf and svf2 options when requesting the conversion job. Here is the error log:

Uncaught TypeError: Cannot read properties of undefined (reading 'length')
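For context, the conversion job mentioned above is requested through the Model Derivative API. A minimal sketch of what "both svf and svf2 options" means (my own addition; the token and URN are placeholders, not values from this report):

```python
import requests

TOKEN = "<access-token>"      # assumed: a valid OAuth token obtained beforehand
URN = "<base64-encoded-urn>"  # assumed: the URN of the uploaded IFC file

def request_translation(output_type):
    """Submit a Model Derivative translation job for the given output type."""
    resp = requests.post(
        "https://developer.api.autodesk.com/modelderivative/v2/designdata/job",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "input": {"urn": URN},
            "output": {"formats": [{"type": output_type, "views": ["2d", "3d"]}]},
        },
    )
    resp.raise_for_status()
    return resp.json()

for fmt in ("svf", "svf2"):  # the viewer hangs with either derivative
    print(fmt, request_translation(fmt).get("result"))
```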
2022-12-02 02:52:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2590428292751312, "perplexity": 10182.9684657297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710890.97/warc/CC-MAIN-20221202014312-20221202044312-00324.warc.gz"}
https://mathoverflow.net/questions/239459/elements-of-arbitrary-large-order-in-the-first-galois-cohomology-of-an-elliptic
# Elements of arbitrary large order in the first Galois cohomology of an elliptic curve Let $E$ be an elliptic curve over $k=\mathbb{Q}$. Consider $H^1(k,E)$. In this answer Daniel Loughran writes: "I'm pretty sure that this cohomology group has elements of arbitrarily large order". I would be happy to have an explanation of this fact and/or a reference. (I usually work with linear algebraic groups. For an abelian linear algebraic group $A$ there exists a natural number $d=d(A_{\bar k})$ such that the order of any element of $H^1(k,A)$ is $\le d$.) EDIT: Cassels, § 10 and § 27, writes that this result was first proved by Shafarevich in 1957. • For a $g$-dimensional abelian variety $A$ over a number field $k$, $A[N](\overline{k})$ has size $N^{2g}$ rather than just $N^g$ as for tori. This causes even $Ш^1_S(k,A)[p^{\infty}]$ (!) to be infinite when $g[k:\mathbf{Q}]>{\rm{rank}}(A(k))$ and $S$ is the set of places of $k$ over $p$; for details, see Example 7.5.1 of math.stanford.edu/~conrad/papers/cosetfinite.pdf – nfdc23 May 21 '16 at 16:59 • Cassels, Diophantine Equations with special reference to elliptic curves, JLMS 1966 § 27. – Felipe Voloch May 22 '16 at 2:05 • @FelipeVoloch: Thank you for the reference to the paper of Cassels. It is an excellent survey! However it seems that his proof of Shafarevich's theorem in § 27 is not complete. Indeed, it is not clear why his group Б in the exact sequence (27.1) cannot have infinitely many elements of order $m$. One needs some finiteness result for Б. – Mikhail Borovoi May 24 '16 at 18:09 Here is the kind of method I had in mind. We have the elliptic curve Kummer sequence $$0 \to E[n] \to E \to E \to 0,$$ Here I denote by $E[n]$ the $n$-torsion group scheme of $E$. Applying Galois cohomology we obtain $$0 \to E(\mathbb{Q})/nE(\mathbb{Q}) \to H^1(\mathbb{Q}, E[n]) \to H^1(\mathbb{Q}, E)[n] \to 0.$$ By the Mordell-Weil theorem, the group $E(\mathbb{Q})/nE(\mathbb{Q})$ is finite (its cardinality grows roughly like $n^{\mathrm{rank}(E)}$). Thus it suffices to show that $H^1(\mathbb{Q}, E[n])$ is infinite. I think that this should be some general property of Galois cohomology for non-trivial finite abelian group schemes over number fields, which probably you already know about. Anyway, the argument should go as follows: Choose a splitting field $k/\mathbb{Q}$ for $E[n]$. We then apply inflation-restriction to obtain $$0 \to H^1(\mathrm{Gal}(k/\mathbb{Q}), E[n]) \to H^1(\mathbb{Q}, E[n]) \to H^1(k, (\mathbb{Z}/n\mathbb{Z})^2)^{\mathrm{Gal}(k/\mathbb{Q})} \to H^2(\mathrm{Gal}(k/\mathbb{Q}), E[n]).$$ The first and the latter group are finite. The group $H^1(k, (\mathbb{Z}/n\mathbb{Z})^2)$ is clearly infinite, and I think that it is still infinite after taking Galois invariants. Though this last step is the part I did not fully check. Is it clear to you? • If one assumes $\mu_n \subseteq k$, one is left with proving that $(k^\times/n)^{\mathrm{Gal}(k/\mathbf{Q})}$ is infinite. – user19475 May 21 '16 at 17:01 • I think that one can prove that $(\mathbf{Q}^\times/n)^G = \mathbf{Q}^\times/n$ shares an infinite quotient with $(k^\times/n)^G$. – user19475 May 21 '16 at 17:14 • Let $k$ be a number field, $M$ a nonzero finite discrete ${\rm{Gal}}(\overline{k}/k)$-module, and $S$ a finite set of places $v$ of $k$ containing all $v|\infty\#M$ and $v$ where $M$ is ramified, so $M$ splits over the maximal extension $k_S/k$ unramified outside $S$. 
Hence, $H^1(k,M)$ is the direct limit along injective inflation of the $H^1(k_S/k,M)$'s, so it suffices to show $h^1(k_S/k,M)$ is unbounded as $S$ varies. By the global Euler char. formula and 9-term exact sequence, it suffices to show $\prod_{v\in S} h^2(k_v,M)$ is unbounded as $S$ varies. By local duality and Chebotarev, QED. – nfdc23 May 22 '16 at 2:56 • @nfdc23: Thank you! I have posted a separate question. I would appreciate if you write an answer rather than just a comment.... – Mikhail Borovoi May 22 '16 at 5:13 • @DanielLoughran: Now it is clear for me, see my answer! – Mikhail Borovoi May 24 '16 at 18:13 Theorem 6 of this paper gives a generalization of Shafarevich's Theorem: Theorem: Let $K$ be a Hilbertian field. Let $A_{/K}$ be a nontrivial abelian variety. If $n > 1$ is indivisible by the characteristic of $K$ and $A(K)/nA(K)$ is finite, then $H^1(K,A)$ has infinitely many elements of order $n$. As explained in the paper, the hypothesis that $A(K)/nA(K)$ be finite cannot be omitted, but the hypothesis that $n$ is indivisible by the characteristic of $K$ can be weakened to: $A(K^{\operatorname{sep}})$ contains a point of order $n$. Via the Kummer sequence, the proof quickly reduces to showing that $H^1(K,A[n])$ contains infinitely many elements of order $n$, which is related to your other question and (yet more closely) to Lior Bary-Soroker's answer to it.
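One step used implicitly above (my addition, standard Kummer theory): when $\mu_n \subseteq k$, one has
$$H^1(k, \mathbb{Z}/n\mathbb{Z}) \cong H^1(k, \mu_n) \cong k^\times/(k^\times)^n,$$
and $k^\times/(k^\times)^n$ is infinite for a number field $k$: elements with valuation exactly $1$ at infinitely many distinct primes give pairwise distinct classes, as one sees by comparing valuations mod $n$. This is the sense in which $H^1(k, (\mathbb{Z}/n\mathbb{Z})^2)$ is "clearly infinite" in the answer above.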
2020-07-07 10:14:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9262202978134155, "perplexity": 199.9020394599187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655891884.11/warc/CC-MAIN-20200707080206-20200707110206-00399.warc.gz"}
https://kliu.io/post/lizardfs-is-pretty-nice/
For the past few months, I've been running LizardFS on my home servers, providing 26 TB of error-checked, erasure-coded storage. All on mismatched disks spread over two computers. I like it. A lot.

# The Problem

As an unorganized digital packrat, I have spontaneously purchased hard drives of very varying capacities over the years. In my storage cluster, I have 250 GB disks all the way to 8 TB disks. Conventional RAID filesystems, like ZFS or software RAID, tend to not handle mismatched disks very well, and they do not let you use all the disks to their fullest capacity. With ZFS especially, it requires quite some finagling to create a storage pool from mismatched disks, and adding new disks to a RAIDZ is impossible without creating more vdevs, which incurs parity overhead. Thus, despite ZFS's legendary other features (e.g. deduplication, snapshotting, block devices…), it wouldn't work for me. BTRFS is also another option, but, uh, yeah. Let's just say, I wouldn't trust BTRFS on anything larger than a single-disk filesystem. (Heck, even my root filesystem on BTRFS crashes every few weeks with some arcane BTRFS error.)

An option that is nearly ideal is SnapRAID. With SnapRAID, you simply put files on individual disks, and SnapRAID periodically (e.g. daily) reads through the disks and calculates parity, which it puts on a parity drive. The downside, however, is that (1) this delayed parity calculation means new files are at risk for some time, and (2) the parity drive must be at least as large as the largest of your data drives. In my case, this would mean giving an entire 8 TB drive to parity, which is not really ideal. And if I were to want double parity? Two 8 TB drives. Also, a shared issue with these filesystems is that they are inherently single-computer: there's no sharing of a ZFS filesystem across multiple computers, or calculating SnapRAID parity across two PCs.

# LizardFS: The Solution

LizardFS operates in a similar manner to SnapRAID: it writes directly to ordinary filesystems on individual drives (e.g. ext4, BTRFS). It essentially provides a virtual filesystem that you can write to, and it will pick drives to write to. However, it doesn't write files whole; it breaks each into 64 MB chunks and distributes them across its drives. This means it can use all disks to their fullest capacity. To provide redundancy, it also offers goals: different options for how data should be replicated or erasure-coded. Goals can be set on individual files or folder trees, so you could have your movie collection at 2 data : 1 parity (so one drive could fail before losing data), while your personal documents are at 2x replication.

To manage this entire system, LizardFS runs a master daemon, which handles all filesystem metadata and lets clients know where to find chunks. Chunks are served and written by chunkservers, which deal directly with individual disks. These chunkservers can be on different computers, providing redundancy and scalability. There are two ways to think about chunkservers:

1. One chunkserver per computer; the chunkserver writes to a ZFS array/RAID array/etc.
2. One chunkserver per disk; the chunkserver writes to an individual disk.

I went for the latter, because I am too poor for option one, and I only have two computers. (The benefit of option one, though, is that you can literally unplug a computer and your file system is fine.) LizardFS also provides snapshot functionality, though I personally can't use it (see below).
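As a footnote to the goals discussion above, here is a tiny calculator (my own illustration, not part of LizardFS) comparing raw-storage overhead and fault tolerance for replication versus erasure-coding goals like ec(2,1) and ec(4,2):

```python
def replication(copies):
    """Raw bytes stored per logical byte, and how many drives may fail."""
    return float(copies), copies - 1

def erasure(data, parity):
    """ec(data, parity): chunks are split into `data` parts plus `parity` parts."""
    return (data + parity) / data, parity

goals = {
    "2x replication": replication(2),
    "ec(2,1)": erasure(2, 1),
    "ec(4,2)": erasure(4, 2),
}
for name, (overhead, failures) in goals.items():
    print(f"{name:15s} {overhead:.2f}x raw storage, survives {failures} failed drive(s)")

# ec(2,1) gives the same single-drive tolerance as 2x replication,
# for 1.50x overhead instead of 2.00x.
```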
This kind of architecture is similar to those of other distributed filesystems, including Ceph and GlusterFS. As far as I know, though, neither has as much flexibility in, e.g. assigning different goals to files. # How I Run LizardFS I run LizardFS on two computers. One (i7-4820k, 20 GB RAM) runs the master daemon and chunkservers, while the other (AMD Athlon X2, 6 GB RAM) runs more chunkservers and the metalogger (basically a backup for the master daemon). I orchestrate this entire process using NixOS, NixOps, and a custom LizardFS configuration module that I have written. Most of my files are at ec(2,1); this means that I can lose one drive and still reconstruct all data. Important files are at ec(4,2), so I can lose 2 drives for those. # The Downsides A few things about LizardFS make me worried about its longer-term relevance. 1. The repository hasn’t been updated since June 2018. Now, it seems that they only update their public repository on new software releases, but the fact remains that v3.13.0-rc1 has been out for nearly a year (and rather buggily at that) with no stable release. 2. There have been some bugs. Sometimes, after my computer crashes, taking down the master daemon and several chunkservers, I reboot to find missing chunks. I believe this is because mounts don’t wait for all data + EC writes to finish, which may be solved by the REDUNDANCY_LEVEL option. I’m still worried that it can happen at all, though. 3. Scalability. The master daemon stores all metadata (e.g. filenames, modtimes) in RAM, providing lightning-quick access, but at the cost of memory usage. At times, it has grown to ~1.5 GB. This also makes extensive snapshots unusable for me, as each snapshot essentially duplicates the entire metadata set, doubling memory usage. The daemon also forks on the hour, which briefly doubles memory usage. So you basically have to reserve twice the size of the metadata set in RAM. I’ve tried to work around this by forcing the daemon to use swap by setting MemoryMax= in systemd. This works surprisingly well (metadata access is still pretty fast), but it causes several-minute hangs on shutdown, so I stopped doing it. 4. Not a lot of people use it. While Ceph / GlusterFS / SnapRAID have lots of blog posts on the internet and setup guides, LizardFS has quite sparse documentation. Hence why I’m writing this post now. For more info, I would recommend wintersdark’s guide on reddit, which introduced me to LizardFS.
2022-09-28 05:57:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36246711015701294, "perplexity": 4326.8793697662295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00555.warc.gz"}
https://itprospt.com/num/18829823/find-the-most-general-antiderivative-of-the-function-check
5

# Find the most general antiderivative of the function. (Check your answer by differentiation. Use C for the constant of the antiderivative.) 5)e* = 6)F6)...

## Question

###### Find the most general antiderivative of the function. (Check your answer by differentiation. Use C for the constant of the antiderivative.) 5)e* = 6)F6)

#### Similar Solved Questions

##### Exercise 9. Determine which of the following groups is a cyclic group: Uu U(z) Uum Oclle...

##### 1 E Subm riludiau II used 1 | Meel 1 answers 3 1 1 SerPSET9 quued Volsond AD Vl 1 1 [ 1 1 19,0 cM Ftn qucsticn 1 1 part only cnanbesivou suomi 1 Is counicrclcknls Jul ajueuj Jo Need FdIG Mcmcnt 0i OSUniPhys1 3 Ju,S P ubb-1 1 ntegntor...

##### If R = 2 kΩ, C = 2 µF, ε = 19 V, Q = 35 µC, and I = 3 mA, what is the potential difference Vdiff = Vb − Va? Caution: Be careful with the signs...

##### Polyatomic Ions Homework. Name: 1) Write the chemical name for each formula (some may be molecules or acids): a) Mn(SO4) b) Mg(NO3) c) CuCO3 d) NBr3 e) H3PO4 (aq) f) BaSO3·5H2O. 2) Complete the following table by combining the cations and anions to make neutral ionic compounds. Ions: PO4, Cl, Cr, NH4, Fe2+. 3) Write the chemical names for the following polyatomic ions: CN−, ClO3−, a) SO4²⁻ (some may be molecules or acids). 4) Write the chemical formula for each chemical name; write the two ions and their...

##### (1 point) Evaluate the triple integral ∭_E xy dV where E is the solid tetrahedron with vertices (0,0,0), (3,0,0), (0,9,0), (0,0,6)...

##### A student connects resistors with unknown resistance values in series and notes that the equivalent resistance Rs = 660 Ω. She then connects the same two resistors in parallel and measures the equivalent resistance Rp = 140 Ω. What are the resistances (in Ω) of each resistor? (smaller resistance; larger resistance)...

##### Problem 6*. Let (Xi, Yi), 1 ≤ i ≤ n, be independent and identically distributed random variables with EXi = EYi = 0, EXi² = σ1², EYi² = σ2², and corr(Xi, Yi) = ρ. Compute (24) where Zi = Xi + Yi, 1 ≤ i ≤ n...

##### The diagrams below show two double-strand break DNA repair pathways: nonhomologous end joining (NHEJ) and homology directed repair (HDR). Provide a brief explanation for each repair process. (Diagram labels: nuclease-induced double-strand break; NHEJ: deletion, insertion; donor template; HDR: variable indel.)...

##### Let X be a continuous random variable with probability density function f(x) on 1 < x < 3. The variance of X is σ² = 293/1200. Let Y = Σᵢ₌₁¹² Xᵢ for 12 independent observations X1, X2, ..., X12 of this random variable. What is the variance of Y? Select one: 293/14400; 293√12/100; 293/1200; 293/(100√12); 293/100...

##### Excitatory neurotransmitters open up some sodium ion channels in the postsynaptic dendrite. This allows some sodium ions to enter the postsynaptic dendrite, making the membrane potential more positive and bringing it closer to the threshold potential. This increases the likelihood of an action potential. Inhibitory neurotransmitters open up ion channels in the postsynaptic membrane for positive potassium ions (K+) to flow out OR negative chloride ions (Cl−) to flow in. Whether potassium ions go out or chloride i...

##### A company wishes to test if the new software system installed improved the mean waiting time for a customer to talk to a service representative to less than 120 seconds. Is the following the correct hypothesis to test for this problem? H0: X̄ = 120; Ha: X̄ < 120. Yes, this hypothesis is the correct one to test / No answer text provided / No, this hypothesis is incorrectly written / No answer text provided...

##### Given the following data: $$\begin{array}{ll}{\mathrm{C}(s)+\mathrm{O}_{2}(g) \rightarrow \mathrm{CO}_{2}(g)} & {\Delta H=-393 \mathrm{kJ}} \\ {2 \mathrm{CO}(g)+\mathrm{O}_{2}(g) \rightarrow 2 \mathrm{CO}_{2}(g)} & {\Delta H=-566 \mathrm{kJ}}\end{array}$$ Calculate $\Delta H$ for the reaction $2 \mathrm{C}(s)+\mathrm{O}_{2}(g) \rightarrow 2\,\mathrm{CO}(g)$...

##### Let pt be the number of deer in a forest this year, and about half of the population is female. Hint: You can first think about this problem by imagining a population that currently has 1000 deer, and then translate the steps you did for the specific population to a population in general. Suppose every female deer gives birth to exactly one offspring each year, but due to predation, only 30% of the offspring survive their first year. Write a DDS model that describes the deer population next year. Simpli...

##### Part A. Determine the thermal energy changes of systems A through D, described below, and then rank them in order of increasing change in thermal energy. Indicate ties where appropriate.

System   W (J)   Q (J)
A        10      20
B        -10     -20
C        30      50
D        -20     -10

Rank systems from highest to lowest change. To rank items as equivalent, overlap them...

##### 2. A university has selected a few hundred students for an alternative educational program. To assess the effectiveness of the program in terms of GPA, a random sample of 30 students in the alternative educational program was selected for comparison with all students in the university. The average GPA of all students in the university was 2.50 and the average GPA of the sample of alternative education program students was 2.61 with a standard deviation of 0.6. Did the alternative educational progra...
2022-10-02 22:47:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7096024751663208, "perplexity": 10155.1980325209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00782.warc.gz"}
http://server1.wikisky.org/starview?object_type=1&object_id=6355&object_name=82+Gem&locale=EN
WIKISKY.ORG

# 82 Gem

Contents

### Images

DSS Images · Other Images

### Related articles

Speckle Observations of Binary Stars with a 0.5 m Telescope
We present 36 observations of 17 visual binaries of moderate separation (ranging from 0.15" to 0.790") made with the 50 cm Cassegrain telescope of the Jagiellonian University in Kraków. The speckle interferometry technique was combined with modest optical hardware and a standard photometric CCD camera. We used broadband VRI filters without a Risley prism to reduce differential color refraction. Thus, we performed a model analysis to investigate the influence of this effect on the results of the measurements. For binary components of spectral type O-F, the difference of three spectral classes between them should bias their relative positions by no more than a couple of tens of milliarcseconds (mas) for moderate zenith distances. The statistical analysis of our results confirmed this conclusion. A cross-spectrum approach was applied to resolve the quadrant ambiguity. Our separations have rms deviations of 0.012", and our position angles have rms deviations of 1.8°. Relative photometry in V, R, and I filters appeared to be the less accurately determined parameter. We discuss our errors in detail and compare them to other speckle data. This comparison clearly shows the quite good accuracy of our measurements. We also present an example of the enhancement of image resolution for an extended object with an angular size that is greater than the atmospheric coherence patch, using speckle interferometry techniques.

CHARM2: An updated Catalog of High Angular Resolution Measurements
We present an update of the Catalog of High Angular Resolution Measurements (CHARM; Richichi & Percheron, A&A, 386, 492), which includes results available until July 2004. CHARM2 is a compilation of direct measurements by high angular resolution methods, as well as indirect estimates of stellar diameters. Its main goal is to provide a reference list of sources which can be used for calibration and verification observations with long-baseline optical and near-IR interferometers. Single and binary stars are included, as are complex objects from circumstellar shells to extragalactic sources. The present update provides an increase of almost a factor of two over the previous edition. Additionally, it includes several corrections and improvements, as well as a cross-check with the valuable public release observations of the ESO Very Large Telescope Interferometer (VLTI). A total of 8231 entries for 3238 unique sources are now present in CHARM2. This represents an increase of a factor of 3.4 and 2.0, respectively, over the contents of the previous version of CHARM. The catalog is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/431/773

The Geneva-Copenhagen survey of the Solar neighbourhood. Ages, metallicities, and kinematic properties of ~14 000 F and G dwarfs
We present and discuss new determinations of metallicity, rotation, age, kinematics, and Galactic orbits for a complete, magnitude-limited, and kinematically unbiased sample of 16 682 nearby F and G dwarf stars.
Our ~63 000 new, accurate radial-velocity observations for nearly 13 500 stars allow identification of most of the binary stars in the sample and, together with published uvbyβ photometry, Hipparcos parallaxes, Tycho-2 proper motions, and a few earlier radial velocities, complete the kinematic information for 14 139 stars. These high-quality velocity data are supplemented by effective temperatures and metallicities newly derived from recent and/or revised calibrations. The remaining stars either lack Hipparcos data or have fast rotation. A major effort has been devoted to the determination of new isochrone ages for all stars for which this is possible. Particular attention has been given to a realistic treatment of statistical biases and error estimates, as standard techniques tend to underestimate these effects and introduce spurious features in the age distributions. Our ages agree well with those by Edvardsson et al. (1993), despite several astrophysical and computational improvements since then. We demonstrate, however, how strong observational and theoretical biases cause the distribution of the observed ages to be very different from that of the true age distribution of the sample. Among the many basic relations of the Galactic disk that can be reinvestigated from the data presented here, we revisit the metallicity distribution of the G dwarfs and the age-metallicity, age-velocity, and metallicity-velocity relations of the Solar neighbourhood. Our first results confirm the lack of metal-poor G dwarfs relative to closed-box model predictions (the "G dwarf problem"), the existence of radial metallicity gradients in the disk, the small change in mean metallicity of the thin disk since its formation and the substantial scatter in metallicity at all ages, and the continuing kinematic heating of the thin disk with an efficiency consistent with that expected for a combination of spiral arms and giant molecular clouds. Distinct features in the distribution of the V component of the space motion are extended in age and metallicity, corresponding to the effects of stochastic spiral waves rather than classical moving groups, and may complicate the identification of thick-disk stars from kinematic criteria. More advanced analyses of this rich material will require careful simulations of the selection criteria for the sample and the distribution of observational errors. Based on observations made with the Danish 1.5-m telescope at ESO, La Silla, Chile, and with the Swiss 1-m telescope at Observatoire de Haute-Provence, France. Complete Tables 1 and 2 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/418/989

Observational constraints for lithium depletion before the RGB
Precise Li abundances are determined for 54 giant stars mostly evolving across the Hertzsprung gap. We combine these data with rotational velocity and with information related to the deepening of the convective zone of the stars to analyse their link to Li dilution in the referred spectral region. A sudden decline in Li abundance paralleling the one already established in rotation is quite clear. Following similar results for other stellar luminosity classes and spectral regions, there is no linear relation between Li abundance and rotation, in spite of the fact that most of the fast rotators present high Li content. The effects of convection in driving the Li dilution are also quite clear.
Stars with high Li content are mostly those with an undeveloped convective zone, whereas stars with a developed convective zone present clear signs of Li dilution. Based on observations collected at ESO, La Silla, Chile, and at the Observatoire de Haute-Provence, France, operated by the Centre National de la Recherche Scientifique (CNRS).

Spectral Classification of the Hot Components of a Large Sample of Stars with Composite Spectra, and Implication for the Absolute Magnitudes of the Cool Supergiant Components.
A sample of 135 stars with composite spectra has been observed in the near-UV spectral region with the Aurélie spectrograph at the Observatoire de Haute-Provence. Using the spectral classifications of the cool components previously determined with near infrared spectra, we obtained reliable spectral types of the hot components of the sample systems. The hot components were isolated by the subtraction method using MK standards as surrogates of the cool components. We also derived the visual magnitude differences between the components using Willstrop's normalized stellar flux ratios. We propose a photometric model for each of these systems on the basis of our spectroscopic data and the Hipparcos data. We bring to light a discrepancy for the G supergiant primaries between the visual absolute magnitudes deduced from Hipparcos parallaxes and those tabulated by Schmidt-Kaler for the G Ib stars: we propose a scale of Mv values for these stars in composite systems. By way of statistics, about 75% of the hot components are dwarf or subgiant stars, and 25% should be giants. The distribution in spectral types is as follows: 41% of B-type components, 57% of type A, and 2% of type F; 68% of the hot components have a spectral type in the range B7 to A2. The distribution of the ΔMv values shows a maximum near 0.75 mag.

Binary Star Orbits. II. Preliminary First Orbits for 117 Systems
Orbital elements are presented for 117 binary systems with no previous orbit calculation. For nearly all of these systems, these elements must be regarded as preliminary, but the ephemerides presented here should be relatively accurate over the next several decades. Further, the analysis of these systems should highlight the need for their continued observation by dedicated programs to improve the veracity of these elements.

CHARM: A Catalog of High Angular Resolution Measurements
The Catalog of High Angular Resolution Measurements (CHARM) includes most of the measurements obtained by the techniques of lunar occultations and long-baseline interferometry at visual and infrared wavelengths, which have appeared in the literature or have otherwise been made public until mid-2001. A total of 2432 measurements of 1625 sources are included, along with extensive auxiliary information. In particular, visual and infrared photometry is included for almost all the sources. This has been partly extracted from currently available catalogs, and partly obtained specifically for CHARM. The main aim is to provide a compilation of sources which could be used as calibrators or for science verification purposes by the new generation of large ground-based facilities such as the ESO Very Large Telescope Interferometer and the Keck Interferometer. The Catalog is available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/386/492, and from the authors on CD-Rom.
Statistics of spectroscopic sub-systems in visual multiple stars
A large sample of visual multiples of spectral types F5-M has been surveyed for the presence of spectroscopic sub-systems. Some 4200 radial velocities of 574 components were measured in 1994-2000 with the correlation radial velocity meter. A total of 46 new spectroscopic orbits were computed for this sample. Physical relations are established for most of the visual systems and several optical components are identified as well. The period distribution of sub-systems has a maximum at periods from 2 to 7 days, likely explained by a combination of tidal dissipation with triple-star dynamics. The fraction of spectroscopic sub-systems among the dwarf components of close visual binaries with known orbits is similar to that of field dwarfs, from 11% to 18% per component. Sub-systems are more frequent among the components of wide visual binaries and among wide tertiary components to the known visual or spectroscopic binaries - 20% and 30%, respectively. In triple systems with both outer (visual) and inner (spectroscopic) orbits known, we find an anti-correlation between the periods of inner sub-systems and the eccentricities of outer orbits which must be related to dynamical stability constraints. Tables 1, 2, and 6 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/382/118

ICCD Speckle Observations of Binary Stars. XXIII. Measurements during 1982-1997 from Six Telescopes, with 14 New Orbits
We present 2017 observations of 1286 binary stars, observed by means of speckle interferometry using six telescopes over a 15 year period from 1982 April to 1997 June. These measurements constitute the 23rd installment in CHARA's speckle program at 2 to 4 m class telescopes and include the second major collection of measurements from the Mount Wilson 100 inch (2.5 m) Hooker Telescope. Orbital elements are also presented for 14 systems, seven of which have had no previously published orbital analyses.

Speckle Interferometry at the US Naval Observatory. II.
Position angles and separations resulting from 2406 speckle interferometric observations of 547 binary stars are tabulated. This is the second in a series of papers presenting measures obtained using the 66 cm refractor at the US Naval Observatory in Washington, DC, with an intensified CCD detector. Program stars range in separation from 0.2" to 3.8", with Δm ≤ 2.5 mag and a limiting magnitude of V = 10.0. The observation epochs run from 1993 January through 1995 August. Random errors are estimated to be 14 mas in separation and 0.52°/ρ in position angle, where ρ is the separation in arcseconds. The instrumentation and calibration are briefly described. Aspects of the data analysis related to the avoidance of systematic errors are also discussed.

A catalog of rotational and radial velocities for evolved stars
Rotational and radial velocities have been measured for about 2000 evolved stars of luminosity classes IV, III, II and Ib covering the spectral region F, G and K. The survey was carried out with the CORAVEL spectrometer.
The precision for the radial velocities is better than 0.30 km s-1, whereas for the rotational velocity measurements the uncertainties are typically 1.0 km s-1 for subgiants and giants and 2.0 km s-1 for class II giants and Ib supergiants. These data will add constraints to studies of the rotational behaviour of evolved stars as well as solid information concerning the presence of external rotational brakes, tidal interactions in evolved binary systems and on the link between rotation, chemical abundance and stellar activity. In this paper we present the rotational velocity v sin i and the mean radial velocity for the stars of luminosity classes IV, III and II. Based on observations collected at the Haute-Provence Observatory, Saint-Michel, France and at the European Southern Observatory, La Silla, Chile. Table 5 is also available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

Ultraviolet and Optical Studies of Binaries with Luminous Cool Primaries and Hot Companions. V. The Entire IUE Sample
We have obtained or retrieved IUE spectra for over 100 middle- and late-type giant and supergiant stars whose spectra indicate the presence of a hot component earlier than type F2. The hot companions are classified accurately by temperature class from their far-UV spectra. The interstellar extinction of each system and the relative luminosities of the components are derived from analysis of the UV and optical fluxes, using a grid of UV intrinsic colors for hot dwarfs. We find that there is fair agreement in general between current UV spectral classification and ground-based hot component types, in spite of the difficulties of assigning the latter. There are a few cases in which the cool component optical classifications disagree considerably with the temperature classes inferred from our analysis of UV and optical photometry. The extinction parameter agrees moderately well with other determinations of B-V color excess. Many systems are worthy of further study especially to establish their spectroscopic orbits. Further work is planned to estimate luminosities of the cool components from the data herein; in many cases, these luminosities' accuracies should be comparable to or exceed those of the Hipparcos parallaxes.

Micrometer Measures of Double Stars
Micrometer measures of 795 double stars made with the 26 inch (0.66 m) refractor of the US Naval Observatory from 1984 to 1990 are presented.

Measurements of double stars 1993.67 - 1998.13
624 micrometer measurements of 224 pairs with a 32.5 cm Cassegrain and 719 measurements of 310 double stars with a 360 mm Newtonian are given. Tables 1 to 4 are available in electronic form only at the CDS (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

ICCD Speckle Observations of Binary Stars. XVII. Measurements During 1993-1995 From the Mount Wilson 2.5-M Telescope.
Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1997AJ....114.1639H&db_key=AST

MSC - a catalogue of physical multiple stars
The MSC catalogue contains data on 612 physical multiple stars of multiplicity 3 to 7 which are hierarchical with few exceptions. Orbital periods, angular separations and mass ratios are estimated for each sub-system. Orbital elements are given when available. The catalogue can be accessed through CDS (Strasbourg). Half of the systems are within 100 pc from the Sun.
The comparison of the periods of close and wide sub-systems reveals that there is no preferred period ratio and all possible combinations of periods are found. The distribution of the logarithms of short periods is bimodal, probably due to observational selection. In 82% of triple stars the close sub-system is related to the primary of a wide pair. However, the analysis of the mass ratio distribution gives some support to the idea that component masses are independently selected from the Salpeter mass function. Orbits of wide and close sub-systems are not always coplanar, although the corresponding orbital angular momentum vectors do show a weak tendency of alignment. Some observational programs based on the MSC are suggested. Tables 2 and 3 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

Spectral classifications in the near infrared of stars with composite spectra. II. Study of a sample of 180 stars
A sample of 180 supposedly composite-spectrum stars has been studied on the basis of spectra obtained in the near infrared (8370-8780 Angstroms) at a dispersion of 33 Angstroms/mm. The objective was to study the cooler components of the systems. Of our sample, 120 are true composite spectra, 35 are hot spectra of types B, F and 25 are Am stars. We find a strong concentration of the cooler components of the composite spectra around G8III. In view of the difficulty of classifying composite spectra, because of the superposition of an early type dwarf and a late type giant or supergiant spectrum, we have made several tests to control the classification based upon the infrared region. Since all tests gave positive results, we conclude that our classifications can be considered as being both reliable and homogeneous. Table 1 is also available electronically at the CDS via anonymous ftp (130.79.128.5) or http://cdsweb.u-strasbg.fr/Abstracts.html Based upon observations carried out at Observatoire de Haute-Provence (CNRS).

The Relation between Rotational Velocities and Spectral Peculiarities among A-Type Stars
Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1995ApJS...99..135A&db_key=AST

Vitesses radiales. Catalogue WEB: Wilson Evans Batten. Subtitle: Radial velocities: The Wilson-Evans-Batten catalogue.
We give a common version of the two catalogues of Mean Radial Velocities by Wilson (1963) and Evans (1978) to which we have added the catalogue of spectroscopic binary systems (Batten et al. 1989). For each star, when possible, we give: 1) an acronym to enter SIMBAD (Set of Identifications, Measurements and Bibliography for Astronomical Data) of the CDS (Centre de Données Astronomiques de Strasbourg). 2) the number HIC of the HIPPARCOS catalogue (Turon 1992). 3) the CCDM number (Catalogue des Composantes des étoiles Doubles et Multiples) by Dommanget & Nys (1994). For the cluster stars, a precise study has been done on the identification numbers. Numerous remarks point out the problems we have had to deal with.

Observations of lunar occultations at Observatoire de la Côte d'Azur.
The results of 114 photoelectric observations are analysed. They were obtained at the two stations of O.C.A.: Calern and Nice Observatories. We first give a short description of the two photometers used and then we present the interactive reduction method. The astrometric and photometric parameters are derived from each light-curve.
Finally we summarize the results and discuss the non point-like occulted stars: we present 38 double star measurements, for 16 objects, and 11 determinations of angular diameters, for 4 objects.

Binary star speckle measurements during 1989-1993 from the SAO 6 M and 1 M telescopes in Zelenchuk
We have continued to survey visual and interferometric binary stars with significant orbital motion by means of the speckle method at the telescopes of the Special Astrophysical Observatory (SAO) in Zelenchuk. Here we present the lists of 267 speckle observations made with the 6 m and the 1 m telescopes in the period May 1989-November 1993.

The photometric variability of K giants
We have photometrically monitored 49 of the more than 200 K giants in the Yale Catalog of Bright Stars (YCBS) which are named or suspected variable stars. Only two (HR 3275 and HR 5219) are clearly variable; a few more program stars and K- and M-giant comparison stars are marginally variable. Most of these appear to be RS Canum Venaticorum or SR variables.

Television speckle interferometry of binary stars at the Zeiss-1000 telescope.
Not Available

Ultraviolet and optical studies of binaries with luminous cool primaries and hot companions. IV - Further IUE detections
We have obtained IUE spectra for 31 middle and late-type giant and supergiant stars whose TD-1 fluxes or ground-based spectra indicate the presence of a hot component, or whose radial velocities indicate an unseen component. Stellar components earlier than type F1 were detected in 22 cases. While 20 of the hot secondaries are seen weakly in optical spectra, two are UV discoveries: HD 58134 and HD 183864. The hot companions are classified accurately by temperature class from their far-UV spectra. The interstellar extinction of each system and the relative luminosities of the components are derived from the UV and optical fluxes, using a new grid of UV intrinsic colors for hot dwarfs. We find that many giant stars apparently have companions which are too hot and hence too luminous for consistency with the primary's spectral classification.

ICCD speckle observations of binary stars. X - A further survey for duplicity among the bright stars
Speckle interferometric observations are reported for 1123 stars selected from the Yale Bright Star Catalogue (BSC) in a continuing effort to detect new binaries among the bright stars. Thirty-two previously unresolved binaries have been detected, including companions to Xi UMa and 15 S Mon. Measures of 107 previously resolved systems, many of which resulted from earlier speckle observations, are also presented. No evidence of duplicity within a specific (m, Δm, ρ) window of detectability was found for 984 bright stars. Many of the systems discovered earlier have shown significant orbital motions, and we present preliminary orbital elements for six binaries. This effort has resulted in the discovery of 75 new, bright binaries. We consider some aspects of the duplicity frequencies among the diverse spectral and luminosity classes represented in this sample. We anticipate that the completion of a speckle survey of the BSC would lead to the discovery of at least 200 additional binary systems with angular separations mostly below 0.20 arcsec. Many of these will have periods of the order of one decade and will be accessible to complementary radial velocity programs of enhanced precision.
Further IUE Detections of Hot Companions to Cool Stars
Many late-type giants and supergiants have companion stars still on the upper main sequence, but barely detectable in ground-based observations. In 1989-90 we continued our surveys of suspected hot companions with the IUE satellite. Of 32 spectrum binaries or SB1 systems observed, 25 reveal clear signatures in the UV of a B or A type component. The far-UV spectra are easily classified by temperature class with respect to IUE standard stars. Some of the more significant new results:

HD no.    V      ground spectral type   UV spectral type   binary? Note
25555     5.47   G0 III + A4            dB8+               var
32835/6   7.65   F5 V + A               dB8.5              a
37269AB   5.40   G5 III: + A3           d:A0               var
49126     7.28   F8 IV-V + B9.5         dB8.5              a
52690     6.55   M1 Ib + A-B            gB8-
58134     7.64:  G5 Ib                  g:A1-              p
63208     6.18   G2 III + A4            gB9.5              var?
167516    8.46   G5 III + A             --                 b
183864    7.32   G2 Ib                  d:A0               296.0d
218600    8.39   F2 Ib + A              --                 b

In two systems the ground-based luminosity classifications of their primaries are seriously inconsistent with their upper main sequence companions (note a). Two spectrum binaries with supposedly A-type secondaries show no evidence of such in the UV (note b). Several other cases demonstrate a trend in our surveys: although some classifications are confirmed, we frequently find that the IUE spectral class is hotter than the ground estimate by 2 or more subclasses. This work is supported under NASA contract NAS 5-28749. SBP is a staff member of the Space Telescope Science Institute. TBA is a member of the GHRS Science Team.

The correction in right ascension of 508 stars determined with the PMO photoelectric transit instrument.
Not Available

UBV photometry of stars whose positions are accurately known. VI
Results are presented from UBV photometric observations of 1000 stars of the Bright Star Catalogue and the faint extension of the FK5. Observations were carried out between July 1987 and December 1990 with the 40-cm Cassegrain telescope of the Kvistaberg Observatory.

ICCD speckle observations of binary stars. V - Measurements during 1988-1989 from the Kitt Peak and the Cerro Tololo 4 M telescopes
Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1990AJ.....99..965M&db_key=AST
2019-10-19 12:50:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5715447068214417, "perplexity": 6507.054952128148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986693979.65/warc/CC-MAIN-20191019114429-20191019141929-00278.warc.gz"}