https://tex.stackexchange.com/questions/312111/how-to-crop-background-color-in-listings-to-the-longest-line-in-the-code
How to crop background color in listings to the longest line in the code?

The listings package allows one to put a background color behind a code listing. The problem is that the background color extends to fill the whole text area the listing happens to sit in. This can look ugly if the code is narrow in width, but the area it is in is wide. Here is an MWE:

\documentclass[11pt]{article}
\usepackage[T1]{fontenc}
\usepackage{color}
\usepackage{listings}
\definecolor{bg}{RGB}{240,240,240}
\begin{document}
\begin{tabular}{|p{0.8\textwidth}|p{.2\textwidth}|}\hline
\begin{lstlisting}[language=Mathematica,backgroundcolor=\color{bg}]
f[x_] := Sin[x];
Plot[f[x],{x,-Pi,Pi}]
\end{lstlisting}
& plot command \\\hline
\end{tabular}
\end{document}

This gives a background that spans the whole cell. What I'd like to get instead is a background cropped to the width of the code (ps. I used my highly developed skills in paint.exe to mock that up manually). The problem is that one does not know in advance how "wide" the code will be, in order to maybe put a minipage of that specific width around it, or a frame, or some such trick, to limit the area. Are there any tricks one can use to solve this? Does LaTeX have a command to find the length of the longest line in the listing? If so, one could use this length (plus a little bit more) to make a frame or minipage with it.

• I don't know any automated way to do this, but you can limit the width of the background with the lstlisting parameter linewidth=14em. May 29, 2016 at 20:46

You can use the tcblisting environment from the tcolorbox package. With the hbox key it will be sized according to the dimensions of the content.

\documentclass[11pt]{article}
\usepackage[T1]{fontenc}
\usepackage{color}
\usepackage{listings}
\usepackage{tcolorbox}
\tcbuselibrary{listings}
\definecolor{bg}{RGB}{240,240,240}
\begin{document}
\begin{tabular}{|p{0.8\textwidth}|p{.2\textwidth}|}\hline
\begin{lstlisting}[language=Mathematica,backgroundcolor=\color{bg}]
f[x_] := Sin[x];
Plot[f[x],{x,-Pi,Pi}]
\end{lstlisting}
& plot command \\\hline
\end{tabular}
\begin{tabular}{|p{0.8\textwidth}|p{.2\textwidth}|}\hline
\begin{tcblisting}{colback=bg,size=minimal,hbox,listing only,listing options={language=Mathematica}}
f[x_] := Sin[x];
Plot[f[x],{x,-Pi,Pi}]
\end{tcblisting}
& plot command \\\hline
\end{tabular}
\end{document}

• Thanks. Nice solution. For some reason it does not work in longtable, which I am using, so I posted a separate question on that issue here. I am actually using longtable and not tabular. May 30, 2016 at 17:49
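As a follow-up to the question's own idea of measuring the longest line, here is a sketch (not from the original thread) that combines the standard \settowidth command with the linewidth key mentioned in the first comment; the length name \codewidth, the added basicstyle=\ttfamily, and the 1em of slack are illustrative choices, and the measured string has to be kept in sync with the listing by hand:

```latex
\documentclass[11pt]{article}
\usepackage[T1]{fontenc}
\usepackage{color}
\usepackage{listings}
\definecolor{bg}{RGB}{240,240,240}
\newlength{\codewidth}
\begin{document}
% Measure the widest source line by hand (braces must be escaped here),
% then stop the background just past it via the linewidth key.
\settowidth{\codewidth}{\ttfamily Plot[f[x],\{x,-Pi,Pi\}]}
\begin{tabular}{|p{0.8\textwidth}|p{.2\textwidth}|}\hline
\begin{lstlisting}[language=Mathematica,basicstyle=\ttfamily,
    backgroundcolor=\color{bg},
    linewidth=\dimexpr\codewidth+1em\relax]
f[x_] := Sin[x];
Plot[f[x],{x,-Pi,Pi}]
\end{lstlisting}
& plot command \\\hline
\end{tabular}
\end{document}
```

The tcblisting answer above avoids this manual bookkeeping, since its hbox key measures the content automatically.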
https://codegolf.stackexchange.com/questions/37484/how-to-slow-down-a-drunkard-on-his-way-home
# How to slow down a drunkard on his way home

Consider a square n by n grid graph like the one pictured in the question (it is important to notice that that example graph is 11 by 11). At any given point a man stands at an intersection, and he only ever moves vertically or horizontally, one step at a time, to the next intersection. Sadly he has drunk a little too much, so he chooses the direction he moves randomly from the up to 4 possible directions (up, down, left, right). It is only "up to" 4 because if he is standing at a wall he has just 3 options, of course, and in a corner only 2. He starts in the bottom left hand corner and his goal is to get home, which is the top right hand corner. The time is simply the number of steps it takes him.

However, you are a malicious adversary who wants him to get home as slowly as possible. You can delete any number of edges from the graph at any time during his walk. The only restrictions are that you must always leave some way for him to get home and you can't delete an edge he has already used.

The challenge is to devise as malicious an adversary as possible and then test it on a 20 by 20 graph (originally 100 by 100; see Edit 1 below) with a random drunken walker. Your score is simply the average time it takes the random walker to get home over 1000 runs (originally 10; see the Addendum). You can use any language and libraries you like as long as they are freely available and easily installable in Linux.

What do I need to implement? You should implement code for the random walker and also for the adversary, and the code should be combined so that the output when run is simply the average of 1000 runs using your adversary code. The random walker code should be very simple to write, as he just chooses from (x-1, y), (x+1, y), (x, y-1), and (x, y+1), making sure that none of those have been deleted or are out of range. The adversary code is of course more difficult: it also needs to remember which edges the drunkard has already traversed so it doesn't try to delete any of them, and to make sure there is still a route home for the drunkard, which is a little trickier to do quickly.

Addendum. 10 runs isn't really enough, but I didn't want to punish people who managed to get really long walks. I have now increased it to 1000 due to popular request. However, if your walk is so long you can't do 1000 runs in a realistic amount of time, please just report the result for the maximum number of runs you can.

High score table for 100 by 100.

• 976124.754 by Optimizer.
• 103000363.218 by Peter Taylor.

Edit 1. Changed the graph size to 20 by 20 to help the running time of people's tests. I will make a new high score table for that size as people submit the scores.

High score table for 20 by 20.

• 230,794.38 (100k runs) by justhalf
• 227,934 by Sparr
• 213,000 (approx) by Peter Taylor
• 199,094.3 by stokastic
• 188,000 (approx) by James_pic
• 64,281 by Geobits

• I don't understand; can't you just delete all the edges at the beginning except the ones that form the longest path? – Peter Olson Sep 8 '14 at 21:16
• I don't see any rule showing that the drunkard can't re-walk the same edge twice. If he can take the same path between two points twice, and chooses turns at random, then logically isn't the graph with the longest average (random) traversal the one with the most edges? That is, wouldn't the optimal (longest) graph be the one with no deleted edges? – millinon Sep 8 '14 at 21:19
• I am not a fan of requiring every entry to reinvent the wheel (walker). If someone posts a test harness/framework then I will upvote them and use it.
– Sparr Sep 8 '14 at 21:32 • The advantage of removing a part of a path to make him go back to take the long way around is completely lost when his path is random; supposedly it's equally likely that he'll turn back at some point without needing you to remove an edge. I'd like to see some test data showing the average time with no edges removed, and then with certain edges removed as you seem to suggest. As far as this challenge, I think it would be much more interesting if the drunkard's path were deterministic. – millinon Sep 8 '14 at 21:42 • 10 rounds is not nearly enough. Even with a static 10x10 maze, let alone an intelligent adversary and a 100x100 maze, the standard deviation is around 50% of the average case. I'm running 10000 rounds and I still wouldn't consider the results comparison-worthy. – Sparr Sep 8 '14 at 23:03 # 230,794.38 on 20x20, 100k runs Latest Update: I finally built perfect dynamic 2-path solution. I said perfect since the previous version is actually not symmetric, it was easier to get longer path if the drunkard took one path over the other. The current one is symmetric, so it can get higher expected number of steps. After few trials, it seems to be around 230k, an improvement over the previous one which is about 228k. But statistically speaking those numbers are still within their huge deviation, so I don't claim that this is significantly better, but I believe this should be better than the previous version. The code is at the bottom of this post. It is updated so that it's much faster than the previous version, completing 1000 runs in 23s. Below is sample run and sample maze: Perfect Walker Average: 230794.384 Max: 1514506 Min:25860 Completed in 2317.374s _ _ _ _ _ _ _ _ _ _ _ _. | | | | | | | | | | | | | | | _ _ _ _ | | | | | | | | | | | | | | | |_ _ _ _ | | | | | | | | | | | | | | | _ _ _ _| | | | | | | | | | | | | | | | |_ _ _ _ | | | | | | | | | | | | | | | _ _ _ _| | | | | | | | | | | | | | | | |_ _ _ _ | | | | | | | | | | | | | | | _ _ _ _| | | | | | | | | | | | | | |_| |_ _ _ _ | | | | | | | | | | | | | _ _ _ _ _ _| | | | | | | | | | | | | | |_ _ _ _ _ _ | | | | | | | | | | | | | _ _ _ _ _ _| | | | | | | | | | | | | | |_ _ _ _ _ _ | | | | | | | | | | | | | _ _ _ _ _ _| | | | | | |_| |_| |_| |_| |_ _ _ _ _ _ | | | | | _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | | | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ | | | | | _ _ _ _ _ _ _ _ _ _ _ _ _ _| | |_| |_| |_ _ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| ### Previous submissions Finally I can match Sparr's result! =D Based on my previous experiments (see bottom of this post), the best strategy is to have double path and close one as the drunkard reaches any of them, and the variable comes from how good we can dynamically predict where the drunkard will go as to increase the chance of him getting into longer path. So based on my DOUBLE_PATH strategy, I built another one, which changes the maze (my DOUBLE_PATH maze was easily modifiable) depending on the drunkard movement. As he takes a path with more than one available options, I will close the paths so as to leave only two possible options (one from which he came, another the untravelled). This sounds similar to what Sparr has achieved, as the result shows. 
The difference with his is too small for it to be considered better, but I would say that my approach is more dynamic than him, since my maze is more modifiable than Sparr's =) The result with a sample final maze: EXTREME_DOUBLE_PATH Average: 228034.89 Max: 1050816 Min:34170 Completed in 396.728s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| _ _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| _ _ _ _ _| |_ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| ## Experiments Section The best turns out to be the same strategy as stokastic, I take pride in experimenting using various strategies and printing nice outputs :) Each of the printed maze below is the last maze after the drunkard has reached home, so they might be slightly different from run to run due to the randomness in the drunkard movement and dinamicity of the adversary. I'll describe each strategy: ### Single Path This is the simplest approach, which will create a single path from entry to exit. SINGLE_PATH Average: 162621.612 Max: 956694 Min:14838 Completed in 149.430s _ _ _ _ _ _ _ _ _ _ | |_| |_| |_| |_| |_| |_| |_| |_| |_| | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| ### Island (level 0) This is an approach that tries to trap the drunkard in an almost isolated island. Doesn't work as good as I expected, but this is one of my first ideas, so I include it. 
There are two paths leading to the exit, and when the drunkard gets near to one of them, the adversary closes it, forcing him to find the other exit (and possibly gets trapped again in the island) ISLAND Average: 74626.070 Max: 428560 Min:1528 Completed in 122.512s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| ### Double Path This is the most discussed strategy, which is to have two equal length paths to the exit, and close one of them as the drunkard gets near to one of them. DOUBLE_PATH Average: 197743.472 Max: 1443406 Min:21516 Completed in 308.177s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| _ _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ _ |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| ### Island (level 1) Inspired by the multiple paths of island and the high walk count in single path, we connect the island to the exit and make single path maze in the island, creating in total three paths to exit, and similar to previous case, close any of the exit as the drunkard gets near. This works slightly better than pure single path, but still doesn't defeat the double path. ISLAND Average: 166265.132 Max: 1162966 Min:19544 Completed in 471.982s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | _ _ _ _ _ _ _ _ _|_ | | |_| |_| |_| |_| |_| |_| |_| |_| | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | |_|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| ### Island (level 2) Trying to expand the previous idea, I created nested island, creating in total five paths, but it doesn't seem to work that well. 
ISLAND Average: 164222.712 Max: 927608 Min:22024 Completed in 793.591s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |_ | | _ _ _ _ _ _ _ _|_| | | | |_| |_| |_| |_| |_| |_| |_| | | | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | | | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | |_|_|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| ### Island (level 3) Noticing that double path actually works better than single path, let's make the island in double path! The result is an improvement over Island (level 1), but it still doesn't beat pure double path. For comparison, the result for double path of the size of the island is 131,134.42 moves on average. So this does add quite significant number of moves (around 40k), but not enough to beat double path. ISLAND Average: 171730.090 Max: 769080 Min:29760 Completed in 587.646s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |_ | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | _ _ _ _ _ _ _ _| |_ _ _ _ _ _ _ _ | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | |_|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| ### Island (level 4) Again, experimenting with nested island, and again it doesn't work so well. ISLAND Average: 149723.068 Max: 622106 Min:25752 Completed in 830.889s _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ |_| | | _ _ _ _ _ _ _ _ _ _ _ _ _ _ _|_| | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ | | | _ _ _ _ _ _ _| |_ _ _ _ _ _ _ | | | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _| | | | | _ _ _ _ _ _ _| |_ _ _ _ _ _ _ | | | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _| | | | | _ _ _ _ _ _ _| |_ _ _ _ _ _ _ | | | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _| | | | | _ _ _ _ _ _ _| |_ _ _ _ _ _ _ | | | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _| | | | | _ _ _ _ _ _ _| |_ _ _ _ _ _ _ | | | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _| | | | | _ _ _ _ _ _ _| |_ _ _ _ _ _ _ | | | | |_ _ _ _ _ _ _ _ _ _ _ _ _ _| | | | | _ _ _ _ _ _ _| |_ _ _ _ _ _ _ | | | |_|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | | |_|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| | |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _| ## Conclusion All in all, this proves that having a single long path from drunkard current position to the exit works best, which is achieved by the double path strategy, since after closing an exit, the drunkard will have to travel the maximum distance possible to get to the exit. This further hints that the basic strategy should still be double path, and we can only modify how dynamic the paths are created, which has been done by Sparr. So I believe his strategy is the way to go! 
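One way to see where numbers of this size come from (a toy abstraction, not part of this answer's code below): a double-path maze is, roughly speaking, a single cycle through all 400 cells with the start and home about half the cycle apart, and the adversary's only move is to cut whichever home-adjacent edge the walker touches first, leaving one long corridor he must then traverse end to end. Simulating that cycle model directly lands in the same ~200k range as the double-path scores reported in this thread; the constants and run count below are arbitrary.

```python
import random

# Toy model of the double-path strategy: the 20x20 double-path maze is, in
# effect, a 400-cell cycle with home roughly halfway around it from the start.
CELLS = 400
HOME = CELLS // 2

def one_run():
    cut = None          # the home-adjacent edge the adversary has deleted, if any
    pos, steps = 0, 0
    while pos != HOME:
        moves = [(pos - 1) % CELLS, (pos + 1) % CELLS]
        moves = [m for m in moves if {pos, m} != cut]   # respect the deleted edge
        pos = random.choice(moves)
        steps += 1
        # Adversary: the first time the walker stands next to home, seal that
        # side, so he must go all the way back round to enter from the other side.
        if cut is None and pos in ((HOME - 1) % CELLS, (HOME + 1) % CELLS):
            cut = {pos, HOME}
    return steps

runs = 100   # slow in plain CPython: roughly a minute
print(sum(one_run() for _ in range(runs)) / runs)   # typically around 200,000
```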
## Code import java.util.ArrayList; import java.util.Arrays; import java.util.List; import java.util.Queue; import java.util.TreeSet; public class Walker { enum Strategy{ SINGLE_PATH, ISLAND, DOUBLE_PATH, EXTREME_DOUBLE_PATH, PERFECT_DOUBLE_PATH, } int width,height; int x,y; //walker's position int dX,dY; //destination Point[][] points; int stepCount = 0; public static void main(String[]args){ int side = 20; // runOnce(side, Strategy.EXTREME_DOUBLE_PATH, 0); runOnce(side, Strategy.PERFECT_DOUBLE_PATH, 0); // for(Strategy strategy: Strategy.values()){ // runOnce(side, strategy, 0); // } // runOnce(side, Strategy.ISLAND, 1); // runOnce(side, Strategy.ISLAND, 2); // Scanner scanner = new Scanner(System.in); // System.out.println("Enter side, strategy (SINGLE_PATH, ISLAND, DOUBLE_PATH, EXTREME_DOUBLE_PATH), and level:"); // while(scanner.hasNext()){ // side = scanner.nextInt(); // Strategy strategy = Strategy.valueOf(scanner.next()); // int level = scanner.nextInt(); // scanner.nextLine(); // runOnce(side, strategy, level); // System.out.println("Enter side, strategy (SINGLE_PATH, ISLAND, DOUBLE_PATH, EXTREME_DOUBLE_PATH), and level:"); // } // scanner.close(); } private static Walker runOnce(int side, Strategy strategy, int level) { Walker walker = null; long total = 0; int max = 0; int min = Integer.MAX_VALUE; double count = 1000; long start = System.currentTimeMillis(); for(int i=0; i<count; i++){ walker = new Walker(0,0,side,side,side-1,side-1, strategy, level, false); total += walker.stepCount; max = Math.max(walker.stepCount, max); min = Math.min(walker.stepCount, min); // System.out.println("Iteration "+i+": "+walker.stepCount); } System.out.printf("%s\nAverage: %.3f\nMax: %d\nMin:%d\n",strategy, total/count, max, min); System.out.printf("Completed in %.3fs\n", (System.currentTimeMillis()-start)/1000.0); walker.printPath(); return walker; } private void createIsland(int botLeftX, int botLeftY, int topRightX, int topRightY){ for(int i=botLeftY+1; i<topRightY; i++){ if(i>botLeftY+1) deletePath(points[botLeftX][i].right()); if(i<topRightY-1) deletePath(points[topRightX][i].left()); } for(int i=botLeftX+1; i<topRightX; i++){ if(i>botLeftX+1) deletePath(points[i][botLeftY].up()); if(i<topRightX-1) deletePath(points[i][topRightY].down()); } } private void createSinglePath(int botLeftX, int botLeftY, int topRightX, int topRightY){ for(int i=botLeftY; i<topRightY; i++){ if(i==topRightY-1 && (topRightY+1-botLeftY)%2==0){ for(int j=botLeftX; j<topRightX; j++){ if(j==topRightX-1 && (j-botLeftX)%2==0){ deletePath(points[topRightX][topRightY].down()); } else { deletePath(points[j][topRightY-1+((j-botLeftX)%2)].right()); } } } else { for(int j=botLeftX+(i-botLeftY)%2; j<topRightX+((i-botLeftY)%2); j++){ deletePath(points[j][i].up()); } } } } private void createDoublePath(int botLeftX, int botLeftY, int topRightX, int topRightY){ for(int i=botLeftY; i<topRightY; i++){ if(i>botLeftY && (width%4!=1 || i<topRightY-1)) deletePath(points[width/2-1][i].right()); if(i==topRightY-1 && (topRightY+1-botLeftY)%2==1){ for(int j=botLeftX; j<topRightX; j++){ if((j-botLeftX)%2==0 || j<topRightX-1){ deletePath(points[j][topRightY-1+((j-botLeftX)%2)].right()); } else { deletePath(points[topRightX-1][topRightY-1].right()); } } } else { if((i-botLeftY)%2==0){ for(int j=botLeftX+1; j<topRightX; j++){ deletePath(points[j][i].up()); } } else { for(int j=botLeftX; j<topRightX+1; j++){ if(j!=width/2 && j!=width/2-1){ deletePath(points[j][i].up()); } } } } } } public Walker(int startingX,int startingY, int Width, int Height, 
int destinationX, int destinationY, Strategy strategy, int level, boolean animate){ width = Width; height = Height; dX = destinationX; dY = destinationY; x=startingX; y=startingY; points = new Point[width][height]; for(int y=0; y<height; y++){ for(int x=0; x<width; x++){ points[x][y] = new Point(x,y); } } for(int y=0; y<height; y++){ for(int x=0; x<width; x++){ if(x<width-1) new Edge(points[x][y], points[x+1][y]); if(y<height-1) new Edge(points[x][y], points[x][y+1]); } } if(strategy == Strategy.SINGLE_PATH) createSinglePath(0,0,width-1,height-1); if(strategy == Strategy.DOUBLE_PATH) createDoublePath(0,0,width-1,height-1); List<EdgeList> edgeLists = new ArrayList<EdgeList>(); if(strategy == Strategy.ISLAND){ List<Edge> edges = new ArrayList<Edge>(); if(level==0){ createIsland(0,0,width-1,height-1); deletePath(points[width-2][height-2].right()); deletePath(points[width-2][height-2].up()); } else { for(int i=0; i<level; i++){ createIsland(i,i,width-1-i, height-1-i); } createDoublePath(level,level,width-1-level,height-1-level); for(int i=height-1; i>=height-level; i--){ } } } int[] availableVerticals = new int[height]; if(strategy == Strategy.EXTREME_DOUBLE_PATH){ for(int i=1; i<width-1; i++){ deletePath(points[i][0].up()); } availableVerticals[0] = 2; for(int i=1; i<height; i++){ availableVerticals[i] = width; } } boolean[][] available = new boolean[width][height]; if(strategy == Strategy.PERFECT_DOUBLE_PATH){ for(int x=0; x<width; x++){ for(int y=0; y<height; y++){ if(x%2==1 && y%2==1){ available[x][y] = true; } else { available[x][y] = false; } } } } // printPath(); while(!walk()){ if(strategy == Strategy.ISLAND){ if(x==y && (x==1 || (x>=2 && x<=level))){ if(!hasBeenWalked(points[x][x].down())){ deletePath(points[x][x].down()); } else if(!hasBeenWalked(points[x][x].left())){ deletePath(points[x][x].left()); } } } if(strategy == Strategy.EXTREME_DOUBLE_PATH){ Point cur = points[x][y]; int untravelled = 0; for(Edge edge: cur.edges) if(edge!=null && !edge.walked) untravelled++; if(untravelled>1){ if(cur.up()!=null && availableVerticals[y]>2 && !cur.up().walked){ deletePath(cur.up()); availableVerticals[y]--; } if(cur.down()!=null && !cur.down().walked){ deletePath(cur.down()); availableVerticals[y-1]--; } if(cur.up()!=null && cur.left()!=null && !cur.left().walked){ deletePath(cur.left()); deletePath(points[x][y+1].left()); } if(cur.up()!=null && cur.right()!=null && !cur.right().walked){ deletePath(cur.right()); if(y<height-1) deletePath(points[x][y+1].right()); } } } if(strategy == Strategy.PERFECT_DOUBLE_PATH){ Point cur = points[x][y]; int untravelled = 0; for(Edge edge: cur.edges) if(edge!=null && !edge.walked) untravelled++; if(x%2!=1 || y%2!=1){ if(untravelled>1){ if(cur.down()==null && hasBeenWalked(cur.right())){ if(canBeDeleted(cur.up())) deletePath(cur.up()); } if(cur.down()==null && hasBeenWalked(cur.left())){ if(x%2==0 && y%2==1 && canBeDeleted(cur.right())) deletePath(cur.right()); else if(cur.right()!=null && canBeDeleted(cur.up())) deletePath(cur.up()); } if(cur.left()==null && hasBeenWalked(cur.up())){ if(canBeDeleted(cur.right())) deletePath(cur.right()); } if(cur.left()==null && hasBeenWalked(cur.down())){ if(x%2==1 && y%2==0 && canBeDeleted(cur.up())) deletePath(cur.up()); else if (cur.up()!=null && canBeDeleted(cur.right())) deletePath(cur.right()); } } } else { if(!hasBeenWalked(cur.left())){ if(x>1 && available[x-2][y]){ if(untravelled>1){ available[x-2][y] = false; deletePath(cur.up()); } } else if(cur.up()!=null){ if(canBeDeleted(cur.left())) deletePath(cur.left()); 
if(canBeDeleted(points[x][y+1].left())) deletePath(points[x][y+1].left()); } } if(!hasBeenWalked(cur.down())){ if(y>1 && available[x][y-2]){ if(untravelled>1){ available[x][y-2] = false; deletePath(cur.right()); } } else if(cur.right()!=null){ if(canBeDeleted(cur.down())) deletePath(cur.down()); if(canBeDeleted(points[x+1][y].down())) deletePath(points[x+1][y].down()); } } } } if(strategy == Strategy.DOUBLE_PATH || strategy == Strategy.EXTREME_DOUBLE_PATH || strategy == Strategy.PERFECT_DOUBLE_PATH){ if(x==width-2 && y==height-1 && points[width-1][height-1].down()!=null){ deletePath(points[width-1][height-1].left()); } if(x==width-1 && y==height-2 && points[width-1][height-1].left()!=null){ deletePath(points[width-1][height-1].down()); } } else if(strategy == Strategy.ISLAND){ for(EdgeList edgeList: edgeLists){ boolean deleted = false; for(Edge edge: edgeList.edges){ if(edge.start.x == x && edge.start.y == y){ if(!hasBeenWalked(edge)){ deletePath(edge); edgeList.edges.remove(edge); if(edgeList.edges.size() == 1){ edgeLists.remove(edgeList); } deleted = true; break; } } } if(deleted) break; } } if(animate)printPath(); } } public boolean hasBeenWalked(Edge edge){ if(edge == null) return false; return edge.walked; } public boolean canBeDeleted(Edge edge){ if(edge == null) return false; return !edge.walked; } List<Edge> result = new ArrayList<Edge>(); for(Edge edge: points[x][y].edges){ } return result; } public void printPath(){ StringBuilder builder = new StringBuilder(); for(int y=height-1; y>=0; y--){ for(int x=0; x<width; x++){ Point point = points[x][y]; if(this.x==x && this.y==y){ if(point.up()!=null) builder.append('?'); else builder.append('.'); } else { if(point.up()!=null) builder.append('|'); else builder.append(' '); } if(point.right()!=null) builder.append('_'); else builder.append(' '); } builder.append('\n'); } System.out.print(builder.toString()); } public boolean walk(){ ArrayList<Edge> possibleMoves = new ArrayList<Edge>(); Point cur = points[x][y]; for(Edge edge: cur.edges){ } int random = (int)(Math.random()*possibleMoves.size()); Edge move = possibleMoves.get(random); move.walked = true; if(move.start == cur){ x = move.end.x; y = move.end.y; } else { x = move.start.x; y = move.start.y; } stepCount++; if(x==dX && y == dY){ return true; } else { return false; } } public boolean isSolvable(){ TreeSet<Point> reachable = new TreeSet<Point>(); next.offer(points[x][y]); while(next.size()>0){ Point cur = next.poll(); ArrayList<Point> neighbors = new ArrayList<Point>(); for(Point neighbor: neighbors){ if(!reachable.contains(neighbor)){ if(neighbor == points[dX][dY]) return true; next.offer(neighbor); } } } return false; } public boolean deletePath(Edge toDelete){ if(toDelete == null) return true; // if(toDelete.walked){ // return false; // } int startIdx = toDelete.getStartIdx(); int endIdx = toDelete.getEndIdx(); toDelete.start.edges[startIdx] = null; toDelete.end.edges[endIdx] = null; // if(!isSolvable()){ // toDelete.start.edges[startIdx] = toDelete; // toDelete.end.edges[endIdx] = toDelete; // System.err.println("Invalid deletion!"); // return false; // } return true; } static class EdgeList{ List<Edge> edges; public EdgeList(Edge... 
edges){ this.edges = new ArrayList<Edge>(); } } static class Edge implements Comparable<Edge>{ Point start, end; boolean walked; public Edge(Point start, Point end){ walked = false; this.start = start; this.end = end; this.start.edges[getStartIdx()] = this; this.end.edges[getEndIdx()] = this; if(start.compareTo(end)>0){ Point tmp = end; end = start; start = tmp; } } public Edge(int x1, int y1, int x2, int y2){ this(new Point(x1,y1), new Point(x2,y2)); } public boolean exists(){ return start.edges[getStartIdx()] != null || end.edges[getEndIdx()] != null; } public int getStartIdx(){ if(start.x == end.x){ if(start.y < end.y) return 0; else return 2; } else { if(start.x < end.x) return 1; else return 3; } } public int getEndIdx(){ if(start.x == end.x){ if(start.y < end.y) return 2; else return 0; } else { if(start.x < end.x) return 3; else return 1; } } public boolean isVertical(){ return start.x==end.x; } @Override public int compareTo(Edge o) { int result = start.compareTo(o.start); if(result!=0) return result; return end.compareTo(o.end); } } static class Point implements Comparable<Point>{ int x,y; Edge[] edges; public Point(int x, int y){ this.x = x; this.y = y; edges = new Edge[4]; } public Edge up(){ return edges[0]; } public Edge right(){ return edges[1]; } public Edge down(){ return edges[2]; } public Edge left(){ return edges[3]; } public int compareTo(Point o){ int result = Integer.compare(x, o.x); if(result!=0) return result; result = Integer.compare(y, o.y); if(result!=0) return result; return 0; } } } • This is very impressive. How long does it take to run? If the winning entries remain this close we will have to increase the number of runs to see if we can separate them. – user9206 Sep 10 '14 at 9:05 • The timing is already included in the snippet. Around 400s for 1000 runs. That's including solvability check at each path deletion. I can remove that to have around 170s for 1000 runs. So I can do 20k runs in about an hour. – justhalf Sep 10 '14 at 9:10 • Actually optimizing further I might be able to run 100k in 3.5 hours. – justhalf Sep 10 '14 at 9:45 • My score is with 100k runs and took 10 minutes. @justhalf very nice on the more flexible double path maze. I know how to make an even better one, but I don't have the patience to implement it right now. – Sparr Sep 10 '14 at 14:55 • Happy to see the symmetric solution implemented. I've got yet another idea to improve this solution, and this time I think I might implement it myself :) – Sparr Sep 15 '14 at 21:55 # 227934 (20x20) My third attempt. Uses the same general approach as @stokastic with two paths to the exit. When the walker reaches the end of one path, it closes, requiring him to return to get to the end of the other path to exit. My improvement is to generate the paths as the walker progresses, so that whichever path he is progressing further along in the first half of the process will end up being longer than the other path. 
#include <stdio.h> #include <stdlib.h> #include <time.h> #include <math.h> #include <iostream> #define DEBUG 0 #define ROUNDS 10000 #define Y 20 #define X 20 #define H (Y*2+1) #define W (X*2+1) int maze[H][W]; int scores[ROUNDS]; int x, y; void print_maze(){ char line[W+2]; line[W+1]=0; for(int row=0;row<H;row++) { for(int col=0;col<W;col++) { switch(maze[row][col]) { case 0: line[col]=' '; break; case 1: line[col]=row%2?'-':'|'; break; case 8: line[col]=(row==y*2+1&&col==x*2+1)?'@':'?'; break; case 9: line[col]=(row==y*2+1&&col==x*2+1)?'@':'*'; break; } } line[W]='\n'; printf("%s",line); } printf("%d %d\n",y,x); } int main(){ srand (time(NULL)); long long total_turns = 0; for(int round=0;round<ROUNDS;round++) { for (int r=0;r<H;r++) { for (int c=0;c<W;c++) { maze[r][c]=0; } } maze[1][1]=9; maze[1][2]=1; maze[2][1]=1; maze[1][3]=8; maze[3][1]=8; int progress_l = 0; int progress_r = 0; int side = 0; int closed_exit = 0; x=0; y=0; if (DEBUG) print_maze(); long long turn = 0; int in = 0; while (x!=X-1||y!=Y-1) { turn++; int r = y*2+1; int c = x*2+1; int dx=0, dy=0; if (DEBUG) { std::cin>>in; switch(in) { case 0: dy=-1; dx=0; break; case 1: dy=0; dx=1; break; case 2: dy=1; dx=0; break; case 3: dy=0; dx=-1; break; default: dy=0; dx=0; break; } } else { int exits = maze[r-1][c] + maze[r][c+1] + maze[r+1][c] + maze[r][c-1]; int exit_choice = -1; do { if (rand()%exits == 0) { exit_choice = exits; break; } else { exits--; } }while(exits); --exits; if (maze[r-1][c]&&!dx&&!dy) { if (exits) { --exits; } else { dy = -1; dx = 0; } } if (maze[r][c+1]&&!dx&&!dy) { if (exits) { --exits; } else { dy = 0; dx = 1; } } if (maze[r+1][c]&&!dx&&!dy) { if (exits) { --exits; } else { dy = 1; dx = 0; } } if (maze[r][c-1]&&!dx&&!dy) { if (exits) { --exits; } else { dy = 0; dx = -1; } } } x+=dx; y+=dy; if(x==X-1 && y==Y-1) continue; if (x==0&&y==1) side=-1; if (x==1&&y==0) side=1; if (maze[y*2+1][x*2+1]==8) { // room needs another exit, maybe if (side==-1) { // left half of maze if (y==1) { // top of a column if (x%2) { // going up, turn right maze[y*2+1][x*2+2]=1; maze[y*2+1][x*2+3]=8; } else { // going right, turn down maze[y*2+2][x*2+1]=1; maze[y*2+3][x*2+1]=8; } } else if (y==Y-1) { // bottom of a column if (x%2 && x<(X-progress_r-3)) { // going right, turn up if there's room maze[y*2+0][x*2+1]=1; maze[y*2-1][x*2+1]=8; progress_l=x+1; } else { // going down or exiting, go right if (x!=X-2 or closed_exit==1) { maze[y*2+1][x*2+2]=1; maze[y*2+1][x*2+3]=8; } else { closed_exit = -1; } } } else { // in a column if (maze[y*2+0][x*2+1]) { // going down maze[y*2+2][x*2+1]=1; maze[y*2+3][x*2+1]=8; } else { // going up maze[y*2+0][x*2+1]=1; maze[y*2-1][x*2+1]=8; } } } else { // right half of maze if (y==0) { // top row if (x<X-1) { // go right maze[y*2+1][x*2+2]=1; maze[y*2+1][x*2+3]=8; } else { // go down maze[y*2+2][x*2+1]=1; maze[y*2+3][x*2+1]=8; } } else if (y==Y-2) { // heading right to the exit if (x<X-1) { // go right maze[y*2+1][x*2+2]=1; maze[y*2+1][x*2+3]=8; } else { // go down if (x!=X-1 or closed_exit==-1) { maze[y*2+2][x*2+1]=1; maze[y*2+3][x*2+1]=8; } else { closed_exit = 1; } } } else if (y==Y-3) { // bottom of a column if (x>progress_l+1) { // do we have room for another column? 
if (!(x%2)&&y>1) { // going left, turn up maze[y*2+0][x*2+1]=1; maze[y*2-1][x*2+1]=8; } else { // going down, turn left maze[y*2+1][x*2+0]=1; maze[y*2+1][x*2-1]=8; progress_r=X-x-1; } } else { // abort, move down to escape row maze[y*2+2][x*2+1]=1; maze[y*2+3][x*2+1]=8; } } else if (y==1) { // top of a column if (!(x%2)) { // going up, turn left maze[y*2+1][x*2+0]=1; maze[y*2+1][x*2-1]=8; } else { // going left, turn down maze[y*2+2][x*2+1]=1; maze[y*2+3][x*2+1]=8; } } else { // in a column if (maze[y*2+0][x*2+1]) { // going down maze[y*2+2][x*2+1]=1; maze[y*2+3][x*2+1]=8; } else { // going up maze[y*2+0][x*2+1]=1; maze[y*2-1][x*2+1]=8; } } } maze[y*2+1][x*2+1]=9; } if (DEBUG) print_maze(); } // print_maze(); printf("turns:%lld\n",turn); scores[round] = turn; total_turns += turn; } printf("%d rounds in a %d*%d maze\n",ROUNDS,X,Y); long long avg = total_turns/ROUNDS; printf("average: % 10lld\n",avg); long long var = 0; for(int r=0;r<ROUNDS;r++){ var += (scores[r]-avg)*(scores[r]-avg); } var/=ROUNDS; // printf("variance: %lld\n",var); int stddev=sqrt(var); printf("stddev: % 10d\n",stddev); } output (with time): ... turns:194750 turns:506468 turns:129684 turns:200712 turns:158664 turns:156550 turns:311440 turns:137900 turns:86948 turns:107134 turns:81806 turns:310274 100000 rounds in a 20*20 maze average: 227934 stddev: 138349 real 10m54.797s ... example of my maze, with roughly equal length halves to the path, showing the left/lower path cut off from the exit (bottom right): _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | _ _ _ _ _ _ _ _ _ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |_| |_| |_| |_| |_| | | | | | | | | | |_ _ _ _ _ _ _ _ _ _ |_| |_| |_| |_| |_ _ _ _ _ _ _ _ _ _ ! PS: I am aware of a very minor improvement to this algorithm that requires more clever code to generate a different shape for the two paths, staircases instead of consistent height zig zags. • Color me impressed. You have my vote sir! – stokastic Sep 10 '14 at 0:35 • That's pretty impressive. Remember when we just drew on the drunks' faces? – Dennis Sep 10 '14 at 0:40 • It's pretty hard to discern your graph, perhaps you can change your graph printing to something similar to mine? – justhalf Sep 10 '14 at 2:39 • @justhalf your wish is my command – Sparr Sep 10 '14 at 3:42 • @justhalf I've got it drawn out on paper. Just need to write the logic. If I don't get it done in a couple more days, I'll give you the sketch? :) – Sparr Sep 19 '14 at 17:11 ## 199094.3 for 20x20 I have implemented a solution that creates two paths to the finish, and closes exactly one of them once the drunkard reaches it. This simulates a path length which at the very least will be 1.5x the length of a single path from start to end. After 27 runs I hit an average of about 135 million. Unfortunately it takes several minutes per walk, so I will have to run it for the next few hours. 
One caveat - my double path generator only works if the size of the graph is in the form 4*n + 2, meaning the closest I can get to 100 is 102 or 98. I am going to post results using 98, which I expect to still surpass the zigzag path method. I will work on a better pathing system later. Currently outputs results in the form of (numSteps, currentAverage) after each walk. EDIT: fixed so code now works on graph sizes that are any multiple of 2, rather than 4*n + 2. Code: (add 'True' argument to walker constructor on line 187 for turtle drawing of the graph). import random import turtle WIDTH = 20 HEIGHT = 20 L, U, R, D = 1, 2, 4, 8 def delEdge(grid, x1, y1, x2, y2): # check that coordinates are in-bounds if not (0 <= x1 < WIDTH): return False if not (0 <= y1 < HEIGHT): return False if not (0 <= x2 < WIDTH): return False if not (0 <= y2 < HEIGHT): return False # swap order such that x1 <= x2 and y1 <= y2 if x2 < x1: x2 ^= x1 x1 ^= x2 x2 ^= x1 if x2 < x1: print "Swap failure: {}, {}".format(x1, x2) if y2 < y1: y2 ^= y1 y1 ^= y2 y2 ^= y1 if y2 < y1: print "Swap failure: {}, {}".format(y1, y2) # check that only one of the deltas is = 1 dx = x2 - x1 dy = y2 - y1 if dx and dy: return False if not (dx or dy): return False if dx > 1: return False if dy > 1: return False #print "<{}, {}>, <{}, {}>".format(x1, y1, x2, y2) if dx > 0: try: grid[x1][y1].remove(R) except: pass try: grid[x2][y2].remove(L) except: pass if dy > 0: try: grid[x1][y1].remove(D) except: pass try: grid[x2][y2].remove(U) except: pass return True def newGrid(): grid = [[[] for y in xrange(HEIGHT)] for x in xrange(WIDTH)] for x in xrange(WIDTH): for y in xrange(HEIGHT): if x > 0: grid[x][y].append(L) if x < WIDTH-1: grid[x][y].append(R) if y > 0: grid[x][y].append(U) if y < HEIGHT-1: grid[x][y].append(D) return grid class walker: def __init__(self, grid, mode, draw=False): self.x = 0 self.y = 0 self.dx = WIDTH-1 self.dy = HEIGHT-1 self.grid = grid self.mode = mode self.draw = draw self.numSteps = 0 self.initGrid() def initGrid(self): if self.mode == 0: #pass if self.draw: drawGrid(grid) elif self.mode == 1: for y in xrange(HEIGHT-1): if y % 2 == 0: for x in xrange(WIDTH - 1): delEdge(grid, x, y, x, y+1) else: for x in xrange(1, WIDTH): delEdge(grid, x, y, x, y+1) if self.draw: drawGrid(grid) elif self.mode == 2: for y in xrange(HEIGHT/2): if y % 2 == 0: for x in xrange(1, WIDTH-1): delEdge(grid, x, y, x, y+1) else: for x in xrange(2, WIDTH): delEdge(grid, x, y, x, y+1) for y in xrange(HEIGHT/2, HEIGHT-1): if y%2 == 0: for x in xrange(1, WIDTH-1): delEdge(grid, x, y, x, y+1) else: for x in xrange(0, WIDTH-2): delEdge(grid, x, y, x, y+1) for y in xrange(1, HEIGHT-1): midpoint = HEIGHT/2 if HEIGHT % 4 == 0: midpoint = HEIGHT/2 + 1 if y < midpoint: delEdge(grid, 0, y, 1, y) else: delEdge(grid, WIDTH-1, y, WIDTH-2, y) if self.draw: drawGrid(grid) def walk(self): self.numSteps += 1 choices = grid[self.x][self.y] direction = random.choice(choices) #print (self.x, self.y), grid[self.x][self.y], direction if direction == L: self.x -= 1 elif direction == U: self.y -= 1 elif direction == R: self.x += 1 elif direction == D: self.y += 1 def main(self): hasBlocked = False while (self.x, self.y) != (self.dx, self.dy): #print (self.x, self.y), (self.dx, self.dy) self.walk() if self.mode == 2: if not hasBlocked: if (self.x, self.y) == (WIDTH-2, HEIGHT-1): delEdge(self.grid, WIDTH-2, HEIGHT-1, WIDTH-1, HEIGHT-1) hasBlocked = True elif (self.x, self.y) == (WIDTH-1, HEIGHT-2): delEdge(self.grid, WIDTH-1, HEIGHT-1, WIDTH-1, HEIGHT-2) hasBlocked = True 
return self.numSteps def drawGrid(grid): size = 3 turtle.speed(0) turtle.delay(0) turtle.ht() for x in xrange(WIDTH): for y in xrange(HEIGHT): dirs = grid[x][y] for dir in dirs: if dir == L: turtle.pu() turtle.setpos((x*4, y*4)) turtle.pd() turtle.setpos(((x-1)*4, y*4)) elif dir == R: turtle.pu() turtle.setpos((x*4, y*4)) turtle.pd() turtle.setpos(((x+1)*4, y*4)) elif dir == U: turtle.pu() turtle.setpos((x*4, y*4)) turtle.pd() turtle.setpos((x*4, (y-1)*4)) elif dir == D: turtle.pu() turtle.setpos((x*4, y*4)) turtle.pd() turtle.setpos((x*4, (y+1)*4)) turtle.mainloop() numTrials = 100 totalSteps = 0.0 i = 0 try: while i < numTrials: grid = newGrid() w = walker(grid, 2) steps = w.main() totalSteps += steps print steps, totalSteps/(i+1) i += 1 print totalSteps / numTrials except KeyboardInterrupt: print totalSteps / i Raw data: (current numSteps, running average) 358796490 358796490.0 49310430 204053460.0 106969130 171692016.667 71781702 146714438.0 49349086 127241367.6 40874636 112846912.333 487607888 166384194.571 56423642 152639125.5 71077302 143576700.667 101885368 139407567.4 74423642 133499937.818 265170542 144472488.167 59524778 137938048.923 86919630 134293876.143 122462528 133505119.6 69262650 129489965.25 85525556 126903823.529 161165512 128807250.667 263965384 135920836.632 128907594 135570174.5 89535930 133378067.619 97344576 131740181.636 98772132 130306788.174 140769524 130742735.5 198274280 133443997.28 95417374 131981434.846 226667006 135488307.852 • I reduced the graph size to 20 by 20 to make run times quicker. I hope it helps. – user9206 Sep 9 '14 at 18:12 • You are currently winning :) – user9206 Sep 9 '14 at 19:08 • Is your 20 by 20 score over 1000 runs? – user9206 Sep 9 '14 at 19:20 • @Lembik yes it is. – stokastic Sep 9 '14 at 19:28 • @Dennis au contraire :) – Sparr Sep 9 '14 at 23:30 ## 4-path approach, 213k The one-path approach is and scores an average of N^2. The two-path approach is but then the first time the drunkard gets within reach of the end point, it's cut: It scores an average of (N/2)^2 + N^2. The four-path approach uses two cuts: Assume that the outer loop is of length xN and the inner loop of length (1-x)N. For simplicity, I'll normalise to N=1. From start to the first cut scores an average of (x/2)^2. From first cut to second cut has two options, of lengths x and 1-x; this gives an average of (1-x)x^2 + x(1-x)^2 = x-x^2. Finally the remaining path gives 1. So the total score is N^2 (1 + x - 3/4 x^2). I initially assumed that keeping the available paths of equal length at each step would be optimal, so my initial approach used x = 1/2 giving a score of 1.3125 N^2. But after doing the above analysis it turns out that the optimal split is given when x = 2/3 with score 1.3333 N^2. 
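To spell out where that optimum comes from (a quick check, not part of the answer): the normalised score 1 + x - (3/4)x^2 has derivative 1 - (3/2)x, which vanishes at x = 2/3, giving 1 + 2/3 - 1/3 = 4/3 ≈ 1.3333. A small numerical confirmation:

```python
# Brute-force check that x = 2/3 maximises the normalised four-path score
# f(x) = 1 + x - (3/4) x^2 derived above.
f = lambda x: 1 + x - 0.75 * x * x
best = max((i / 10**6 for i in range(10**6 + 1)), key=f)
print(best, f(best))   # ~0.666667 and ~1.333333, i.e. x = 2/3 with score 4/3
```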
1000 walks with average 210505.738 in 202753ms 1000 walks with average 212704.626 in 205191ms with code import java.awt.Point; import java.util.*; // http://codegolf.stackexchange.com/q/37484/194 public class RandomWalker { private static final int SIZE = 19; private static final Point dest = new Point(SIZE, SIZE); private final Random rnd = new Random(); private Point p = new Point(0, 0); private int step = 0; private Set<Set<Point>> edges; private Map<Set<Point>, String> cuttableEdgeNames; private Set<String> cutSequences; private String cutSequence = ""; public static void main(String[] args) { long start = System.nanoTime(); long total = 0; int walks = 0; while (walks < 1000 && total < 1L << 40) { RandomWalker rw = new RandomWalker(); total += rw.walk(); walks++; } long timeTaken = System.nanoTime() - start; System.out.println(walks + " walks with average " + total / (double)walks + " in " + (timeTaken / 1000000) + "ms"); } RandomWalker() { "+-+ +-+ +-+ +-+ +-+ +-+ +-+-+-+-+-+-+-+", "| | | | | | | | | | | | | |", "+ + + + + + + + + + + + + +-+ +-+ +-+ +", "| | | | | | | | | | | | | | | | | | | |", "+ + + + + + + + + + + + + + + + + + + +", "| | | | | | | | | | | | | | | | | | | |", "+ + + + + + + + + + + +-+ + + + + + + +", "| | | | | | | | | | | | | | | | | |", "+ + + + + + + + + + + +-+-+ + + + + + +", "| | | | | | | | | | | | | | | | | |", "+ + + + + + + + + + + + +-+ + + + + + +", "| | | | | | | | | | | | | | | | | | | |", "+ + + + + + + + + + + + + + + + + + + +", "| | | | | | | | | | | | | | | | | | | |", "+ + + + + + + + + + + + + + + + + + + +", "| | | | | | | | | | | | | | | | | | | |", "+ + + + + + + + + + + + + + + + + + + +", "| | | | | | | | | | | | | | | | | | | |", "+ +-+ +-+ +-+ +-+ +-+ + + + + + + + + +", "| | | | | | | | | |", "+ +-+ +-+ +-+ +-+ +-+ + + + + + + + + +", "| | | | | | | | | | | | | | | | | | | |", "+ + + + + + + + + + + + + + + + + + + +", "| | | | | | | | | | | | | | | | | | | |", "+ + + + + + + + + + + + + + + + + + + +", "| | | | | | | | | | | | | | | | | | | |", "+ + + + + + + + + + + + + + + + + + + +", "| | | | | | | | | | | | | | | | | | | |", "+ + + + + + + + + + + + + + + + + + + +", "| | | | | | | | | | | | | | | | | | | |", "+ + + + + + + + + + + +-+ + + + + + + +", "| | | | | | | | | | | | | | | | | |", "+ + + + + + + + + + + +-+ + + + + + + +", "| | | | | | | | | | | | | | | | | | | d", "+ + + + + + + + + + + + + + +-+ +-+ +c+", "| | | | | | | | | | | | | | |", "+ + + + + + + + + + + + + +-+-+-+-+-+ +", "| | | | | | | | | | | | | f b", "+-+ +-+ +-+ +-+ +-+ +-+ +-+-+-+-+-+e+a+" ); cutSequences = new HashSet<String>(); } edges = new HashSet<Set<Point>>(); cuttableEdgeNames = new HashMap<Set<Point>, String>(); // Horizontal edges for (int y = 0; y <= SIZE; y++) { for (int x0 = 0; x0 < SIZE; x0++) { char ch = row[y * 2].charAt(x0 * 2 + 1); if (ch == ' ') continue; Set<Point> edge = new HashSet<Point>(); if (ch != '-') cuttableEdgeNames.put(edge, "" + ch); } } // Vertical edges for (int y0 = 0; y0 < SIZE; y0++) { for (int x = 0; x <= SIZE; x++) { char ch = row[y0 * 2 + 1].charAt(x * 2); if (ch == ' ') continue; Set<Point> edge = new HashSet<Point>(); if (ch != '|') cuttableEdgeNames.put(edge, "" + ch); } } } int walk() { while (!p.equals(dest)) { List<Point> neighbours = neighbours(p); int idx = rnd.nextInt(neighbours.size()); p = neighbours.get(idx); step++; } return step; } List<Point> neighbours(Point p) { List<Point> rv = new ArrayList<Point>(); if (p.x > 0) handlePossibleNeighbour(rv, p, new Point(p.x - 1, p.y)); if (p.x < SIZE) 
handlePossibleNeighbour(rv, p, new Point(p.x + 1, p.y)); if (p.y > 0) handlePossibleNeighbour(rv, p, new Point(p.x, p.y - 1)); if (p.y < SIZE) handlePossibleNeighbour(rv, p, new Point(p.x, p.y + 1)); return rv; } private void handlePossibleNeighbour(List<Point> neighbours, Point p1, Point p2) { } private boolean edgeExists(Point p1, Point p2) { Set<Point> edge = new HashSet<Point>(); // Is it cuttable? String id = cuttableEdgeNames.get(edge); if (id != null) { String prefix = cutSequence + id; for (String seq : cutSequences) { if (seq.startsWith(prefix)) { // Cut it cutSequence = prefix; edges.remove(edge); return false; } } } return edges.contains(edge); } } • Ah, I see, that's why my island approach doesn't work, I didn't make the path length balanced. Just to clarify my understanding, the length from f to c in your code is about N/2, be it through e (and d) or not, right? – justhalf Sep 10 '14 at 17:14 • how is the y-E path length N instead of length N/2? – Sparr Sep 10 '14 at 19:52 • @justhalf, yes. There are 400 vertices, so there are 401 edges (after one cut the graph is a Hamiltonian cycle); the two outer paths are 100 edges each, and the inner loop is therefore 101 edges. – Peter Taylor Sep 10 '14 at 20:51 • got it. two observations: a) larger mazes would benefit from greater 2^n paths. b) if you make your path length dynamic, you'll beat the current leaders with dynamic two-path solutions (myself and @justhalf) – Sparr Sep 10 '14 at 20:59 • @Sparr: it's N^2, not 2^N. And yes, making this dynamic will make it the best, the challenge is how to make it dynamic while keeping the four-path property. @PeterTaylor: Nice images! – justhalf Sep 11 '14 at 2:17 I experimented with slicing the grid almost entirely across every k rows. This effectively converts it into something similar to a random walk on a k by N * N/k grid. The most effective option is to slice every row so that we force the drunkard to zigzag. For the 20x20 case (SIZE=19) I have time java RandomWalker 1000 walks with average 148577.604 real 0m14.076s user 0m13.713s sys 0m0.360s with code import java.awt.Point; import java.util.*; // http://codegolf.stackexchange.com/q/37484/194 // This handles a simpler problem where the grid is mutilated before the drunkard starts to walk. 
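// Maze shape: all horizontal edges are kept, but between consecutive rows only a
// single vertical edge survives, alternating between the x == SIZE and x == 0
// columns (see edgeExists below), so the drunkard is forced onto one serpentine
// corridor and his 2-D walk reduces to a 1-D random walk along that corridor.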
public class RandomWalker { private static final int SIZE = 19; private final Random rnd = new Random(); public static void main(String[] args) { RandomWalker rw = new RandomWalker(); long total = 0; int walks = 0; while (walks < 1000 && total < 1L << 40) { total += rw.walk(); walks++; } System.out.println(walks + " walks with average " + total / (double)walks); } int walk() { Point dest = new Point(SIZE, SIZE); Point p = new Point(0, 0); int step = 0; while (!p.equals(dest)) { List<Point> neighbours = neighbours(p); int idx = rnd.nextInt(neighbours.size()); p = neighbours.get(idx); step++; } return step; } List<Point> neighbours(Point p) { List<Point> rv = new ArrayList<Point>(); if (p.x > 0) handlePossibleNeighbour(rv, p, new Point(p.x - 1, p.y)); if (p.x < SIZE) handlePossibleNeighbour(rv, p, new Point(p.x + 1, p.y)); if (p.y > 0) handlePossibleNeighbour(rv, p, new Point(p.x, p.y - 1)); if (p.y < SIZE) handlePossibleNeighbour(rv, p, new Point(p.x, p.y + 1)); return rv; } private void handlePossibleNeighbour(List<Point> neighbours, Point p1, Point p2) { } private boolean edgeExists(Point p1, Point p2) { return p1.x != p2.x || p1.x == SIZE * (Math.max(p1.y, p2.y) & 1); } } • Am I right in thinking all the edge deletion happens before the walk starts in your solution? – user9206 Sep 9 '14 at 10:47 • @Lembik, yes. I thought the comment at the top would make that clear. – Peter Taylor Sep 9 '14 at 10:57 • It does, thank you. I wonder how much difference you can make by deleting edges during the walk. – user9206 Sep 9 '14 at 10:58 • Out of curiosity, how long does this take to run (altogether and per run)? – stokastic Sep 9 '14 at 16:53 • @stokastic, about 3 seconds per run. – Peter Taylor Sep 9 '14 at 17:28 # For those who don't want to reinvent the wheel Don't worry! I'll reinvent it for you :) This is in Java, by the way. I created a Walker class that deals with walking randomly. It also includes a helpful method for determining if a move is valid (if it has already been walked upon). I am assuming all of you smart people can figure out to put random numbers in for the constructor, I left it up to you so you could test certain cases. Also, just call walk() function to (you guessed it!) make the drunkard walk (randomly). I will implement canComeHome() function some other time. Preferably after I look up the best way to do that. import java.util.ArrayList; import java.util.Queue; import java.util.TreeSet; public class Walker { int width,height; int x,y; //walker's position (does anyone else keep thinking about zombies?!?) 
int dX,dY; //destination TreeSet<Edge> pathsNoLongerAvailable = new TreeSet<Edge>(); TreeSet<Edge> previouslyTraveled = new TreeSet<Edge>(); int stepCount = 0; public static void main(String[]args){ int side = 10; Walker walker = null; int total = 0; double count = 1000; for(int i=0; i<count; i++){ walker = new Walker(0,0,side,side,side-1,side-1); total += walker.stepCount; System.out.println("Iteration "+i+": "+walker.stepCount); } System.out.printf("Average: %.3f\n", total/count); walker.printPath(); } public Walker(int startingX,int startingY, int Width, int Height, int destinationX, int destinationY){ width = Width; height = Height; dX = destinationX; dY = destinationY; x=startingX; y=startingY; while(!walk()){ // Do something } } public void printPath(){ for(int i=0; i<width-1; i++){ if(!pathsNoLongerAvailable.contains(new Edge(i,height-1,i+1,height-1))){ System.out.print(" _"); } else { System.out.print(" "); } } System.out.println(); for(int i=height-2; i>=0; i--){ for(int j=0; j<2*width-1; j++){ if(j%2==0){ if(!pathsNoLongerAvailable.contains(new Edge(j/2,i,j/2,i+1))){ System.out.print("|"); } else { System.out.print(" "); } } else { if(!pathsNoLongerAvailable.contains(new Edge(j/2,i,j/2+1,i))){ System.out.print("_"); } else { System.out.print(" "); } } } System.out.println(); } } public boolean walk(){ ArrayList<int[]> possibleMoves = new ArrayList<int[]>(); if(x!=0 && !pathsNoLongerAvailable.contains(new Edge(x,y,x-1,y))){ } if(x!=width-1 && !pathsNoLongerAvailable.contains(new Edge(x,y,x+1,y))){ } if(y!=0 && !pathsNoLongerAvailable.contains(new Edge(x,y,x,y-1))){ } if(y!=height-1 && !pathsNoLongerAvailable.contains(new Edge(x,y,x,y+1))){ } int random = (int)(Math.random()*possibleMoves.size()); int[] move = possibleMoves.get(random); x+=move[0]; y+=move[1]; stepCount++; if(x==dX && y == dY){ return true; } else { return false; } } public boolean isSolvable(){ TreeSet<Point> reachable = new TreeSet<Point>(); next.offer(new Point(x,y)); while(next.size()>0){ Point cur = next.poll(); int x = cur.x; int y = cur.y; ArrayList<Point> neighbors = new ArrayList<Point>(); if(x!=0 && !pathsNoLongerAvailable.contains(new Edge(x,y,x-1,y))){ } if(x!=width-1 && !pathsNoLongerAvailable.contains(new Edge(x,y,x+1,y))){ } if(y!=0 && !pathsNoLongerAvailable.contains(new Edge(x,y,x,y-1))){ } if(y!=height-1 && !pathsNoLongerAvailable.contains(new Edge(x,y,x,y+1))){ } for(Point neighbor: neighbors){ if(!reachable.contains(neighbor)){ if(neighbor.compareTo(new Point(dX, dY))==0){ return true; } next.offer(neighbor); } } } return false; } public boolean hasBeenWalked(int x1, int y1, int x2, int y2){ return previouslyTraveled.contains(new Edge(x1, y1, x2, y2)); } public boolean hasBeenWalked(Edge edge){ return previouslyTraveled.contains(edge); } public void deletePath(int startX, int startY, int endX, int endY){ return; } if(!isSolvable()){ System.out.println("Invalid deletion!"); } } static class Edge implements Comparable<Edge>{ Point start, end; public Edge(int x1, int y1, int x2, int y2){ start = new Point(x1, y1); end = new Point(x2, y2); if(start.compareTo(end)>0){ Point tmp = end; end = start; start = tmp; } } @Override public int compareTo(Edge o) { int result = start.compareTo(o.start); if(result!=0) return result; return end.compareTo(o.end); } } static class Point implements Comparable<Point>{ int x,y; public Point(int x, int y){ this.x = x; this.y = y; } public int compareTo(Point o){ int result = Integer.compare(x, o.x); if(result!=0) return result; result = Integer.compare(y, o.y); 
if(result!=0) return result; return 0; } } } • This contains some bugs and inconsistencies. previouslyTraveled.add(new int[]{x,y,move[0],move[1]}) should be x+move[0] and y+move[1]. The Width-1 and Height-1, and inefficiency in checking the deleted paths. I've edited your code (with additional function to print the maze). Feel free to rollback if you think that's inappropriate. – justhalf Sep 9 '14 at 7:34 • Your Edge doesn't correctly implement Comparable<Edge>. If you want edges to compare as equals even if you reverse them, you need to take into account the reversal as well in the non-equal case. The easiest way of doing this would be to change the constructor to keep the points ordered. – Peter Taylor Sep 9 '14 at 8:16 • @PeterTaylor: Thanks for the heads up. I thought about the non-equal case a bit, but couldn't get myself to understand why it matters. Do you know where I can look for implementation requirement for Comparable? – justhalf Sep 9 '14 at 8:22 • docs.oracle.com/javase/7/docs/api/java/lang/Comparable.html The key is that it needs to define a total ordering. But if A and B are the same edge reversed and C is different, you can get A.compareTo(B) == B.compareTo(A) == 0 but A.compareTo(C) < 0 and B.compareTo(C) > 0. – Peter Taylor Sep 9 '14 at 8:24 • How about now? I added another class. And I added function to check if it's solvable (or canComeHome()) – justhalf Sep 9 '14 at 8:30 ### 64,281 Update since the grid was changed from 100x100 to 20x20 (1000 tests). Score on 100x100 (100 tests) was roughly 36M. While this isn't going to beat a 1D walk, I wanted to play with an idea I had. The basic idea is that the grid is split into square rooms, with only one path leading 'homeward' from each. The open path is whichever the drunk gets close to last, meaning he has to explore every possible exit, only to have all but one of them slammed in his face. After playing with room sizes, I came to the same conclusion as Peter, slicing it up smaller is better. The best scores come with a room size of 2. Average score over 100 trials: 36051265 The code is sloppy, don't mind the mess. You can flip on the SHOW switch and it will show an image of the paths every SHOW_INT steps so you can watch it in action. A completed run looks something like: (This is the image from the previous 100x100 grid. 20x20 is just like this, but, well, smaller. Code below has been updated for new size/runs.) 
import java.awt.Color; import java.awt.Graphics; import java.awt.Point; import java.awt.image.*; import java.util.*; import javax.swing.*; public class DrunkWalk { boolean SHOW = false; int SHOW_INT = 10; int SIZE = 20; Random rand = new Random(); Point pos; int[][] edges; int[][] wally; int[] wallx; int roomSize = 2; JFrame frame; final BufferedImage img; public static void main(String[] args){ long total=0,runs=1000; for(int i=0;i<runs;i++){ int steps = new DrunkWalk().run(); total += steps; System.out.println("("+i+") "+steps); } System.out.println("\n Average " + (total/runs) + " over " + runs + " trials."); } DrunkWalk(){ edges = new int[SIZE][SIZE]; for(int x=0;x<SIZE;x++){ for(int y=0;y<SIZE;y++){ if(x>0) edges[x][y] |= WEST; if(x+1<SIZE) edges[x][y] |= EAST; if(y>0) edges[x][y] |= NORTH; if(y+1<SIZE) edges[x][y] |= SOUTH; } } wallx = new int[SIZE/roomSize+1]; wally = new int[SIZE/roomSize+1][SIZE/roomSize+1]; pos = new Point(SIZE-1,SIZE-1); img = new BufferedImage(SIZE*6+1,SIZE*6+1, BufferedImage.TYPE_INT_RGB); frame = new JFrame(){ public void paint(Graphics g) { g.drawImage(img, 50, 50, null); } }; frame.setSize(700,700); if(SHOW) frame.show(); } void draw(){ try { } catch (InterruptedException e) { e.printStackTrace(); } Graphics g = img.getGraphics(); g.setColor(Color.WHITE); g.clearRect(0, 0, img.getWidth(), img.getHeight()); for(int x=0;x<SIZE;x++){ for(int y=0;y<SIZE;y++){ if((edges[x][y]&EAST)==EAST) g.drawLine(x*6, y*6, x*6+5, y*6); if((edges[x][y]&SOUTH)==SOUTH) g.drawLine(x*6, y*6, x*6, y*6+5); } } g.setColor(Color.RED); g.drawOval(pos.x*6-2, pos.y*6-2, 5, 5); g.drawOval(pos.x*6-1, pos.y*6-1, 3, 3); frame.repaint(); } int run(){ int steps = 0; Point home = new Point(0,0); while(!pos.equals(home)){ if(SHOW&&steps%SHOW_INT==0){ System.out.println(steps); draw(); } step(); steps++; } if(SHOW) draw(); return steps; } int rx = pos.x / roomSize; int ry = pos.y / roomSize; int maxWalls = roomSize - 1; if(wally[rx][ry] < maxWalls){ if(pos.y%roomSize==0) if(delete(pos.x,pos.y,NORTH)) wally[rx][ry]++; } maxWalls = SIZE-1; if(pos.x%roomSize==0){ if(wallx[rx] < maxWalls) if(delete(pos.x, pos.y,WEST)) wallx[rx]++; } } void step(){ List<Integer> choices = getNeighbors(pos); Collections.shuffle(choices); int dir = choices.get(0); pos.x += dir==WEST?-1:dir==EAST?1:0; pos.y += dir==NORTH?-1:dir==SOUTH?1:0; } boolean delete(int x, int y, int dir){ if((edges[x][y] & dir) != dir) return false; edges[x][y] -= dir; if(dir == NORTH) if(y>0) edges[x][y-1] -= SOUTH; if(dir == SOUTH) if(y+1<SIZE) edges[x][y+1] -= NORTH; if(dir == EAST) if(x+1<SIZE) edges[x+1][y] -= WEST; if(dir == WEST) if(x>0) edges[x-1][y] -= EAST; return true; } List<Integer> getNeighbors(Point p){ if(p.x==SIZE || p.y==SIZE){ System.out.println("wtf"); System.exit(0); } List<Integer> choices = new ArrayList<Integer>(); if((edges[p.x][p.y] & NORTH) == NORTH) if((edges[p.x][p.y] & SOUTH) == SOUTH) if((edges[p.x][p.y] & EAST) == EAST) if((edges[p.x][p.y] & WEST) == WEST) return choices; } final static int NORTH=1,EAST=2,SOUTH=4,WEST=8; } • I just noticed that he should be going from bot/left->top/right, while mine goes bot/right->top/left. I can change it if it really matters, but... – Geobits Sep 9 '14 at 16:00 • This is very nice and is the first dynamic solution I think. I am interested that your path isn't quite as long as the static ones yet. – user9206 Sep 9 '14 at 16:51 • If by "not quite as long" you mean ~1/3 as long as one and ~36x as long the other? 
:P – Geobits Sep 9 '14 at 16:53 # 188k, with 2 paths The best entries all seem to take the approach of generating 2 paths, and then cutting one off when the drunk nears the end of the path. I don't think I can beat justhalf's entry, but I couldn't help but wonder: Why 2 paths? Why not 3, or 5, or 20? TL;DR: 2 paths seems to be optimal So I did an experiment. Based on Stretch Maniac's framework, I wrote an entry to test various numbers of paths. You can tweak the featureSize parameter to vary the number of paths. A featureSize of 20 give 1 path, 10 gives 2 paths, 7 gives 3, 5 gives 4, and so on. import java.util.ArrayList; import java.util.BitSet; import java.util.HashSet; import java.util.Objects; import java.util.Queue; import java.util.Set; public class Walker { final int width,height; int x,y; //walker's position (does anyone else keep thinking about zombies?!?) final int dX,dY; //destination final int featureSize; Set<Edge> pathsNoLongerAvailable = new HashSet<>(); Set<Edge> previouslyTraveled = new HashSet<>(); int stepCount = 0; private final BitSet remainingExits; public static void main(String[]args){ int side = 20; Walker walker = null; int total = 0; int featureSize = 10; double count = 1000; for(int i=0; i<count; i++){ walker = new Walker(0,0,side,side,side-1,side-1, featureSize); total += walker.stepCount; System.out.println("Iteration "+i+": "+walker.stepCount); } System.out.printf("Average: %.3f\n", total/count); walker.printPath(); } public Walker(int startingX,int startingY, int Width, int Height, int destinationX, int destinationY, int featureSize){ width = Width; height = Height; dX = destinationX; dY = destinationY; x=startingX; y=startingY; this.featureSize = featureSize; deleteBars(); remainingExits = new BitSet(); for (int yy = 0; yy < height; yy++) { if (!pathsNoLongerAvailable.contains(new Edge(width - 2, yy, width - 1, yy))) { remainingExits.set(yy); } } while(!walk()){ if (x == width - 2 && remainingExits.get(y) && remainingExits.cardinality() > 1) { deletePath(x, y, x + 1, y); remainingExits.set(y, false); } } } private void deleteBars() { for (int xx = 0; xx < width - 1; xx++) { for (int yy = 0; yy < height / featureSize + 1; yy++) { if (xx != 0) deletePath(xx, featureSize * yy + featureSize - 1, xx, featureSize * yy + featureSize); boolean parity = xx % 2 == 0; if (yy == 0) parity ^= true; // First path should be inverted for (int i = 0; i < featureSize && featureSize * yy + i < height; i++) { if (i == 0 && !parity) continue; if ((i == featureSize - 1 || featureSize * yy + i == height - 1) && parity) continue; deletePath(xx, featureSize * yy + i, xx + 1, featureSize * yy + i); } } } } public void printPath(){ for(int i=0; i<width-1; i++){ if(!pathsNoLongerAvailable.contains(new Edge(i,height-1,i+1,height-1))){ System.out.print(" _"); } else { System.out.print(" "); } } System.out.println(); for(int i=height-2; i>=0; i--){ for(int j=0; j<2*width-1; j++){ if(j%2==0){ if(!pathsNoLongerAvailable.contains(new Edge(j/2,i,j/2,i+1))){ System.out.print("|"); } else { System.out.print(" "); } } else { if(!pathsNoLongerAvailable.contains(new Edge(j/2,i,j/2+1,i))){ System.out.print("_"); } else { System.out.print(" "); } } } System.out.println(); } } public boolean walk(){ ArrayList<int[]> possibleMoves = new ArrayList<int[]>(); if(x!=0 && !pathsNoLongerAvailable.contains(new Edge(x,y,x-1,y))){ } if(x!=width-1 && !pathsNoLongerAvailable.contains(new Edge(x,y,x+1,y))){ } if(y!=0 && !pathsNoLongerAvailable.contains(new Edge(x,y,x,y-1))){ } if(y!=height-1 && 
!pathsNoLongerAvailable.contains(new Edge(x,y,x,y+1))){ } int[] move = possibleMoves.get(random); x+=move[0]; y+=move[1]; stepCount++; if(x==dX && y == dY){ return true; } else { return false; } } public boolean isSolvable(){ Set<Point> reachable = new HashSet<>(); next.offer(new Point(x,y)); while(next.size()>0){ Point cur = next.poll(); int x = cur.x; int y = cur.y; ArrayList<Point> neighbors = new ArrayList<>(); if(x!=0 && !pathsNoLongerAvailable.contains(new Edge(x,y,x-1,y))){ } if(x!=width-1 && !pathsNoLongerAvailable.contains(new Edge(x,y,x+1,y))){ } if(y!=0 && !pathsNoLongerAvailable.contains(new Edge(x,y,x,y-1))){ } if(y!=height-1 && !pathsNoLongerAvailable.contains(new Edge(x,y,x,y+1))){ } for(Point neighbor: neighbors){ if(!reachable.contains(neighbor)){ if(neighbor.compareTo(new Point(dX, dY))==0){ return true; } next.offer(neighbor); } } } return false; } public boolean hasBeenWalked(int x1, int y1, int x2, int y2){ return previouslyTraveled.contains(new Edge(x1, y1, x2, y2)); } public boolean hasBeenWalked(Edge edge) { return previouslyTraveled.contains(edge); } public void deletePath(int startX, int startY, int endX, int endY){ return; } if(!isSolvable()){ System.out.println("Invalid deletion!"); } } public static class Edge implements Comparable<Edge>{ Point start, end; public Edge(int x1, int y1, int x2, int y2){ start = new Point(x1, y1); end = new Point(x2, y2); if(start.compareTo(end)>0){ Point tmp = end; end = start; start = tmp; } } @Override public int compareTo(Edge o) { int result = start.compareTo(o.start); if(result!=0) return result; return end.compareTo(o.end); } @Override public String toString() { return start.toString() + "-" + end.toString(); } @Override public int hashCode() { int hash = 7; hash = 83 * hash + Objects.hashCode(this.start); hash = 83 * hash + Objects.hashCode(this.end); return hash; } @Override public boolean equals(Object obj) { if (obj == null) { return false; } if (getClass() != obj.getClass()) { return false; } final Edge other = (Edge) obj; if (!Objects.equals(this.start, other.start)) { return false; } if (!Objects.equals(this.end, other.end)) { return false; } return true; } } static class Point implements Comparable<Point>{ int x,y; public Point(int x, int y){ this.x = x; this.y = y; } public int compareTo(Point o){ int result = Integer.compare(x, o.x); if(result!=0) return result; result = Integer.compare(y, o.y); if(result!=0) return result; return 0; } @Override public String toString() { return "(" + x + "," + y + ")"; } @Override public int hashCode() { int hash = 7; hash = 23 * hash + this.x; hash = 23 * hash + this.y; return hash; } @Override public boolean equals(Object obj) { if (obj == null) { return false; } if (getClass() != obj.getClass()) { return false; } final Point other = (Point) obj; if (this.x != other.x) { return false; } if (this.y != other.y) { return false; } return true; } } } There are a few optimisations that I could do but haven't, and it doesn't support any of the adaptive trickery that justhalf uses. 
Anyway, here's the results for various featureSize values:

20 (1 path): 156284
10 (2 paths): 188553
7 (3 paths): 162279
5 (4 paths): 152574
4 (5 paths): 134287
3 (7 paths): 118843
2 (10 paths): 94171
1 (20 paths): 64515

And here's a map with 3 paths: (ASCII diagram of the three-path maze; the preformatted layout was lost.)

• Thanks for this. It seems all the money is in the adaptive trickery now :) – user9206 Sep 10 '14 at 12:24
• Why do you cut the path at the bottom? You can cut the path between the lower path and the middle path for better score, I think. – justhalf Sep 10 '14 at 17:01
• @justhalf Yes, I expect it would. I decided not to, as it would have made the code more complicated, and it wouldn't have been a winning entry either way. – James_pic Sep 10 '14 at 18:59
• The three paths (assuming optimal 3-path) will on average be the same as single path: let N be the path length (which is n^2-1), single path will on average require N^2 moves, while the three paths (N/3)^2 + (2N/3)^2 + (2N/3)^2 = N^2 plus some relatively small value, so three paths has no significant gain over single path, let alone double path. (The calculation is based on the probability result which states that random movement on a 1-D path of length N requires on average N^2 moves from one end to the other.) – justhalf Sep 11 '14 at 2:13
• @justhalf Nice. I was struggling to come up with a good first-principles argument for why 2 was best, but this nails it. – James_pic Sep 11 '14 at 8:08

# 131k (20x20)

My first attempt was to remove all of the horizontal edges except the top and bottom row, then each time the walker reached the bottom of a column I would remove the edge ahead of him, until he had visited the bottom of every column and would finally be able to reach the exit. This resulted in an average of 1/8 as many steps as @PeterTaylor's 1d walk approach.

Next I decided to try something a bit more circuitous. I have split the maze into a series of nested hollow chevrons, and require him to traverse the perimeter of each chevron at least 1.5 times. This has an average time of about 131k steps.
#include <stdio.h> #include <stdlib.h> #include <time.h> #include <iostream> #include <math.h> #define DEBUG 0 #define ROUNDS 10000 #define Y 20 #define X 20 #define H (Y*2+1) #define W (X*2+1) int maze[H][W]; int scores[ROUNDS]; int x, y; void print_maze(){ char line[W+2]; line[W+1]=0; for(int row=0;row<H;row++) { for(int col=0;col<W;col++) { switch(maze[row][col]) { case 0: line[col]=' '; break; case 1: line[col]=row%2?'-':'|'; break; case 9: line[col]=(row==y*2+1&&col==x*2+1)?'@':' '; break; } } line[W]='\n'; printf("%s",line); } printf("%d %d\n",y,x); } int main(){ srand (time(NULL)); long long total_turns = 0; for(int round=0;round<ROUNDS;round++) { for (int r=0;r<H;r++) { for (int c=0;c<W;c++) { if (r==0 || r==H-1 || c==0 || c==W-1) maze[r][c]=0; // edges else if (r%2) { // rows with cells and E/W paths if (c%2) maze[r][c] = 9; // col with cells else if (r==1 || r==H-2) maze[r][c]=1; // E/W path on N/Smost row else if (c>r) maze[r][c]=1; // E/W path on chevron perimeter else maze[r][c]=0; // cut path between cols } else { // rows with N/S paths if (c%2==0) maze[r][c] = 0; // empty space else if (c==1 || c==W-2) maze[r][c]=1; // N/S path on E/Wmost row else if (r>c) maze[r][c]=1; // N/S path on chevron perimeter else maze[r][c]=0; } } } int progress = 0; int first_cut = 0; x=0; y=0; if(DEBUG) print_maze(); long long turn = 0; while (x!=X-1||y!=Y-1) { if(DEBUG) std::cin.ignore(); turn++; int r = y*2+1; int c = x*2+1; int exits = maze[r-1][c] + maze[r][c+1] + maze[r+1][c] + maze[r][c-1]; int exit_choice = -1; do { if (rand()%exits == 0) { exit_choice = exits; break; } else { exits--; } }while(exits); int dx=0, dy=0; --exits; if (maze[r-1][c]&&!dx&&!dy) { if (exits) { --exits; } else { dy = -1; dx = 0; } } if (maze[r][c+1]&&!dx&&!dy) { if (exits) { --exits; } else { dy = 0; dx = 1; } } if (maze[r+1][c]&&!dx&&!dy) { if (exits) { --exits; } else { dy = 1; dx = 0; } } if (maze[r][c-1]&&!dx&&!dy) { if (exits) { --exits; } else { dy = 0; dx = -1; } } x+=dx; y+=dy; if (first_cut==0) { if(x==X-1 && y==progress*2+1) { first_cut = 1; maze[y*2+2][x*2+1]=0; } if(y==Y-1 && x==progress*2+1) { first_cut = 2; maze[y*2+1][x*2+2]=0; } } else if (first_cut==1) { if (y==Y-1 && x==progress*2) { maze[y*2+1][x*2+2]=0; progress++; first_cut=0; } else if (y==Y-2 && x==progress*2+1) { maze[y*2+2][x*2+1]=0; progress++; first_cut=0; } } else if (first_cut==2) { if (x==X-1 && y==progress*2) { maze[y*2+2][x*2+1]=0; progress++; first_cut=0; } else if (x==X-2 && y==progress*2+1) { maze[y*2+1][x*2+2]=0; progress++; first_cut=0; } } if(DEBUG) print_maze(); } // printf("turns:%lld\n",turn); scores[round] = turn; total_turns += turn; } long long avg = total_turns/ROUNDS; printf("average: % 10lld\n",avg); long long var = 0; for(int r=0;r<ROUNDS;r++){ var += (scores[r]-avg)*(scores[r]-avg); } var/=ROUNDS; // printf("variance: %lld\n",var); int stddev=sqrt(var); printf("stddev: % 10d\n",stddev); } # Do Nothing Since the man moves randomly, one might think that removing any node will only increase his chances of getting home in the long term. First, lets have a look at the one-dimensional case, this can be achieved by removing nodes until you end up with a squiggly path, without deadends or cycles, that visits (almost) every gridpoint. On an N x N grid the maximal length of such a path is L = N*N - 2 + N%2 (98 for a 10x10 grid). Walking along the path can be described by a transition matrix as generated by T1d. 
(The slight asymmetry makes it hard to find an analytical solution, except for very small or infinite matrices, but we obtain a numerical solution faster than it would take to diagonalize the matrix anyway). The state vector has a single 1 at the starting position and after K steps (T1d**K) * state gives us the probability distribution of being at a certain distance from the start (that is equivalent to averaging over all 2**K possible walks along the path!) Running the simulation for 10*L**2 steps and saving the last element of the state vector after each step which gives us the probability of having made it to the goal after a certain total number of steps - the cumulative probability distribution cd(t). Differentiating it gives us the probability p of reaching the goal exactly at a certain time. To find the average time we integrate t*p(t) dt The average time to reach the goal is proportional to L**2 with a factor that goes very quickly to 1. The standard deviation is almost constant at around 79% of the average time. This graph shows the average time to reach the goal for different path lengths (corresponding to grid sizes of 5x5 to 15x15) Here is how the probability of reaching the goal looks like. The second curve looks filled out because at every odd timestep the position is odd and therefore cannot be at the goal. From that we can see that the balanced dual-path strategy works best here. For larger grids, where the overhead of making more paths is negligible compared to their size, we might be better off increasing the number of paths, similar to how Peter Taylor described it, but keeping the lengths balanced # What if we dont remove any nodes at all? Then we would have twice as many walkable nodes, plus four possible directions instead of two. It would seem that it makes it very unlikely to ever get anywhere. However, simulations show otherwise, after just 100 steps on a 10x10 grid the man is pretty likely to reach his goal, so trappin him in islands is a futile attempt, since you are trading a potential N**2 long winding path with an average completion time of N**4 for an island which is passed in N**2 steps from numpy import * import matplotlib.pyplot as plt def L(N): # maximal length of a path on an NxN grid return N*N - 2 + N%2 def T1d(N): # transition along 1d path m = ( diag(ones(N-1),1) + diag(ones(N-1),-1) )/2. m[1,0] = 1 m[-2,-1] = 0 m[-1,-1] = 1 return m def walk(stepmatrix, state, N): data = zeros(N) for i in xrange(N): data[i] = state[-1] state = dot(stepmatrix, state) return data def evaluate(data): rho = diff(data)/data[-1] t = arange(len(rho)) av = sum(rho*t) stdev = sum((t-av)**2 * rho)**.5 print 'average: %f\nstd: %f'%(av, stdev) return rho, av, stdev gridsize = 10 M = T1d(L(gridsize)) initpos = zeros(L(gridsize)) initpos[0] = 1 cd = walk(M, initpos, L(gridsize)**2*5) plt.subplot(2,1,1) plt.plot(cd) plt.title('p of reaching the goal after N steps') plt.subplot(2,1,2) plt.plot(evaluate(cd)[0]) plt.title('p of reaching the goal at step N') plt.show() ''' # uncomment to run the 2D simulation # /!\ WARNING /!\ generates a bunch of images, dont run on your desktop x = [k-n, k+n] if k%n != 0: x.append(k-1) if k%n != n-1: x.append(k+1) x = [i for i in x if 0<= i <n*n] return x N = 10 # gridsize MM = zeros((N*N, N*N)) # build transition matrix for i in range(N*N):
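The L**2 scaling found numerically above is the classical hitting-time result for a one-dimensional walk. As an aside (not part of the original answer), writing h(k) for the expected number of steps to reach L from position k on the path {0,...,L} with reflection at 0 and absorption at L:

\[
h(L)=0,\qquad h(0)=1+h(1),\qquad h(k)=1+\tfrac12\bigl(h(k-1)+h(k+1)\bigr)\quad(0<k<L)
\]
\[
\Longrightarrow\quad h(k)=L^{2}-k^{2},\qquad h(0)=L^{2},
\]

which matches the observed proportionality of the mean completion time to L**2, and is also the fact invoked in justhalf's comparison of one, two and three paths.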
http://thegrantlab.org/bio3d/reference/pdb2aln.ind.html
Find the best alignment between a PDB structure and an existing alignment. Then, given a set of column indices of the original alignment, returns atom selections of equivalent C-alpha atoms in the PDB structure.

pdb2aln.ind(aln, pdb, inds = NULL, ...)

aln: an alignment list object with id and ali components, similar to that generated by read.fasta, read.fasta.pdb, pdbaln, and seqaln.
pdb: the PDB object to be aligned to aln.
inds: a numeric vector containing a subset of column indices of aln. If NULL, non-gap positions of aln$ali are used.
...: additional arguments passed to pdb2aln.

## Details

Call pdb2aln to align the sequence of pdb to aln. Then, find the atomic indices of C-alpha atoms in pdb that are equivalent to inds, the subset of column indices of aln$ali. The function is a routine utility in a combined analysis of molecular dynamics (MD) simulation trajectories and crystallographic structures. For example, a typical post-analysis of an MD simulation is to compare the principal components (PCs) derived from simulation trajectories with those derived from crystallographic structures. The C-alpha atoms used to fit trajectories and do PCA must be the same as (or equivalent to) those used in the analysis of crystallographic structures, e.g. the 'non-gap' alignment positions. By calling pdb2aln.ind with the relevant alignment positions, one can easily get equivalent atom selections ('select' class objects) for the simulation topology (PDB) file and then do proper trajectory analysis.

## Value

Returns a list containing two "select" objects:
a: atom and xyz indices for the alignment.
b: atom and xyz indices for the PDB.
Note that if any element of inds has no corresponding CA atom in the PDB, the output a$atom and b$atom will be shorter than inds, i.e. only indices having equivalent CA atoms are returned.

## References

Grant, B.J. et al. (2006) Bioinformatics 22, 2695--2696.

## Author

Xin-Qiu Yao, Lars Skjaerven & Barry Grant

## See Also

seq2aln, seqaln.pair, pdb2aln

## Examples

if (FALSE) {
##--- Read aligned PDB coordinates (CA only)
##--- Read the topology file of MD simulations
##--- For illustration, here we read another pdb file (all atoms)

#--- Map the non-gap positions to PDB C-alpha atoms
#pc.inds <- gap.inspect(pdbs$ali)
#npc.inds <- pdb2aln.ind(aln=pdbs, pdb=pdb, inds=pc.inds$f.inds)
#npc.inds$a
#npc.inds$b

#--- Or, map the non-gap positions with a known close sequence in the alignment
#npc.inds <- pdb2aln.ind(aln=pdbs, pdb=pdb, aln.id="1bg2", inds=pc.inds$f.inds)

#--- Map core positions
core <- core.find(pdbs)
core.inds <- pdb2aln.ind(aln=pdbs, pdb=pdb, inds = core$c1A.atom)
core.inds$a
core.inds$b

##--- Fit simulation trajectories to one of the X-ray structures based on
##--- core positions
#xyz <- fit.xyz(pdbs$xyz[1,], pdb$xyz, core.inds$a$xyz, core.inds$b$xyz)

##--- Do PCA of trajectories based on non-gap positions
#pc.traj <- pca(xyz[, npc.inds$b$xyz])
}
http://physics.stackexchange.com/questions/92969/why-does-the-speed-of-light-have-no-uncertainty
# Why does the speed of light have no uncertainty? I could understand that the definition of a second wouldn't have an uncertainty when related to the transition of the Cs atom, so it doesn't have an error because it's an absolute reference and we measure other stuff using the physical definition of a second, like atomic clocks do. But why doesn't the speed of light have uncertainty? Isn't the speed of light something that's measured physically? Check out that at NIST. - The second and the speed of light are precisely defined, and the metre is then specified as a function of $c$ and the second. So when you experimentally measure the speed of light you are effectively measuring the length of the metre i.e. the experimental error is the error in the measurement of the metre not the error in the speed of light or the second. It may seem odd to treat the metre as variable and the speed of light as a fixed quantity, but it's not as odd as you may think. The speed of light is not just some number, it's a fundamental property of the universe and is related to its geometry. By contrast the metre is just a length that happens to be convenient for humans. See What is so special about speed of light? for more info. - It's a very weird convention to take the error to be in length... Thank you. –  The Quantum Physicist Jan 9 at 10:38 @TheQuantumPhysicist The speed of light was actually calculated (precisely) from Maxwell's Equation: $c = \frac{1}{\sqrt{\epsilon_o\mu_o}}$. This is constant for every frame (intertial or not). We found out that even if we go faster and faster, the speed of light remains to be $c$, it is our perception of length and time which keeps changing. Normal Newtonian Mechanics aren't valid! –  mikhailcazi Jan 12 at 6:53 @mikhailcazi The embarrassing part in this question is that it makes me look like a beginner in physics, while I'm a PhD :) –  The Quantum Physicist Jan 12 at 6:56 @TheQuantumPhysicist Haha, sorry then! My bad. :) –  mikhailcazi Jan 12 at 7:02 -1. This is wrong. The speed of light indeed fluctuates. The c is the MEAN speed of light over large distances. –  Anixx Jan 15 at 4:58 To repeat Wikipedia: The speed of light in vacuum, commonly denoted c, is a universal physical constant important in many areas of physics. Its value is exactly 299,792,458 metres per second, a figure that is exact because the length of the metre is defined from this constant and the international standard for time. In other words, it's exact because we have a definition of the second: the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom and the metre is the distance light travels in $1/299,792,458$ of a second. That leaves no room for error in the definition of the speed of light. - no you are biting your own tail. you say the metre is defined from the speed of light, then the speed of light is known to be [this constant] * 1m/1s. this constant must have been rounded for convenience, and the metre redefined from its original definition to fit the rounding. originally it was a fraction of earth's arc, so it cannot have anything to do with the speed of light. I doubt that it is the definition of the second that was rounded to fit this constant. But it doesn't answer the OP. the OP speaks of uncertainty, and speed of light, IS uncertain. 
–  v.oddou Jan 10 at 3:36 the measurements are said to ought to be the same whatever the direction of the light for example, and whatever the speed of the emitting particle. but, measurments are noisy, and some experiments even go as far as to measure statistical elements on this noise, like variance, according to directions. and I recall seeing some results where 1 direction had more variance. the conclusion being that maybe the universe is not isotropic. I think this is this kind of talk that OP expects. –  v.oddou Jan 10 at 3:38 As you can read in this Wikipedia article, it was decided recently to base all SI units on seven constants of nature. To be able to do so, these constants have to be set to absolute values. Therefore it was decided, that these constants are fixed without error margin at their commonly accepted values to derive all other SI units from those now fundamental constants. - In SI system, a meter is defined to be 1/299,792,458 light-second (in other words, the distance traveled by light in vacuum in 1/299,792,458 second), and the speed of light in vacuum therefore is defined to be 299,792,458 m/s. - Channeling Adrian Monk here, but why couldn't they have defined a meter to be exactly 1/300,000,000 light-second? –  Michael Jan 9 at 15:18 @Michael, The meter was originally defined as 1/10,000,000 of the distance from the equator to the north pole. The definition changed in 1983. –  Brian S Jan 9 at 15:22 @Michael Technically, they (CGPM) can. But meter is a widely used unit and therefore it is highly desirable to make the new definition as close to the historical definition as possible. –  Isidore Seville Jan 9 at 16:19 @Michael: It is only a coincidence that the meter is so close to 1/300k light-sec. The first proposal for the meter was the length of pendulum needed to have a period of 2 seconds (to make clock that can tick every second). Due to the variations in the Earth's gravity, it was ultimately decided to make it based on the circumference of the Earth. It is just luck that all three of these definitions are within rounding error of each other. –  Gabe Jan 9 at 20:28 The reason is that measurements of speed of light became very, very precise. Much more than measurements of Earth's diameter or any physical object like 1 metre rod. So it is better to settle on some fixed value of metres per second in c. Something has to be fixed, let it be something we can easily measure in any laboratory. - $c$ is a fundamental constant, so it has no uncertainty. I fact, lengths are defined using time and $c$. It's important to note that defining the light speed to be a fixed number is not just a matter of convention. It's a property of spacetime. Physicists take $c$ to be a fundamental constant because nature suggests so. You should read some introductory book on special relativity to understand how this property arises. I recommend Russel's ABC of Relativity and the first chapters of Rindler's Relativity. Of course you should have some background in newtonian mechanics and galilean relativity to understand it better. - $c$ is not a fundamental constant of nature. if the meter was defined independently of $c$ as was done before 1959, then $c$ would be a quantity to measure and there would be measurement error. the only truly fundamental constants of nature are the dimensionless ones, like $\alpha$ or $\frac{m_p}{m_e}$. but dimensionful constants are really only about the units used to express them. 
–  robert bristow-johnson Jan 10 at 2:11 GabrielF: "I[n] fact, lengths are defined using time and $c$." -- Right on!, +1. Just a nitpick concerning terminology: In fact, distance values (or also: values of quasi-distances) are defined using durations (namely: ping durations between participants who find constant ping durations between each other) and $c$. "not just a matter of convention. It's a property of spacetime." -- In order to characterize "spacetime" at all (especially, by geometric relations between participants, such as their (quasi-)distance ratios) it must be defined how to measure geometric relations. –  user12262 Jan 15 at 19:21 The speed of light indeed fluctuates in vacuum. A single photon can propagate slightly faster or slower than light. This can be interpreted as appearance of virtual photons ahead of the propagating one and consequent annihilation of the first one with one of the appeared. Only statistically the speed of light is constant. - In order to answer this question, one has to realize that the term "speed of light" has two components. There is the actual physical speed of light (electromagnetic radiation), and the value associated with it. It should be readily apparent that the value is not a constant because it depends on the system of units used. It is 186,000 miles per second in one system, 299,792,458 m/s in another system, etc.. If we redefine the length and/or the time, we obtain different values. However, since the actual speed of light depends only on the properties of the medium through which it propagates, therefore, the speed of light is "constant" (or absolute) in a homogeneous medium. - Guill:"[...] the value is not a constant because it depends on the system of units used." -- This statement appears inconsistent with common usage of this terminology, e.g. "to obtain the same physical value expressed in terms of a different unit" or "the value of a physical quantity Z is expressed as the product of a unit ... and a numerical factor". Maybe you're just missing the notion "numerical factor". –  user12262 Jan 15 at 19:10
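To make the definitional point of the answers above concrete (this only restates the SI definitions already quoted, nothing new is assumed):

\[
1\ \mathrm{m} \;\equiv\; \text{the distance light travels in vacuum in}\ \tfrac{1}{299\,792\,458}\ \mathrm{s}
\;\;\Longrightarrow\;\;
c = 299\,792\,458\ \mathrm{m/s}\ \text{exactly,}
\]

so an experimental "measurement of c" is in effect a realization of the metre, and the resulting error bar is attributed to the length standard rather than to c itself.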
https://www.jiskha.com/questions/73301/find-the-exact-value-of-sinx-2-if-cosx-2-3-and-270-x-360-a-1-3-b-1-3-c-sqrt-6-6
# Trig

Find the exact value of sin(x/2) if cos x = 2/3 and 270 < x < 360.
A) 1/3  B) -1/3  C) sqrt(6)/6  D) -sqrt(6)/6

C, since I KNOW cos x is always positive, but I don't know the work involved. I know the half-angle formula.

1. First of all, x/2 will be in the second quadrant, since x is in the fourth quadrant. The sine of x/2 will therefore be positive. Use the formula for sin(x/2) in terms of cos x:
sin(x/2) = sqrt([1 - cos(x)]/2) = sqrt([1 - 2/3]/2) = sqrt(1/6) = sqrt(6)/6
You got the right answer, but it seems to have been a lucky guess. Cos x is NOT always positive, but it is in this case.

2. Thank you. It wasn't really a guess: it was either C or D, and then I just knew it was positive, so that just leaves C.

3. Jon, perhaps it would help if you drew an x-y axis system with a unit radius vector in each of the four quadrants.
Quadrant 1: sin T = y/1 so +, cos T = x/1 so +, tan T = y/x so +
Quadrant 2: sin T = y/1 so +, cos T = x/1 so - (because x is - in quadrant 2), tan T = y/x so -
Quadrant 3: sin T = y/1 so -, cos T = x/1 so -, tan T = y/x so + (because top and bottom are both -)
Quadrant 4: sin T = y/1 so -, cos T = x/1 so +, tan T = y/x so -
sin has the same sign as its inverse csc, cos has the same sign as its inverse sec, and tan has the same sign as its inverse cot.

4. Quadrant 1: sin T = y/1 so +, cos T = x/1 so +, tan T = y/x so +
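Written out with the half-angle identity (a LaTeX restatement of the work above, using only the thread's own numbers):

\[
\sin\frac{x}{2} = \pm\sqrt{\frac{1-\cos x}{2}}
               = \pm\sqrt{\frac{1-\tfrac{2}{3}}{2}}
               = \pm\sqrt{\frac{1}{6}}
               = \pm\frac{\sqrt{6}}{6},
\]
\[
270^\circ < x < 360^\circ \;\Rightarrow\; 135^\circ < \frac{x}{2} < 180^\circ
\;\Rightarrow\; \sin\frac{x}{2} > 0
\;\Rightarrow\; \sin\frac{x}{2} = \frac{\sqrt{6}}{6}.
\]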
http://www.oalib.com/relative/4088090
Computer Science , 2015, Abstract: In conventional multi-armed bandits (MAB) and other reinforcement learning methods, the learner sequentially chooses actions and obtains a reward (which can be possibly missing, delayed or erroneous) after each taken action. This reward is then used by the learner to improve its future decisions. However, in numerous applications, ranging from personalized patient treatment to personalized web-based education, the learner does not obtain rewards after each action, but only after sequences of actions are taken, intermediate feedbacks are observed, and a final decision is made based on which a reward is obtained. In this paper, we introduce a new class of reinforcement learning methods which can operate in such settings. We refer to this class as staged multi-armed bandits (S-MAB). S-MAB proceeds in rounds, each composed of several stages; in each stage, the learner chooses an action and observes a feedback signal. Upon each action selection a feedback signal is observed, whilst the reward of the selected sequence of actions is only revealed after the learner selects a stop action that ends the current round. The reward of the round depends both on the sequence of actions and the sequence of observed feedbacks. The goal of the learner is to maximize its total expected reward over all rounds by learning to choose the best sequence of actions based on the feedback it gets about these actions. First, we define an oracle benchmark, which sequentially selects the actions that maximize the expected immediate reward. This benchmark is known to be approximately optimal when the reward sequence associated with the selected actions is adaptive submodular. Then, we propose our online learning algorithm, for which we prove that the regret is logarithmic in the number of rounds and linear in the number of stages with respect to the oracle benchmark.

Computer Science , 2013, Abstract: Stochastic multi-armed bandits solve the Exploration-Exploitation dilemma and ultimately maximize the expected reward. Nonetheless, in many practical problems, maximizing the expected reward is not the most desirable objective. In this paper, we introduce a novel setting based on the principle of risk-aversion where the objective is to compete against the arm with the best risk-return trade-off. This setting proves to be intrinsically more difficult than the standard multi-arm bandit setting due in part to an exploration risk which introduces a regret associated to the variability of an algorithm. Using variance as a measure of risk, we introduce two new algorithms, investigate their theoretical guarantees, and report preliminary empirical results.

Computer Science , 2015, Abstract: Thompson sampling is one of the earliest randomized algorithms for multi-armed bandits (MAB). In this paper, we extend the Thompson sampling to Budgeted MAB, where there is random cost for pulling an arm and the total cost is constrained by a budget. We start with the case of Bernoulli bandits, in which the random rewards (costs) of an arm are independently sampled from a Bernoulli distribution.
To implement the Thompson sampling algorithm in this case, at each round, we sample two numbers from the posterior distributions of the reward and cost for each arm, obtain their ratio, select the arm with the maximum ratio, and then update the posterior distributions. We prove that the distribution-dependent regret bound of this algorithm is $O(\ln B)$, where $B$ denotes the budget. By introducing a Bernoulli trial, we further extend this algorithm to the setting that the rewards (costs) are drawn from general distributions, and prove that its regret bound remains almost the same. Our simulation results demonstrate the effectiveness of the proposed algorithm. Computer Science , 2012, Abstract: We consider a multi-armed bandit problem where the decision maker can explore and exploit different arms at every round. The exploited arm adds to the decision maker's cumulative reward (without necessarily observing the reward) while the explored arm reveals its value. We devise algorithms for this setup and show that the dependence on the number of arms, k, can be much better than the standard square root of k dependence, depending on the behavior of the arms' reward sequences. For the important case of piecewise stationary stochastic bandits, we show a significant improvement over existing algorithms. Our algorithms are based on a non-uniform sampling policy, which we show is essential to the success of any algorithm in the adversarial setup. Finally, we show some simulation results on an ultra-wide band channel selection inspired setting indicating the applicability of our algorithms. Computer Science , 2013, Abstract: We study exploration in Multi-Armed Bandits in a setting where $k$ players collaborate in order to identify an $\epsilon$-optimal arm. Our motivation comes from recent employment of bandit algorithms in computationally intensive, large-scale applications. Our results demonstrate a non-trivial tradeoff between the number of arm pulls required by each of the players, and the amount of communication between them. In particular, our main result shows that by allowing the $k$ players to communicate only once, they are able to learn $\sqrt{k}$ times faster than a single player. That is, distributing learning to $k$ players gives rise to a factor $\sqrt{k}$ parallel speed-up. We complement this result with a lower bound showing this is in general the best possible. On the other extreme, we present an algorithm that achieves the ideal factor $k$ speed-up in learning performance, with communication only logarithmic in $1/\epsilon$. Djallel Bouneffouf Computer Science , 2013, Abstract: We present Exponentiated Gradient LINUCB, an algorithm for con-textual multi-armed bandits. This algorithm uses Exponentiated Gradient to find the optimal exploration of the LINUCB. Within a deliberately designed offline simulation framework we conduct evaluations with real online event log data. The experimental results demonstrate that our algorithm outperforms surveyed algorithms. Computer Science , 2012, Abstract: We study the problem of identifying the top $m$ arms in a multi-armed bandit game. Our proposed solution relies on a new algorithm based on successive rejects of the seemingly bad arms, and successive accepts of the good ones. This algorithmic contribution allows to tackle other multiple identifications settings that were previously out of reach. In particular we show that this idea of successive accepts and rejects applies to the multi-bandit best arm identification problem. 
Computer Science , 2008, Abstract: In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions. In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the "Lipschitz MAB problem". We present a complete solution for the multi-armed problem in this setting. That is, for every metric space (L,X) we define an isometry invariant which bounds from below the performance of Lipschitz MAB algorithms for X, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions.

Li Zhou Computer Science , 2015, Abstract: The nature of contextual bandits makes it suitable for many machine learning applications such as user modeling, Internet advertising, search engine, experiments optimization etc. In this survey we cover three different types of contextual bandits algorithms, and for each type we introduce several representative algorithms. We also compare the regrets and assumptions between these algorithms.

Computer Science , 2013, Abstract: We study the stochastic multi-armed bandit problem when one knows the value $\mu^{(\star)}$ of an optimal arm, as well as a positive lower bound on the smallest positive gap $\Delta$. We propose a new randomized policy that attains a regret {\em uniformly bounded over time} in this setting. We also prove several lower bounds, which show in particular that bounded regret is not possible if one only knows $\Delta$, and bounded regret of order $1/\Delta$ is not possible if one only knows $\mu^{(\star)}$.
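As a concrete illustration of the budgeted Thompson sampling rule summarized in the abstract above (sample from the reward and cost posteriors, play the arm with the largest sampled reward-to-cost ratio, update, stop when the budget is exhausted), here is a minimal Java sketch. The class and variable names, the arm probabilities and the unit-cost stopping rule are illustrative assumptions, not taken from the paper, and Beta sampling uses the integer-parameter order-statistic trick to stay dependency-free.

import java.util.Arrays;
import java.util.Random;

// Minimal sketch of Thompson sampling for Budgeted MAB with Bernoulli rewards and costs.
public class BudgetedThompson {
    static final Random RND = new Random();

    // Sample from Beta(a, b) for integer a, b >= 1:
    // the a-th smallest of (a + b - 1) independent uniforms has a Beta(a, b) distribution.
    static double sampleBeta(int a, int b) {
        double[] u = new double[a + b - 1];
        for (int i = 0; i < u.length; i++) u[i] = RND.nextDouble();
        Arrays.sort(u);
        return u[a - 1];
    }

    public static void main(String[] args) {
        double[] rewardProb = {0.3, 0.5, 0.7};   // unknown to the learner (assumed test values)
        double[] costProb   = {0.4, 0.6, 0.9};   // unknown to the learner (assumed test values)
        int k = rewardProb.length;
        int[] rSucc = new int[k], rFail = new int[k];  // reward posterior counts, Beta(1+succ, 1+fail)
        int[] cSucc = new int[k], cFail = new int[k];  // cost posterior counts
        double budget = 1000, spent = 0, totalReward = 0;

        while (spent < budget) {
            // Draw one reward and one cost sample per arm; pick the best sampled ratio.
            int best = 0;
            double bestRatio = -1;
            for (int i = 0; i < k; i++) {
                double r = sampleBeta(1 + rSucc[i], 1 + rFail[i]);
                double c = sampleBeta(1 + cSucc[i], 1 + cFail[i]);
                double ratio = r / Math.max(c, 1e-9);
                if (ratio > bestRatio) { bestRatio = ratio; best = i; }
            }
            // Pull the chosen arm: observe Bernoulli reward and cost, then update both posteriors.
            boolean reward = RND.nextDouble() < rewardProb[best];
            boolean cost   = RND.nextDouble() < costProb[best];
            if (reward) rSucc[best]++; else rFail[best]++;
            if (cost)   cSucc[best]++; else cFail[best]++;
            totalReward += reward ? 1 : 0;
            spent       += cost ? 1 : 0;
        }
        System.out.println("Total reward within budget " + budget + ": " + totalReward);
    }
}

The order-statistic sampler sorts a growing array on every draw, which is fine for a sketch but slow; a real implementation would use a Gamma-based Beta sampler or a statistics library instead.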
https://kerodon.net/tag/00PP
# Kerodon $\Newextarrow{\xRightarrow}{5,5}{0x21D2}$ $\newcommand\empty{}$ Example 2.5.3.4 ($2$-Simplices of the Differential Graded Nerve). Let $\operatorname{\mathcal{C}}$ be a differential graded category. Then an element of $\operatorname{N}_{2}^{\operatorname{dg}}(\operatorname{\mathcal{C}})$ is given by the following data: • A triple of objects $X_{0}, X_1, X_2 \in \operatorname{Ob}(\operatorname{\mathcal{C}})$. • A triple of $0$-cycles $f_{10} \in \operatorname{Hom}_{\operatorname{\mathcal{C}}}(X_0, X_1)_{0} \quad \quad f_{20} \in \operatorname{Hom}_{\operatorname{\mathcal{C}}}(X_0, X_2)_{0} \quad \quad f_{21} \in \operatorname{Hom}_{\operatorname{\mathcal{C}}}(X_1, X_2)_{0}.$ • A $1$-chain $f_{210} \in \operatorname{Hom}_{\operatorname{\mathcal{C}}}(X_0, X_2)_{1}$ satisfying the identity $\partial (f_{210}) = f_{20} - (f_{21} \circ f_{10}).$ Here the $1$-chain $f_{210}$ can be regarded as a witness to the assertion that the $0$-cycles $f_{20}$ and $f_{21} \circ f_{10}$ are homologous: that is, they represent the same element of the homology group $\mathrm{H}_0( \operatorname{Hom}_{\operatorname{\mathcal{C}}}(X_0, X_2) )$. We can present this data graphically by the diagram $\xymatrix@C =50pt@R=50pt{ & X_1 \ar [dr]^{f_{21}} \ar@ {=>}[]+<0pt,-15pt>;+<0pt,-60pt>^-{f_{210}} & \\ X_0 \ar [ur]^{f_{10}} \ar [rr]_{f_{20}} & & X_2. }$
https://latex.org/forum/viewtopic.php?f=5&t=13267
## LaTeX forum ⇒ General ⇒ subfiles | Nested Files (Topic is solved)

### subfiles | Nested Files

srvs wrote:

Dear all,

I have our project's folder set up as follows:

-Chapters
--Subchapters
-Figures
-Includes
main.tex

I want to use the subfiles package so that everyone can compile their own chapter. However, the subdirectories cause numerous "file not found" errors. Compiling a file from e.g. Chapters requires the figures to point to ../Figures instead of Figures. This is annoying because then all the paths would have to be changed when compiling the main document. The same goes for the preamble, which is also included from Includes. This isn't such a big problem, as I can just type the preamble in the main file instead, but all the path errors are annoying since they kind of nullify the advantages of the subfiles package.

main.tex

\documentclass{}
%
\usepackage{subfiles}
%
\begin{document}
\subfile{Chapters/abstract}
\end{document}

abstract.tex

\documentclass[../main.tex]{subfiles}
\begin{document}
\includegraphics{example.pdf}
\end{document}

Compiling from abstract.tex causes a file not found error, obviously. Ideally I'd be pointing to all the files with an explicit path; however, I can't just use C:\... as I'm not the only one compiling, and everyone has a different path. Is there some sort of way to use a variable that always points to the main directory depending on the user's directory setup, so that everyone can compile from wherever without changing paths of figures etc.?

My question is similar to http://tex.stackexchange.com/questions/ ... -sty-files but I couldn't figure out how to use the mentioned texinputs or texmf solutions.

### Re: subfiles | Nested Files

drm0hr wrote:

Did you ever figure out how to do this? I've run into the same problem and am curious if it is possible to include figures this way.

### subfiles | Nested Files (Topic is solved)

drm0hr wrote:

I have figured out a (messy) workaround. I am using the subfiles package and a slightly different file structure than you:

Chapters
--Chapter_1
---Figures
----image.jpg
---chapter_1.tex
main.tex

Again the issue was that the subfile "chapter_1.tex" needs to use the path "Figures/" to find the images, whereas "main.tex" needs to use "Chapters/Chapter_1/Figures/". I solved this by adding the following to the subfile:

chapter_1.tex

\documentclass[../../main.tex]{subfiles}
\let \originalcmd \graphicspath
\renewcommand{\graphicspath}[1]{\originalcmd{{Figures/}}}
\begin{document}
\graphicspath{{Chapters/Chapter_1/Figures/}}
...
\includegraphics{example.pdf}
...
\end{document}

and making no changes to the main file.

main.tex

\documentclass{}
%
\usepackage{subfiles}
%
\begin{document}
\subfile{Chapters/Chapter_1/chapter_1.tex}
\end{document}

Essentially what I did was tell the \graphicspath command to ignore all inputs and instead use "Figures/" as an argument. Since this reassignment is in the preamble of the subfile, it only takes place when the subfile is being compiled. When "main.tex" is compiled, the subfile's preamble is ignored and the proper \graphicspath{Chapters/Chapter_1/Figures/} is used.

### subfiles | Nested Files

wirylattice wrote:

Instead of using \renewcommand for \graphicspath you can pass multiple directories. Slightly less complicated:

\documentclass[../../main.tex]{subfiles}
\begin{document}
\graphicspath{{Figures/}{Chapters/Chapter_1/Figures/}}

In this way both Figures/ and Chapters/Chapter_1/Figures/ will be scanned for the file needed, though only one of them will be present depending on which file is being compiled.

### subfiles | Nested Files

bvkatwijk wrote:

I've figured out a different solution which may come in handy. Basically it's a way to make every subfile point to the main directory for a valid relative path if it's compiled on its own. This solution works for files as well as pictures. So suppose we have a directory structure like:

main.tex
folderA
--fileA.tex
--folderB
----fileB.tex

And we wish for fileA to be able to find fileB, either when it's compiled on its own or subfile'd by main.tex. This works for me:

main.tex:

%documentclass, packages and preamble
\begin{document}
\newcommand{\main}{.} %Command \main defined in document body instead of preamble
%(since subfiles use the main's preamble)
%so it's only defined in main.tex, not in subfiles
\subfile{folderA/fileA}
\end{document}

A subfile fileA.tex:

\documentclass[../main.tex]{subfiles}
%command \main will only be defined if it isn't already
\providecommand{\main}{..}
\begin{document}
%Insert fileB which is in folderB inside the current folderA
\subfile{\main/folderA/folderB/fileB}
\end{document}
2022-05-21 16:22:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8419010043144226, "perplexity": 2471.890797687874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539131.21/warc/CC-MAIN-20220521143241-20220521173241-00464.warc.gz"}
http://physics.stackexchange.com/questions/47908/are-physical-probabilities-also-quantized
# Are physical probabilities also quantized?

In physics, energy comes in quanta, i.e. in discrete units. Is it then reasonable that probability is also quantized, since energy is?
2014-04-23 10:10:59
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9668101668357849, "perplexity": 6936.198959415599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00624-ip-10-147-4-33.ec2.internal.warc.gz"}
https://socratic.org/questions/how-do-you-evaluate-a-4-b-2-if-a-3-and-b-9
# How do you evaluate a^4 – b^2 if a = 3 and b = 9?

Jan 4, 2016

${a}^{4} - {b}^{2} = 0$

#### Explanation:

Given $a = 3$ and $b = 9$.

Then ${a}^{4} = 3 \cdot 3 \cdot 3 \cdot 3 = 81$ and ${b}^{2} = 9 \cdot 9 = 81$.

So $81 - 81 = 0$.
2019-12-05 23:11:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8106836676597595, "perplexity": 1169.585332755329}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540482284.9/warc/CC-MAIN-20191205213531-20191206001531-00241.warc.gz"}
http://www.math.harvard.edu/archive/21b_fall_10/exhibits/lincoln/index.html
M A T H 2 1 B Mathematics Math21b Fall 2010 Linear Algebra and Differential Equations Exhibit: die Lincoln Matrix Course Head: Oliver Knill Office: SciCtr 434 The following 25 x 25 matrix encodes a well known person. 8 8 8 8 8 8 8 8 8 8 8 8 8 1 1 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 1 1 1 1 1 1 1 1 1 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 1 1 1 1 1 1 1 1 1 1 8 8 8 8 8 8 8 8 8 8 8 8 8 8 1 1 1 1 1 1 1 1 1 1 1 1 1 1 8 8 8 8 8 8 8 8 8 8 8 1 1 1 1 1 1 1 1 1 8 8 1 1 1 1 8 8 8 8 8 8 8 8 8 1 1 1 8 8 8 8 8 8 8 8 8 1 1 1 1 8 8 8 8 8 8 8 8 8 1 1 8 8 8 8 8 8 8 8 8 8 8 1 1 1 8 8 8 8 8 8 8 8 8 1 1 8 8 8 8 8 8 8 8 8 8 8 1 1 1 8 8 8 8 8 8 8 8 8 1 8 8 8 8 8 8 8 8 8 8 8 8 1 1 1 8 8 8 8 8 8 8 8 8 1 1 8 8 8 8 8 8 8 8 8 8 1 1 1 1 8 8 8 8 8 8 8 8 8 1 1 8 1 1 1 1 8 8 1 1 1 1 1 1 1 8 8 8 8 8 8 8 8 8 1 1 8 1 1 1 1 8 1 1 1 1 1 1 1 1 1 8 8 8 8 8 8 8 8 8 1 8 8 1 1 8 8 1 1 1 1 1 1 1 1 8 8 8 8 8 8 8 8 8 8 1 8 8 8 8 8 8 8 1 8 8 8 1 1 1 8 8 8 8 8 8 8 8 8 8 1 8 8 8 8 8 8 8 1 8 8 1 1 1 8 8 8 8 8 8 8 8 8 8 8 8 1 8 8 8 8 8 1 1 8 1 1 1 1 8 8 8 8 8 8 8 8 8 8 8 8 1 1 8 8 8 1 1 1 1 1 1 1 8 8 8 8 8 8 8 8 8 8 8 8 8 1 1 8 8 8 8 1 1 1 1 1 1 8 8 8 8 8 8 8 8 8 8 8 8 8 1 1 1 8 8 1 1 1 1 1 1 1 8 8 8 8 8 8 8 8 8 8 8 8 8 8 1 1 1 1 1 1 1 1 1 1 1 8 8 8 8 8 8 8 8 8 8 8 8 8 8 1 1 1 1 1 1 1 1 1 1 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 1 1 1 1 1 1 1 1 1 1 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 8 1 1 1 1 1 1 1 8 8 1 8 8 8 8 8 8 8 8 8 8 8 8 8 1 8 1 1 1 1 1 8 8 8 1 1 1 8 8 8 8 8 8 8 8 8 8 8 8 1 8 8 8 8 8 8 1 1 8 1 1 1 1 8 8 8 8 Here is the Mathematica code which generated the matrix: A=Import["lincolnmatrix.gif"]; B=A[[1]]; n=Length[B]; m=Length[B[[1]]]; U=Table[ B[[i,j,1]]+B[[i,j,2]]+B[[i,j,3]],{i,n},{j,m}]; f[x_]:=If[x<200,1,If[x<400,1,8]]; V=Table[f[U[[n-i+1,j]]],{i,n},{j,m}]; S=ListDensityPlot[V,Axes->False,Frame->False] and here is the source Please send questions and comments to math21b@fas.harvard.edu Math21b (Exam Group 1)| Oliver Knill | Fall 2010 | Department of Mathematics | Faculty of Art and Sciences | Harvard University
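For readers who want to reproduce the thresholding step without Mathematica, a rough Python sketch of the same idea might look as follows. The file name lincolnmatrix.gif and the 200/400 cut-offs are taken from the code above; the use of Pillow and NumPy, and everything else in this sketch, is an assumption rather than part of the original page.

import numpy as np
from PIL import Image

# Load the image and sum the RGB channels per pixel,
# mirroring U = Table[B[[i,j,1]] + B[[i,j,2]] + B[[i,j,3]], ...].
rgb = np.asarray(Image.open("lincolnmatrix.gif").convert("RGB"), dtype=int)
channel_sum = rgb.sum(axis=2)

# f[x_] := If[x < 200, 1, If[x < 400, 1, 8]] reduces to: below 400 -> 1, otherwise 8.
V = np.where(channel_sum < 400, 1, 8)

# The Mathematica code reverses the row order (U[[n - i + 1, j]]) before plotting.
V = np.flipud(V)

for row in V:
    print(" ".join(str(v) for v in row))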
2018-10-21 16:26:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27703046798706055, "perplexity": 3.6644413510941853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514162.67/warc/CC-MAIN-20181021161035-20181021182535-00104.warc.gz"}
http://www.mzan.com/article/48199153-python-text-loopsolved.shtml
# Python Text Loop (solved)

SenpaiPuppy (published 2018-01-11 02:59:11Z):

What I'm trying to do: when it gets to else and prints "error", I want it to loop back to choice1.

choice1 = input("Were would " + name + " like to go?\nThe Kitchen\nThe Couch\nOutside")
if choice1 == "The Kitchen":
    choices.append("The Kitchen")
    print(name + " walked towards The Kitchen.")
elif choice1 == "The Couch":
    choices.append("The Couch")
    print(name + " went and had sat on The Couch.")
elif choice1 == "Outside":
    choices.append("Outside")
    print(name + " put on their pack and went out the door.")
else:
    print("error")

If there's an error / it reaches else, I want it to loop back to choice1.

Olivier Pons replied (2018-01-11 08:37:40Z):

I'll try my best to answer the pythonic way (even though I don't consider myself a Python expert). Python is so great with lists and dictionaries that you can avoid things like switch/case constructs. Here's your comparison the Python way:

name = "Olivier"
possible_choices = {
    "The Kitchen": lambda name: "{} walked towards The Kitchen.".format(name),
    "The Couch": lambda name: "{} went and had sat on The Couch.".format(name),
    "Outside": lambda name: "{} put on their pack and went out the door.".format(name),
}
while True:
    choice = input("Were would {} like to go?\n{}\n>".format(
        name, '\n'.join(possible_choices)))
    if choice in possible_choices:
        print(possible_choices[choice](name))
        break  # break the loop
    print("error")  # loops

With that: you have no "switch/case" because Python makes it unnecessary; if you just add a key + lambda everything will work. It's shorter, thus easier to read, easier to maintain, and easier to understand, so it'll cost less to your company in the long run. Learn how to use lambdas (= anonymous functions), which are used in every modern language, and this "if choice in possible_choices" is so clear that it reads almost like English!

guichao replied:

while True:
    choice1 = raw_input("Were would " + name + " like to go?\nThe Kitchen\nThe Couch\nOutside")
    if choice1 == "The Kitchen":
        choices.append("The Kitchen")
        print(name + " walked towards The Kitchen.")
        break
    elif choice1 == "The Couch":
        choices.append("The Couch")
        print(name + " went and had sat on The Couch.")
        break
    elif choice1 == "Outside":
        choices.append("Outside")
        print(name + " put on their pack and went out the door.")
        break
    else:
        print("error")
        continue

You could use a while loop and have the if/elif statements make it escape the loop. Like this:

a = True
while a:
    choice1 = input("Were would " + name + " like to go?\nThe Kitchen\nThe Couch\nOutside")
    if choice1 == "The Kitchen":
        choices.append("The Kitchen")
        print(name + " walked towards The Kitchen.")
        a = False
    elif choice1 == "The Couch":
        choices.append("The Couch")
        print(name + " went and had sat on The Couch.")
        a = False
    elif choice1 == "Outside":
        choices.append("Outside")
        print(name + " put on their pack and went out the door.")
        a = False
    else:
        print("error")

a is a boolean set to True. The script enters a while loop that runs while a == True. All of the if/elif branches set a to False, causing the loop to end.
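One more way to look at the answers above (a sketch of my own, not from the thread): the prompt-and-retry logic can live in a small helper function, so the menu text and the validation loop are written only once. The function name ask_choice and the example menu below are illustrative assumptions.

def ask_choice(name, options):
    """Keep prompting until the reply matches one of the options, then return it."""
    prompt = "Where would {} like to go?\n{}\n> ".format(name, "\n".join(options))
    while True:
        choice = input(prompt)
        if choice in options:
            return choice
        print("error")  # invalid reply: loop back and ask again

choices = []
picked = ask_choice("SenpaiPuppy", ["The Kitchen", "The Couch", "Outside"])
choices.append(picked)
print("{} chose {}.".format("SenpaiPuppy", picked))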
2018-01-23 06:16:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22707965970039368, "perplexity": 11822.05548763105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891750.87/warc/CC-MAIN-20180123052242-20180123072242-00544.warc.gz"}
http://mail-archives.apache.org/mod_mbox/ant-dev/200504.mbox/%3COFF2E7B9A3.F09075FF-ON85256FE0.0064A471@harland.net%3E
ant-dev mailing list archives

From: GBartl...@harland.net
Subject: Suggested change to Move.java to check failonerror flag when unable to delete source file as part of move
Date: Mon, 11 Apr 2005 18:22:30 GMT

A couple of weeks back I sent the following to the ant user list without any response. I am thinking that the dev list would have been a better place for this request. Can someone here give this a read and let me know if this change can be accepted back into the ant source.

Thanks,
Gary Bartlett

Greetings

I have some ant scripts that manage archiving files between NT servers in our production environment. A while back the move task began failing with an error related to deleting the source file (see below):

[move] Warning: Unable to delete file \\server\foo\bar.log

Under ant 162 when this happens the move task stops processing the list of files to move - but the ant script does not fail. I have the move task configured with failonerror=false - but looking at the code it seems that this parameter is not honored by the move class. I made a couple of changes locally to have the move class check this flag and conditionally issue a warning as opposed to an abort - and was wondering if these changes could get committed into the base code. Here are the changes:

Move.java
193,194c244
< boolean failOnError = getFailOnError();
< String message = "Unable to delete "
---
> throw new BuildException("Unable to delete "
196,202c246
< + fromFile.getAbsolutePath()
< + " FailOnError is "+failOnError;
< if (failOnError) {
< throw new BuildException(message);
< } else {
< log(message);
< }
---
> + fromFile.getAbsolutePath());

Copy.java (added accessor to the failonerror flag)
269,271d267
< public boolean getFailOnError() {
< return failonerror;
< }
2017-07-22 09:26:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25589239597320557, "perplexity": 11805.606988344132}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423927.54/warc/CC-MAIN-20170722082709-20170722102709-00077.warc.gz"}
https://pwacker.com/fenchelrockafellar.html
# A visual explanation of the Fenchel–Rockafellar Theorem Posted on October 8, 2018 The Fenchel–Rockafellar Theorem (or Fenchel's duality theorem) is a really cool way of transforming a (potentially crazy complicated) optimization problem into its (often more well-behaved) dual problem. In this article, we will try to understand how this works (both intuitively and by looking closely at its proof). The exposition of the material is strongly based on Haim Brezis' book "Functional Analysis, Sobolev Spaces and Partial Differential Equations". The structure of this article is "tree-like": The current page is its main stem (covering only the most basic parts) but you are welcome to divert on side branches ("examples", "visualization", "rigorous proof" etc.). This is supposed to help you concentrate on the stuff you are actually interested in (so you do not need to skip large parts of the article). ## Mathematical prerequisites We assume basic familiarity with the following notions: • Banach spaces. A B.S. $$(E, \|\cdot\|)$$ is a complete normed vector space. In the following, all basic quantities lie in this Banach space $$E$$. We will also need its dual space $$E^*$$, which is the space of continuous linear functionals on $$E$$. The dual pairing between functionals $$f\in E^*$$ and elements $$x\in E$$ is denoted by $$\langle f, x\rangle$$ Note: If you aren't on good terms with Banach spaces, you can always mentally replace: • $$E \stackrel{\text{(read as)}}{\longmapsto} \mathbb R^n$$ • $$E^* \stackrel{\text{(read as)}}{\longmapsto} \mathbb R^n$$ • $$\langle f, x\rangle \stackrel{\text{(read as)}}{\longmapsto} f^T\cdot x$$ (the scalar product of vectors in $$\mathbb R^n$$) • Convexity of functions $$\phi: E\to \mathbb R\cup \{+\infty\}$$ and convex sets (in vector spaces) and the following definitions: • The domain of $$\phi: E\to \mathbb R\cup \{+\infty\}$$ is $$D(\phi) = \{x\in E: \phi(x) < +\infty\}$$ • The epigraph of $$\phi$$ is $$\operatorname{epi}(\phi) = \{[x, \lambda] \in E\times \mathbb R:~ \phi(x) \leq \lambda\}$$ • The Hahn-Banach separation theorem in the following form: Let $$E$$ be a Banach space. Given two convex sets $$A, B\subset E$$ with the following properties: 1. $$A\cap B = \emptyset$$ 2. One of the sets is open. (This is only necessary for infinite-dimensional $$E.$$) Then there is a closed hypersurface separating $$A$$ and $$B$$, i.e. there is a continuous, non-zero functional $$f\in E^*$$ and a number $$\alpha \in \mathbb R$$ such that $f(x) \begin{cases}\leq \alpha &\text{ for all} \quad x\in A\\\geq \alpha &\text{ for all} \quad x\in B\end{cases}$ ## Convex conjugate Let $$\phi: E\to \mathbb R\cup \{+\infty\}$$ be a function. Then its convex conjugate is a function $$\phi^*: E^*\to\mathbb R$$ defined as $\phi^*(f) = \sup_{x\in E}\{\langle f, x\rangle - \phi(x)\}.$ In our article we will always just use its negative version $$-\phi^*(f)$$ (this is not $$(-\phi)^*(f)$$) so we think of $$-\phi^*$$ as an inseparable unit for now. ## The Fenchel–Rockafellar Theorem: Statement Let $$\phi,\psi: E\to \mathbb R\cup \{+\infty\}$$ be two convex functions. Assume that there is a $$x_0\in D(\phi)\cap D(\psi)$$ such that $$\phi$$ is continuous at $$x_0$$. Then \begin{aligned}\inf_{x\in E}\{\phi(x) + \psi(x)\} &= \sup_{f\in E^*}\{-\phi^*(f)-\psi^*(-f)\} \\ &= \max_{f\in E^*}\{-\phi^*(f)-\psi^*(-f)\} \end{aligned} ## The Fenchel–Rockafellar Theorem: Why is it awesome? 
The Fenchel–Rockafellar Theorem translates a "primal" (vanilla) minimization problem into an (often more well-behaved) "dual" maximization problem. The cool part is that the maximum is guaranteed to exist. ## The Fenchel–Rockafellar Theorem: Why can we expect it to hold? A very rough idea is the following: Instead of minimizing $$\phi + \psi = \phi - (-\psi)$$, i.e. the signed distance between the graphs of $$\phi$$ and $$-\psi$$, we can also look for hypersurfaces separating the graphs and maximize the respective signed distances between the graphs and the hypersurface. ## The Fenchel–Rockafellar Theorem: Proof The main idea is to apply the Hahn-Banach theorem to separate the graphs of $$\phi$$ and $$-\psi$$.
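As a quick sanity check of the statement, consider a simple illustrative case (this worked example is not part of the original page): take \(E = \mathbb R\) and \(\phi(x) = \psi(x) = \tfrac{1}{2}x^2\). Then
\[
\phi^*(f) = \sup_{x\in\mathbb R}\{fx - \tfrac{1}{2}x^2\} = \tfrac{1}{2}f^2, \qquad \psi^*(-f) = \tfrac{1}{2}f^2,
\]
so the primal value is \(\inf_x\{\phi(x)+\psi(x)\} = \inf_x x^2 = 0\), while the dual value is \(\sup_f\{-\phi^*(f)-\psi^*(-f)\} = \sup_f\{-f^2\} = 0\), attained at \(f = 0\). The two values agree and the supremum is attained as a maximum, exactly as the theorem asserts (note that \(\phi\) is continuous everywhere, so the hypothesis on \(x_0\) is satisfied).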
2021-06-17 17:35:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9707176685333252, "perplexity": 368.57770255916444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630518.38/warc/CC-MAIN-20210617162149-20210617192149-00171.warc.gz"}
https://www.physicsforums.com/threads/do-fractional-spins-like-4-5-exist.951707/
# Do fractional spins like 4/5 exist?

In the case of spin 1/2, the state has to be rotated by 360° twice (720° in total) to recover the initial one. Consider a square made of arcs of great circles (equators) on a sphere, with the interior angle on the sphere chosen to be 108°. The sphere is then rolled along those arcs on a plane. The square hence draws segments with a 108° angle between them. On the plane this closes only if we roll the sphere twice along one segment, and it draws a pentagon. Hence the sphere had to be rotated 5/4 times 360°. Could this in some sense describe a spin 4/5 system?

kith replied:

No. The commutation relations for angular momentum operators imply integer or half-integer values. This is a basic result which is proven in probably all textbooks.

PeterDonis (Mentor) replied, quoting kith's statement that the commutation relations imply integer or half-integer values:

It's perhaps worth noting that this theorem holds in 3 or more dimensions, but not in 2 dimensions; in 2 dimensions there is a continuous range of allowed statistics, corresponding to a continuous range of angles $\theta$ that represent the phase shift on an exchange of particles. (Ordinary fermions and bosons correspond to $\theta = \pi$ and $\theta = 2 \pi$ respectively, which are the only possibilities in 3 or more dimensions.) Quasiparticles with "fractional" statistics are called "anyons", more here: https://en.wikipedia.org/wiki/Anyon Note that anyon statistics have been observed in real systems where the effective degrees of freedom are restricted to two dimensions, for example in the fractional Hall effect, as noted in the article.
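To make the point about the commutation relations concrete, here is a small numerical sketch (my own illustration, not from the thread) checking [Sx, Sy] = i Sz for the spin-1/2 representation built from the Pauli matrices (with hbar set to 1), and the fact that a 360° rotation only returns the state up to a sign:

import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators S_i = sigma_i / 2 (hbar = 1), built from the Pauli matrices.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# The angular momentum algebra: [Sx, Sy] = i Sz.
commutator = sx @ sy - sy @ sx
print(np.allclose(commutator, 1j * sz))            # True

# A 360-degree rotation about z multiplies a spin-1/2 state by -1;
# only the full 720-degree rotation is the identity again.
rot_360 = expm(-1j * 2 * np.pi * sz)
print(np.allclose(rot_360, -np.eye(2)))            # True
print(np.allclose(rot_360 @ rot_360, np.eye(2)))   # True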
2020-03-31 08:07:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7049350738525391, "perplexity": 464.2012820680382}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500331.13/warc/CC-MAIN-20200331053639-20200331083639-00178.warc.gz"}
https://labs.tib.eu/arxiv/?author=G.%20Gilmore
• We highlight the power of the Gaia DR2 in studying many fine structures of the Hertzsprung-Russell diagram (HRD). Gaia allows us to present many different HRDs, depending in particular on stellar population selections. We do not aim here for completeness in terms of types of stars or stellar evolutionary aspects. Instead, we have chosen several illustrative examples. We describe some of the selections that can be made in Gaia DR2 to highlight the main structures of the Gaia HRDs. We select both field and cluster (open and globular) stars, compare the observations with previous classifications and with stellar evolutionary tracks, and we present variations of the Gaia HRD with age, metallicity, and kinematics. Late stages of stellar evolution such as hot subdwarfs, post-AGB stars, planetary nebulae, and white dwarfs are also analysed, as well as low-mass brown dwarf objects. The Gaia HRDs are unprecedented in both precision and coverage of the various Milky Way stellar populations and stellar evolutionary phases. Many fine structures of the HRDs are presented. The clear split of the white dwarf sequence into hydrogen and helium white dwarfs is presented for the first time in an HRD. The relation between kinematics and the HRD is nicely illustrated. Two different populations in a classical kinematic selection of the halo are unambiguously identified in the HRD. Membership and mean parameters for a selected list of open clusters are provided. They allow drawing very detailed cluster sequences, highlighting fine structures, and providing extremely precise empirical isochrones that will lead to more insight in stellar physics. Gaia DR2 demonstrates the potential of combining precise astrometry and photometry for large samples for studies in stellar evolution and stellar population and opens an entire new area for HRD-based studies. • ### The Gaia-ESO Survey: Evidence of atomic diffusion in M67?(1804.06293) April 17, 2018 astro-ph.GA, astro-ph.SR Investigating the chemical homogeneity of stars born from the same molecular cloud at virtually the same time is very important for our understanding of the chemical enrichment of the interstellar medium and with it the chemical evolution of the Galaxy. One major cause of inhomogeneities in the abundances of open clusters is stellar evolution of the cluster members. In this work, we investigate variations in the surface chemical composition of member stars of the old open cluster M67 as a possible consequence of atomic diffusion effects taking place during the main-sequence phase. The abundances used are obtained from high-resolution UVES/FLAMES spectra within the framework of the Gaia-ESO Survey. We find that the surface abundances of stars on the main sequence decrease with increasing mass reaching a minimum at the turn-off. After deepening of the convective envelope in sub-giant branch stars, the initial surface abundances are restored. We found the measured abundances to be consistent with the predictions of stellar evolutionary models for a cluster with the age and metallicity of M67. Our findings indicate that atomic diffusion poses a non-negligible constraint on the achievable precision of chemical tagging methods. • ### The Gaia-ESO Survey: kinematical and dynamical study of four young open clusters(1803.01908) March 5, 2018 astro-ph.GA, astro-ph.SR Context. The origin and dynamical evolution of star clusters is an important topic in stellar astrophysics. 
Several models have been proposed to understand the formation of bound and unbound clusters and their evolution, and these can be tested by examining the kinematical and dynamical properties of clusters over a wide range of ages and masses. Aims. We use the Gaia-ESO Survey products to study four open clusters (IC 2602, IC 2391, IC 4665, and NGC 2547) that lie in the age range between 20 and 50 Myr. Methods. We employ the gravity index $\gamma$ and the equivalent width of the lithium line at 6708 $\AA$, together with effective temperature $\rm{T_{eff}}$, and the metallicity of the stars in order to discard observed contaminant stars. Then, we derive the cluster radial velocity dispersions $\sigma_c$, the total cluster mass $\rm{M}_{tot}$, and the half mass radius $r_{hm}$. Using the $Gaia$-DR1 TGAS catalogue, we independently derive the intrinsic velocity dispersion of the clusters from the astrometric parameters of cluster members. Results. The intrinsic radial velocity dispersions derived by the spectroscopic data are larger than those derived from the TGAS data, possibly due to the different masses of the considered stars. Using $\rm{M}_{tot}$ and $r_{hm}$ we derive the virial velocity dispersion $\sigma_{vir}$ and we find that three out of four clusters are supervirial. This result is in agreement with the hypothesis that these clusters are dispersing, as predicted by the "residual gas expulsion" scenario. However, recent simulations show that the virial ratio of young star clusters may be overestimated if it is determined using the global velocity dispersion, since the clusters are not fully relaxed. • ### Is the Milky Way still breathing? RAVE-Gaia streaming motions(1710.03763) Feb. 15, 2018 astro-ph.GA We use data from the Radial Velocity Experiment (RAVE) and the Tycho-Gaia astrometric solution catalogue (TGAS) to compute the velocity fields yielded by the radial (VR), azimuthal (Vphi) and vertical (Vz) components of associated Galactocentric velocity. We search in particular for variation in all three velocity components with distance above and below the disc midplane, as well as how each component of Vz (line-of-sight and tangential velocity projections) modifies the obtained vertical structure. To study the dependence of velocity on proper motion and distance we use two main samples: a RAVE sample including proper motions from the Tycho-2, PPMXL and UCAC4 catalogues, and a RAVE-TGAS sample with inferred distances and proper motions from the TGAS and UCAC5 catalogues. In both samples, we identify asymmetries in VR and Vz. Below the plane we find the largest radial gradient to be dVR / dR = -7.01 +- 0.61 km/s/kpc, in agreement with recent studies. Above the plane we find a similar gradient with dVR / dR = -9.42 +- 1.77 km/s/kpc. By comparing our results with previous studies, we find that the structure in Vz is strongly dependent on the adopted proper motions. Using the Galaxia Milky Way model, we demonstrate that distance uncertainties can create artificial wave-like patterns. In contrast to previous suggestions of a breathing mode seen in RAVE data, our results support a combination of bending and breathing modes, likely generated by a combination of external or internal and external mechanisms. • ### The Gaia-ESO Survey: open clusters in Gaia-DR1 - a way forward to stellar age calibration(1711.07699) Nov.
21, 2017 astro-ph.GA, astro-ph.SR We describe the methodologies that, taking advantage of Gaia-DR1 and the Gaia-ESO Survey data, enable the comparison of observed open star cluster sequences with stellar evolutionary models. The final, long-term goal is the exploitation of open clusters as age calibrators. We perform a homogeneous analysis of eight open clusters using the Gaia-DR1 TGAS catalogue for bright members, and information from the Gaia-ESO Survey for fainter stars. Cluster membership probabilities for the Gaia-ESO Survey targets are derived based on several spectroscopic tracers. The Gaia-ESO Survey also provides the cluster chemical composition. We obtain cluster parallaxes using two methods. The first one relies on the astrometric selection of a sample of bona fide members, while the other one fits the parallax distribution of a larger sample of TGAS sources. Ages and reddening values are recovered through a Bayesian analysis using the 2MASS magnitudes and three sets of standard models. Lithium depletion boundary (LDB) ages are also determined using literature observations and the same models employed for the Bayesian analysis. For all but one cluster, parallaxes derived by us agree with those presented in Gaia Collaboration et al. (2017), while a discrepancy is found for NGC 2516; we provide evidence supporting our own determination. Inferred cluster ages are robust against models and are generally consistent with literature values. The systematic parallax errors inherent in the Gaia DR1 data presently limit the precision of our results. Nevertheless, we have been able to place these eight clusters onto the same age scale for the first time, with good agreement between isochronal and LDB ages where there is overlap. Our approach appears promising and demonstrates the potential of combining Gaia and ground-based spectroscopic datasets. • ### The Gaia-ESO Survey: Churning through the Milky Way(1711.05751) Nov. 15, 2017 astro-ph.GA We attempt to determine the relative fraction of stars that have undergone significant radial migration by studying the orbital properties of metal-rich ([Fe/H]$>0.1$) stars within 2 kpc of the Sun using a sample of more than 3,000 stars selected from iDR4 of the Gaia-ESO Survey. We investigate the kinematic properties, such as velocity dispersion and orbital parameters, of stellar populations near the sun as a function of [Mg/Fe] and [Fe/H], which could show evidence of a major merger in the past history of the Milky Way. This was done using the stellar parameters from the Gaia-ESO Survey along with proper motions from PPMXL to determine distances, kinematics, and orbital properties for these stars to analyze the chemodynamic properties of stellar populations near the Sun. Analyzing the kinematics of the most metal-rich stars ([Fe/H]$>0.1$), we find that more than half have small eccentricities ($e<0.2$) or are on nearly circular orbits. Slightly more than 20\% of the metal-rich stars have perigalacticons $R_p>7$ kpc. We find that the highest [Mg/Fe], metal-poor populations have lower vertical and radial velocity dispersions compared to lower [Mg/Fe] populations of similar metallicity by $\sim10$ km s$^{-1}$. The median eccentricity increases linearly with [Mg/Fe] across all metallicities, while the perigalacticon decreases with increasing [Mg/Fe] for all metallicities. Finally, the most [Mg/Fe]-rich stars are found to have significant asymmetric drift and rotate more than 40 km s$^{-1}$ slower than stars with lower [Mg/Fe] ratios. 
While our results cannot constrain how far stars have migrated, we propose that migration processes are likely to have played an important role in the evolution of the Milky Way, with metal-rich stars migrating from the inner disk toward the solar neighborhood and past mergers potentially driving enhanced migration of older stellar populations in the disk. • ### The Gaia-ESO Survey: Matching Chemo-Dynamical Simulations to Observations of the Milky Way(1709.01523) Sept. 5, 2017 astro-ph.GA The typical methodology for comparing simulated galaxies with observational surveys is usually to apply a spatial selection to the simulation to mimic the region of interest covered by a comparable observational survey sample. In this work we compare this approach with a more sophisticated post-processing in which the observational uncertainties and selection effects (photometric, surface gravity and effective temperature) are taken into account. We compare a `solar neighbourhood analogue' region in a model Milky Way-like galaxy simulated with RAMSES-CH with fourth release Gaia-ESO survey data. We find that a simple spatial cut alone is insufficient and that observational uncertainties must be accounted for in the comparison. This is particularly true when the scale of uncertainty is large compared to the dynamic range of the data, e.g. in our comparison, the [Mg/Fe] distribution is affected much more than the more accurately determined [Fe/H] distribution. Despite clear differences in the underlying distributions of elemental abundances between simulation and observation, incorporating scatter into our simulation results to mimic observational uncertainty produces reasonable agreement. The quite complete nature of the Gaia-ESO survey means that the selection function has minimal impact on the distribution of observed age and metal abundances but this would become increasingly more important for surveys with narrower selection functions. • ### Climbing the cosmic ladder with stellar twins in RAVE with Gaia(1705.11049) July 23, 2017 astro-ph.GA, astro-ph.SR We apply the twin method to determine parallaxes to 232,545 stars of the RAVE survey using the parallaxes of Gaia DR1 as a reference. To search for twins in this large dataset, we apply the t-stochastic neighbour embedding t-SNE projection which distributes the data according to their spectral morphology on a two dimensional map. From this map we choose the twin candidates for which we calculate a chi^2 to select the best sets of twins. Our results show a competitive performance when compared to other model-dependent methods relying on stellar parameters and isochrones. The power of the method is shown by finding that the accuracy of our results is not significantly affected if the stars are normal or peculiar since the method is model free. We find twins for 60% of the RAVE sample which is not contained in TGAS or that have TGAS uncertainties which are larger than 20%. We could determine parallaxes with typical errors of 28%. We provide a complementary dataset for the RAVE stars not covered by TGAS, or that have TGAS uncertainties which are larger than 20%, with model-free parallaxes scaled to the Gaia measurements. • ### PLATO as it is: a legacy mission for Galactic archaeology(1706.03778) July 7, 2017 astro-ph.GA, astro-ph.SR Deciphering the assembly history of the Milky Way is a formidable task, which becomes possible only if one can produce high-resolution chrono-chemo-kinematical maps of the Galaxy.
Data from large-scale astrometric and spectroscopic surveys will soon provide us with a well-defined view of the current chemo-kinematical structure of the Milky Way, but will only enable a blurred view on the temporal sequence that led to the present-day Galaxy. As demonstrated by the (ongoing) exploitation of data from the pioneering photometric missions CoRoT, Kepler, and K2, asteroseismology provides the way forward: solar-like oscillating giants are excellent evolutionary clocks thanks to the availability of seismic constraints on their mass and to the tight age-initial-mass relation they adhere to. In this paper we identify five key outstanding questions relating to the formation and evolution of the Milky Way that will need precise and accurate ages for large samples of stars to be addressed, and we identify the requirements in terms of number of targets and the precision on the stellar properties that are needed to tackle such questions. By quantifying the asteroseismic yields expected from PLATO for red-giant stars, we demonstrate that these requirements are within the capabilities of the current instrument design, provided that observations are sufficiently long to identify the evolutionary state and allow robust and precise determination of acoustic-mode frequencies. This will allow us to harvest data of sufficient quality to reach a 10% precision in age. This is a fundamental pre-requisite to then reach the more ambitious goal of a similar level of accuracy, which will only be possible if we have to hand a careful appraisal of systematic uncertainties on age deriving from our limited understanding of stellar physics, a goal which conveniently falls within the main aims of PLATO's core science. • ### The Gaia-ESO Survey: double, triple and quadruple-line spectroscopic binary candidates(1707.01720) July 6, 2017 astro-ph.SR The Gaia-ESO Survey (GES) is a large spectroscopic survey that provides a unique opportunity to study the distribution of spectroscopic multiple systems among different populations of the Galaxy. We aim at detecting binarity/multiplicity for stars targeted by the GES from the analysis of the cross-correlation functions (CCFs) of the GES spectra with spectral templates. We develop a method based on the computation of the CCF successive derivatives to detect multiple peaks and determine their radial velocities, even when the peaks are strongly blended. The parameters of the detection of extrema (DOE) code have been optimized for each GES GIRAFFE and UVES setup to maximize detection. This code therefore allows to automatically detect multiple line spectroscopic binaries (SBn, n>1). We apply this method on the fourth GES internal data release and detect 354 SBn candidates (342 SB2, 11 SB3 and even one SB4), including only 9 SB2 known in the literature. This implies that about 98% of these SBn candidates are new (because of their faint visual magnitude that can reach V=19). Visual inspection of the SBn candidate spectra reveals that the most probable candidates have indeed a composite spectrum. Among SB2 candidates, an orbital solution could be computed for two previously unknown binaries: 06404608+0949173 (known as V642 Mon) in NGC 2264 and 19013257-0027338 in Berkeley 81. A detailed analysis of the unique SB4 (four peaks in the CCF) reveals that HD 74438 in the open cluster IC 2391 is a physically bound stellar quadruple system. The SB candidates belonging to stellar clusters are reviewed in detail to discard false detections. 
We warn against the use of atmospheric parameters for these system components rather than by SB-specific pipelines. Our implementation of an automatic detection of spectroscopic binaries within the GES has allowed an efficient discovery of many new multiple systems. • Parallaxes for 331 classical Cepheids, 31 Type II Cepheids and 364 RR Lyrae stars in common between Gaia and the Hipparcos and Tycho-2 catalogues are published in Gaia Data Release 1 (DR1) as part of the Tycho-Gaia Astrometric Solution (TGAS). In order to test these first parallax measurements of the primary standard candles of the cosmological distance ladder, that involve astrometry collected by Gaia during the initial 14 months of science operation, we compared them with literature estimates and derived new period-luminosity ($PL$), period-Wesenheit ($PW$) relations for classical and Type II Cepheids and infrared $PL$, $PL$-metallicity ($PLZ$) and optical luminosity-metallicity ($M_V$-[Fe/H]) relations for the RR Lyrae stars, with zero points based on TGAS. The new relations were computed using multi-band ($V,I,J,K_{\mathrm{s}},W_{1}$) photometry and spectroscopic metal abundances available in the literature, and applying three alternative approaches: (i) by linear least squares fitting the absolute magnitudes inferred from direct transformation of the TGAS parallaxes, (ii) by adopting astrometric-based luminosities, and (iii) using a Bayesian fitting approach. TGAS parallaxes bring a significant added value to the previous Hipparcos estimates. The relations presented in this paper represent first Gaia-calibrated relations and form a "work-in-progress" milestone report in the wait for Gaia-only parallaxes of which a first solution will become available with Gaia's Data Release 2 (DR2) in 2018. • ### Very metal-poor stars observed by the RAVE survey(1704.05695) April 19, 2017 astro-ph.GA, astro-ph.SR We present a novel analysis of the metal-poor star sample in the complete Radial Velocity Experiment (RAVE) Data Release 5 catalog with the goal of identifying and characterizing all very metal-poor stars observed by the survey. Using a three-stage method, we first identified the candidate stars using only their spectra as input information. We employed an algorithm called t-SNE to construct a low-dimensional projection of the spectrum space and isolate the region containing metal-poor stars. Following this step, we measured the equivalent widths of the near-infrared CaII triplet lines with a method based on flexible Gaussian processes to model the correlated noise present in the spectra. In the last step, we constructed a calibration relation that converts the measured equivalent widths and the color information coming from the 2MASS and WISE surveys into metallicity and temperature estimates. We identified 877 stars with at least a 50% probability of being very metal-poor $(\rm [Fe/H] < -2\,\rm dex)$, out of which 43 are likely extremely metal-poor $(\rm [Fe/H] < -3\,\rm dex )$. The comparison of the derived values to a small subsample of stars with literature metallicity values shows that our method works reliably and correctly estimates the uncertainties, which typically have values $\sigma_{\rm [Fe/H]} \approx 0.2\,\mathrm{dex}$. In addition, when compared to the metallicity results derived using the RAVE DR5 pipeline, it is evident that we achieve better accuracy than the pipeline and therefore more reliably evaluate the very metal-poor subsample. 
Based on the repeated observations of the same stars, our method gives very consistent results. The method used in this work can also easily be extended to other large-scale data sets, including the data from the Gaia mission and the upcoming 4MOST survey. • ### The Gaia-ESO Survey: Exploring the complex nature and origins of the Galactic bulge populations(1704.03325) April 11, 2017 astro-ph.GA Abridged: We used the fourth internal data release of the Gaia-ESO survey to characterize the bulge chemistry, spatial distribution, kinematics, and to compare it chemically with the thin and thick disks. The sample consists of ~2500 red clump stars in 11 bulge fields ($-10^\circ\leq l\leq+8^\circ$ and $-10^\circ\leq b\leq-4^\circ$), and a set of ~6300 disk stars selected for comparison. The bulge MDF is confirmed to be bimodal across the whole sampled area, with metal-poor stars dominating at high latitudes. The metal-rich stars exhibit bar-like kinematics and display a bimodality in their magnitude distribution, a feature which is tightly associated with the X-shape bulge. They overlap with the metal-rich end of the thin disk sequence in the [Mg/Fe] vs. [Fe/H] plane. Metal-poor bulge stars have a more isotropic hot kinematics and do not participate in the X-shape bulge. With similar Mg-enhancement levels, the position of the metal-poor bulge sequence "knee" is observed at [Fe/H]$_{knee}=-0.37\pm0.09$, being 0.06 dex higher than that of the thick disk. It suggests a higher SFR for the bulge than for the thick disk. Finally, we present a chemical evolution model that suitably fits the whole bulge sequence by assuming a fast ($<1$ Gyr) intense burst of stellar formation at early epochs. We associate metal-rich stars with the B/P bulge formed from the secular evolution of the early thin disk. On the other hand, the metal-poor subpopulation might be the product of an early prompt dissipative collapse dominated by massive stars. Nevertheless, our results do not allow us to firmly rule out the possibility that these stars come from the secular evolution of the early thick disk. This is the first time that an analysis of the bulge MDF and $\alpha$-abundances has been performed in a large area on the basis of a homogeneous, fully spectroscopic analysis of high-resolution, high S/N data. • ### The Gaia-ESO Survey: Galactic evolution of sulphur and zinc(1704.02981) April 10, 2017 astro-ph.GA, astro-ph.SR Due to their volatile nature, when sulfur and zinc are observed in external galaxies, their determined abundances represent the gas-phase abundances in the interstellar medium. This implies that they can be used as tracers of the chemical enrichment of matter in the Universe at high redshift. Comparable observations in stars are more difficult and, until recently, plagued by small number statistics. We wish to exploit the Gaia ESO Survey (GES) data to study the behaviour of sulfur and zinc abundances of a large number of Galactic stars, in a homogeneous way. By using the UVES spectra of the GES sample, we are able to assemble a sample of 1301 Galactic stars, including stars in open and globular clusters in which both sulfur and zinc were measured. We confirm the results from the literature that sulfur behaves as an alpha-element. We find a large scatter in [Zn/Fe] ratios among giant stars around solar metallicity. The lower ratios are observed in giant stars at Galactocentric distances less than 7.5 kpc. No such effect is observed among dwarf stars, since they do not extend to that radius.
Given the sample selection, giants and dwarfs are observed at different Galactic locations, and it is plausible, and compatible with simple calculations, that Zn-poor giants trace a younger population more polluted by SN Ia yields. It is necessary to extend observations in order to observe both giants and dwarfs at the same Galactic location. Further theoretical work on the evolution of zinc is also necessary. • ### The Mass Distribution of Population III Stars(1511.03428) April 3, 2017 astro-ph.SR Extremely metal-poor stars are uniquely informative on the nature of massive Population III stars. Modulo a few elements that vary with stellar evolution, the present-day photospheric abundances observed in extremely metal-poor stars are representative of their natal gas cloud composition. For this reason, the chemistry of extremely metal-poor stars closely reflects the nucleosynthetic yields of supernovae from massive Population III stars. Here we collate detailed abundances of 53 extremely metal-poor stars from the literature and infer the masses of their Population III progenitors. We fit a simple initial mass function to a subset of 29 of the inferred Population III star masses, and find that the mass distribution is well-represented by a power law IMF with exponent $\alpha = 2.35^{+0.29}_{-0.24}$. The inferred maximum progenitor mass for supernovae from massive Population III stars is $M_{\rm{max}} = 87^{+13}_{-33}$ M$_\odot$, and we find no evidence in our sample for a contribution from stars with masses above $\sim$120 M$_\odot$. The minimum mass is strongly consistent with the theoretical lower mass limit for Population III supernovae. We conclude that the IMF for massive Population III stars is consistent with the initial mass function of present-day massive stars and there may well have formed stars much below the supernova mass limit that could have survived to the present day. • Context. The first Gaia Data Release contains the Tycho-Gaia Astrometric Solution (TGAS). This is a subset of about 2 million stars for which, besides the position and photometry, the proper motion and parallax are calculated using Hipparcos and Tycho-2 positions in 1991.25 as prior information. Aims. We investigate the scientific potential and limitations of the TGAS component by means of the astrometric data for open clusters. Methods. Mean cluster parallax and proper motion values are derived taking into account the error correlations within the astrometric solutions for individual stars, an estimate of the internal velocity dispersion in the cluster, and, where relevant, the effects of the depth of the cluster along the line of sight. Internal consistency of the TGAS data is assessed. Results. Values given for standard uncertainties are still inaccurate and may lead to unrealistic unit-weight standard deviations of least squares solutions for cluster parameters. Reconstructed mean cluster parallax and proper motion values are generally in very good agreement with earlier Hipparcos-based determination, although the Gaia mean parallax for the Pleiades is a significant exception. We have no current explanation for that discrepancy. Most clusters are observed to extend to nearly 15 pc from the cluster centre, and it will be up to future Gaia releases to establish whether those potential cluster-member stars are still dynamically bound to the clusters. Conclusions.
The Gaia DR1 provides the means to examine open clusters far beyond their more easily visible cores, and can provide membership assessments based on proper motions and parallaxes. A combined HR diagram shows the same features as observed before using the Hipparcos data, with clearly increased luminosities for older A and F dwarfs. • ### The Gaia-ESO Survey: radial distribution of abundances in the Galactic disc from open clusters and young field stars(1703.00762) March 2, 2017 astro-ph.GA, astro-ph.SR The spatial distribution of elemental abundances in the disc of our Galaxy gives insights both on its assembly process and subsequent evolution, and on the stellar nucleogenesis of the different elements. Gradients can be traced using several types of objects as, for instance, (young and old) stars, open clusters, HII regions, planetary nebulae. We aim at tracing the radial distributions of abundances of elements produced through different nucleosynthetic channels -the alpha-elements O, Mg, Si, Ca and Ti, and the iron-peak elements Fe, Cr, Ni and Sc - by using the Gaia-ESO idr4 results of open clusters and young field stars. From the UVES spectra of member stars, we determine the average composition of clusters with ages >0.1 Gyr. We derive statistical ages and distances of field stars. We trace the abundance gradients using the cluster and field populations and we compare them with a chemo-dynamical Galactic evolutionary model. Results. The adopted chemo-dynamical model, with the new generation of metallicity-dependent stellar yields for massive stars, is able to reproduce the observed spatial distributions of abundance ratios, in particular the abundance ratios of [O/Fe] and [Mg/Fe] in the inner disc (5 kpc<RGC <7 kpc), with their differences, that were usually poorly explained by chemical evolution models. Often, oxygen and magnesium are considered as equivalent in tracing alpha-element abundances and in deducing, e.g., the formation time-scales of different Galactic stellar populations. In addition, often [alpha/Fe] is computed combining several alpha-elements. Our results indicate, as expected, a complex and diverse nucleosynthesis of the various alpha-elements, in particular in the high metallicity regimes, pointing towards a different origin of these elements and highlighting the risk of considering them as a single class with common features. • ### The Gaia-ESO Survey: low-alpha element stars in the Galactic Bulge(1702.04500) Feb. 15, 2017 astro-ph.GA We take advantage of the Gaia-ESO Survey iDR4 bulge data to search for abundance anomalies that could shed light on the composite nature of the Milky Way bulge. The alpha-elements (Mg, Si, and whenever available, Ca) abundances, and their trends with Fe abundances have been analysed for a total of 776 bulge stars. In addition, the aluminum abundances and their ratio to Fe and Mg have also been examined. Our analysis reveals the existence of low-alpha element abundance stars with respect to the standard bulge sequence in the [alpha/Fe] vs. [Fe/H] plane. 18 objects present deviations in [alpha/Fe] ranging from 2.1 to 5.3 sigma with respect to the median standard value. Those stars do not show Mg-Al anti-correlation patterns. Incidentally, this sign of the existence of multiple stellar populations is reported firmly for the first time for the bulge globular cluster NGC 6522. The identified low-alpha abundance stars have chemical patterns compatible with those of the thin disc. 
Their link with massive dwarf galaxies accretion seems unlikely, as larger deviations in alpha abundance and Al would be expected. The vision of a bulge composite nature and a complex formation process is reinforced by our results. The used approach, a multi-method and model-driven analysis of high resolution data seems crucial to reveal this complexity. • ### Gaia-ESO Survey: global properties of clusters Trumpler 14 and 16 in the Carina Nebula(1702.04776) Feb. 15, 2017 astro-ph.GA, astro-ph.SR We present the first extensive spectroscopic study of the global population in star clusters Trumpler~16, Trumpler~14 and Collinder~232 in the Carina Nebula, using data from the Gaia-ESO Survey, down to solar-mass stars. In addition to the standard homogeneous Survey data reduction, a special processing was applied here because of the bright nebulosity surrounding Carina stars. We find about four hundred good candidate members ranging from OB types down to slightly sub-solar masses. About one-hundred heavily-reddened early-type Carina members found here were previously unrecognized or poorly classified, including two candidate O stars and several candidate Herbig Ae/Be stars. Their large brightness makes them useful tracers of the obscured Carina population. The spectroscopically-derived temperatures for nearly 300 low-mass members allows the inference of individual extinction values, and the study of the relative placement of stars along the line of sight. We find a complex spatial structure, with definite clustering of low-mass members around the most massive stars, and spatially-variable extinction. By combining the new data with existing X-ray data we obtain a more complete picture of the three-dimensional spatial structure of the Carina clusters, and of their connection to bright and dark nebulosity, and UV sources. The identification of tens of background giants enables us also to determine the total optical depth of the Carina nebula along many sightlines. We are also able to put constraints on the star-formation history of the region, with Trumpler~14 stars found to be systematically younger than stars in other sub-clusters. We find a large percentage of fast-rotating stars among Carina solar-mass members, which provide new constraints on the rotational evolution of pre-main-sequence stars in this mass range. • ### The Gaia-ESO Survey: the present-day radial metallicity distribution of the Galactic disc probed by pre-main-sequence clusters(1702.03461) Feb. 11, 2017 astro-ph.GA, astro-ph.SR The radial metallicity distribution in the Galactic thin disc represents a crucial constraint for modelling disc formation and evolution. Open clusters allow us to derive both the radial metallicity distribution and its evolution over time. In this paper we perform the first investigation of the present-day radial metallicity distribution based on [Fe/H] determinations in late type members of pre-main-sequence clusters. Because of their youth, these clusters are therefore essential for tracing the current inter-stellar medium metallicity. We used the products of the Gaia-ESO Survey analysis of 12 young regions (age<100 Myr), covering Galactocentric distances from 6.67 to 8.70 kpc. For the first time, we derived the metal content of star forming regions farther than 500 pc from the Sun. Median metallicities were determined through samples of reliable cluster members. 
For ten clusters the membership analysis is discussed in the present paper, while for the other two clusters (Chamaeleon I and Gamma Velorum) we adopted the members identified in our previous works. All the pre-main-sequence clusters considered in this paper have close-to-solar or slightly sub-solar metallicities. The radial metallicity distribution traced by these clusters is almost flat, with the innermost star-forming regions having [Fe/H] values that are 0.10-0.15 dex lower than the majority of the older clusters located at similar Galactocentric radii. This homogeneous study of the present-day radial metallicity distribution in the Galactic thin disc favours models that predict a flattening of the radial gradient over time. On the other hand, the decrease of the average [Fe/H] at young ages is not easily explained by the models. Our results reveal a complex interplay of several processes (e.g. star formation activity, initial mass function, supernova yields, gas flows) that controlled the recent evolution of the Milky Way. • ### The $Gaia$-ESO Survey: the inner disk intermediate-age open cluster NGC 6802(1702.01109) Feb. 3, 2017 astro-ph.GA, astro-ph.SR Milky Way open clusters are very diverse in terms of age, chemical composition, and kinematic properties. Intermediate-age and old open clusters are less common, and it is even harder to find them inside the solar Galactocentric radius, due to the high mortality rate and strong extinction inside this region. NGC 6802 is one of the inner disk open clusters (IOCs) observed by the $Gaia$-ESO survey (GES). This cluster is an important target for calibrating the abundances derived in the survey due to the kinematic and chemical homogeneity of the members in open clusters. Using the measurements from $Gaia$-ESO internal data release 4 (iDR4), we identify 95 main-sequence dwarfs as cluster members from the GIRAFFE target list, and eight giants as cluster members from the UVES target list. The dwarf cluster members have a median radial velocity of $13.6\pm1.9$ km s$^{-1}$, while the giant cluster members have a median radial velocity of $12.0\pm0.9$ km s$^{-1}$ and a median [Fe/H] of $0.10\pm0.02$ dex. The color-magnitude diagram of these cluster members suggests an age of $0.9\pm0.1$ Gyr, with $(m-M)_0=11.4$ and $E(B-V)=0.86$. We perform the first detailed chemical abundance analysis of NGC 6802, including 27 elemental species. To gain a more general picture of IOCs, the measurements of NGC 6802 are compared with those of other IOCs previously studied by GES, that is, NGC 4815, Trumpler 20, NGC 6705, and Berkeley 81. NGC 6802 shows C, N, Na, and Al abundances similar to those of other IOCs. These elements are compared with nucleosynthetic models as a function of cluster turn-off mass. The $\alpha$, iron-peak, and neutron-capture elements are also explored in a self-consistent way. • ### The Gaia-ESO Survey: Structural and dynamical properties of the young cluster Chamaeleon I(1701.03741) Jan. 13, 2017 astro-ph.GA, astro-ph.SR The young (~2 Myr) cluster Chamaeleon I is one of the closest laboratories for studying the early stages of star cluster dynamics in a low-density environment. We studied its structural and kinematical properties, combining parameters from the high-resolution spectroscopic survey Gaia-ESO with data from the literature. 
Our main result is the evidence of a large discrepancy between the velocity dispersion ($\sigma = 1.14 \pm 0.35$ km s$^{-1}$) of the stellar population and the dispersion of the pre-stellar cores (~0.3 km s$^{-1}$) derived from submillimeter observations. The origin of this discrepancy, which has been observed in other young star clusters, is not clear. It may be due either to the effect of the magnetic field on the protostars and the filaments, or to the dynamical evolution of stars driven by two-body interactions. Furthermore, the analysis of the kinematic properties of the stellar population reveals a significant velocity shift (~1 km s$^{-1}$) between the two sub-clusters located around the North and South main clouds. This result further supports a scenario where clusters form from the evolution of multiple substructures rather than from a monolithic collapse. Using three independent spectroscopic indicators (the gravity indicator $\gamma$, the equivalent width of the Li line, and the H$\alpha$ 10\% width), we performed a new membership selection. We found six new cluster members located in the outer region of the cluster. Starting from the positions and masses of the cluster members, we derived the level of substructure $Q$, the surface density $\Sigma$, and the level of mass segregation $\Lambda_{MSR}$ of the cluster. The comparison between these structural properties and the results of N-body simulations suggests that the cluster formed in a low-density environment, in virial equilibrium or a supervirial state, and highly substructured. • ### The Gaia-ESO Survey: the inner disk, intermediate-age open cluster Trumpler 23(1611.00859) Jan. 10, 2017 astro-ph.GA, astro-ph.SR Context: Trumpler 23 is a moderately populated, intermediate-age open cluster within the solar circle at a Galactocentric radius of ~6 kpc. It is in a crowded field very close to the Galactic plane and the color-magnitude diagram shows significant field contamination and possible differential reddening; it is a relatively understudied cluster for these reasons, but its location makes it a key object for determining Galactic abundance distributions. Aims: New data from the Gaia-ESO Survey enable the first-ever radial velocity and spectroscopic metallicity measurements for this cluster. We aim to use velocities to isolate cluster members, providing more leverage for determining cluster parameters. Methods: Gaia-ESO Survey data for 167 potential members have yielded radial velocity measurements, which were used to determine the systemic velocity of the cluster and membership of individual stars. Atmospheric parameters were also used as a check on membership when available. Literature photometry was used to re-determine cluster parameters based on radial velocity member stars only; theoretical isochrones are fit in the V, V-I diagram. Cluster abundance measurements of ten radial-velocity member stars with high-resolution spectroscopy are presented for 24 elements. These abundances have been compared to local disk stars, and where possible placed within the context of literature gradient studies. Results: We find Trumpler 23 to have an age of $0.80 \pm 0.10$ Gyr, significant differential reddening with an estimated mean cluster E(V-I) of $1.02^{+0.14}_{-0.09}$, and an apparent distance modulus of $14.15 \pm 0.20$. We find an average cluster metallicity of [Fe/H] = $0.14 \pm 0.03$ dex, a solar [alpha/Fe] abundance, and notably subsolar [s-process/Fe] abundances. 
• ### Gaia FGK Benchmark stars: Opening the black box of stellar element abundance determination(1612.05013) Dec. 15, 2016 astro-ph.GA, astro-ph.SR Gaia and its complementary spectroscopic surveys combined will yield the most comprehensive database of kinematic and chemical information of stars in the Milky Way. The Gaia FGK benchmark stars play a central role in this matter as they are calibration pillars for the atmospheric parameters and chemical abundances for various surveys. The spectroscopic analyses of the benchmark stars are done by combining different methods, and the results will be affected by the systematic uncertainties inherent in each method. In this paper we explore some of these systematic uncertainties. We determined line abundances of Ca, Cr, Mn and Co for four benchmark stars using six different methods. We changed the default input parameters of the different codes in a systematic way and found in some cases significant differences between the results. Since there is no consensus on the correct values for many of these default parameters, we urge the community to raise discussions towards standard input parameters that could alleviate the difference in abundances obtained by different methods. In this work we provide quantitative estimates of uncertainties in elemental abundances due to the effect of differing technical assumptions in spectrum modelling. • ### Gaia data release 1, the photometric data(1612.02952) Dec. 9, 2016 astro-ph.IM Context. This paper presents an overview of the photometric data that are part of the first Gaia data release. Aims. The principles of the processing and the main characteristics of the Gaia photometric data are presented. Methods. The calibration strategy is outlined briefly and the main properties of the resulting photometry are presented. Results. Relations with other broadband photometric systems are provided. The overall precision for the Gaia photometry is shown to be at the milli-magnitude level and has a clear potential to improve further in future releases.
2020-12-04 05:35:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5482310056686401, "perplexity": 2051.116216088756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141733122.72/warc/CC-MAIN-20201204040803-20201204070803-00647.warc.gz"}
https://mathdada.com/rectangular-cartesian-coordinates-in-3d/
Rectangular Cartesian Coordinates in Three-Dimensional Geometry

Rectangular Cartesian Coordinates

In solid analytical geometry, the position of a point is determined by its coordinates with reference to three axes in space. Let us take three mutually perpendicular straight lines X'OX, Y'OY and Z'OZ meeting at O. Let P be a point in space. Draw PN perpendicular to the plane XOY and NA perpendicular to OX. Let OA = x, AN = y and NP = z. The position of the point P is determined by these three distances, called the coordinates (x, y, z) with reference to the three axes OX, OY, OZ, called the x-axis, y-axis and z-axis respectively. The coordinates of the origin O are (0, 0, 0). The planes YOZ, ZOX, XOY are called the co-ordinate planes. Through P draw planes parallel to the co-ordinate planes. These planes combined with the three co-ordinate planes form a parallelepiped having six rectangular faces. x = PM = length of the perpendicular from P on the yz-plane = AO, y = PL = length of the perpendicular from P on the zx-plane = BO, z = PN = length of the perpendicular from P on the xy-plane = CO. On the plane YOZ, x = 0; on the plane ZOX, y = 0 and on the plane XOY, z = 0. Thus the locus of the points for which x = 0 is the yz-plane; for which y = 0 it is the zx-plane and for which z = 0 it is the xy-plane. If OP be the diagonal of the parallelepiped, then $O{{P}^{2}}=O{{N}^{2}}+N{{P}^{2}}=O{{A}^{2}}+A{{N}^{2}}+N{{P}^{2}}={{x}^{2}}+{{y}^{2}}+{{z}^{2}}$. A plane in space divides it into two parts, each of which is called a half space.

Distance Between Two Points

Let the coordinates of the two points P and Q be (x1, y1, z1) and (x2, y2, z2) respectively. Then $PQ=\sqrt{{{\left( {{x}_{2}}-{{x}_{1}} \right)}^{2}}+{{\left( {{y}_{2}}-{{y}_{1}} \right)}^{2}}+{{\left( {{z}_{2}}-{{z}_{1}} \right)}^{2}}}$

Co-ordinates of a Point dividing the join of two points in a given ratio

Let the point R divide PQ in the ratio m:n internally. Let the coordinates of P and Q be (x1, y1, z1) and (x2, y2, z2) respectively. Then the coordinates of R are $\left( \frac{m{{x}_{2}}+n{{x}_{1}}}{m+n},\frac{m{{y}_{2}}+n{{y}_{1}}}{m+n},\frac{m{{z}_{2}}+n{{z}_{1}}}{m+n} \right)$ The co-ordinates of the point dividing PQ externally in the ratio m:n are $\left( \frac{m{{x}_{2}}-n{{x}_{1}}}{m-n},\frac{m{{y}_{2}}-n{{y}_{1}}}{m-n},\frac{m{{z}_{2}}-n{{z}_{1}}}{m-n} \right)$ The coordinates of the middle point of PQ are $\left( \frac{{{x}_{1}}+{{x}_{2}}}{2},\frac{{{y}_{1}}+{{y}_{2}}}{2},\frac{{{z}_{1}}+{{z}_{2}}}{2} \right)$

Direction Cosines

If α, β, γ be the angles which a straight line makes with the lines parallel to the axes of coordinates, then α, β, γ are called the direction angles of the line. If α, β, γ be the angles which a straight line makes with the positive directions of the axes, then $\cos \alpha ,\cos \beta ,\,\cos \gamma$ are called the direction cosines of the line. They are generally denoted by l, m, n. ${{l}^{2}}+{{m}^{2}}+{{n}^{2}}={{\cos }^{2}}\alpha +{{\cos }^{2}}\beta +\,{{\cos }^{2}}\gamma =1$ The direction cosines of the x-axis, y-axis and z-axis are, by definition, (1, 0, 0), (0, 1, 0) and (0, 0, 1) respectively.

Direction Ratios

Any three numbers a, b, c which are proportional to the direction cosines l, m, n respectively of a given straight line are called the direction ratios or direction numbers of the given line. 
$\frac{l}{a}=\frac{m}{b}=\frac{n}{c}=\pm \frac{\sqrt{{{l}^{2}}+{{m}^{2}}+{{n}^{2}}}}{\sqrt{{{a}^{2}}+{{b}^{2}}+{{c}^{2}}}}=\pm \frac{1}{\sqrt{{{a}^{2}}+{{b}^{2}}+{{c}^{2}}}}$ $\therefore \,\,l=\pm \frac{a}{\sqrt{{{a}^{2}}+{{b}^{2}}+{{c}^{2}}}},\,\,m=\pm \frac{b}{\sqrt{{{a}^{2}}+{{b}^{2}}+{{c}^{2}}}},\,\,n=\pm \frac{c}{\sqrt{{{a}^{2}}+{{b}^{2}}+{{c}^{2}}}}$ where the same sign, either positive or negative, is to be chosen throughout.

Direction cosines of a straight line joining two given points

Let the points P and Q be (x1, y1, z1) and (x2, y2, z2) respectively and l, m, n be the direction cosines of the straight line PQ whose direction angles are α, β, γ. The direction ratios of the straight line joining P and Q being $\left( {{x}_{2}}-{{x}_{1}} \right),\,\,\left( {{y}_{2}}-{{y}_{1}} \right),\,\,\left( {{z}_{2}}-{{z}_{1}} \right)$, the direction cosines are $\frac{{{x}_{2}}-{{x}_{1}}}{PQ},\frac{{{y}_{2}}-{{y}_{1}}}{PQ},\frac{{{z}_{2}}-{{z}_{1}}}{PQ}$

Projections

The projection of a point on any straight line is the point where the line is met by the plane through the point perpendicular to the line. Thus the foot of the perpendicular from a point on a given straight line is the projection of the point on that line. Hence, in figure 2, A, B, C are the projections of P on the coordinate axes. The projection of a straight line of limited length on another straight line is the length intercepted between the projections of its extremities. Suppose we have a straight line PQ and a plane T in space. Draw Pp and Qq perpendiculars from P and Q on the plane T. Then pq is called the orthogonal projection of PQ on T. If QP is produced to meet the plane in R, then Rpq is the projection of RPQ.

Projection of a line segment joining two points on another line

Let the points P and Q be (x1, y1, z1) and (x2, y2, z2) respectively. Through P and Q draw planes parallel to the co-ordinate planes to form a rectangular parallelepiped. Then clearly, $AP={{x}_{2}}-{{x}_{1}},\,\,AN={{y}_{2}}-{{y}_{1}},\,\,QN={{z}_{2}}-{{z}_{1}}$. The projections of AP, AN, QN on any straight line L, whose direction cosines are l, m, n, are $\left( {{x}_{2}}-{{x}_{1}} \right)l,\,\,\left( {{y}_{2}}-{{y}_{1}} \right)m,\,\,\left( {{z}_{2}}-{{z}_{1}} \right)n$ respectively. The projection of PQ on the straight line L is thus the sum of the projections of the components AP, AN and QN on the straight line L and is thus equal to $\left( {{x}_{2}}-{{x}_{1}} \right)l+\left( {{y}_{2}}-{{y}_{1}} \right)m+\,\left( {{z}_{2}}-{{z}_{1}} \right)n$

Angle between two straight lines

Let OP and OQ be two straight lines through the origin O parallel to the two given straight lines with direction cosines $\left( {{l}_{1}},\,\,{{m}_{1}},\,\,{{n}_{1}} \right)$ and $\left( {{l}_{2}},\,\,{{m}_{2}},\,\,{{n}_{2}} \right)$ respectively. Let the angle between the two given straight lines, that is, the angle between OP and OQ, be θ. Let P and Q be the points with coordinates (x1, y1, z1) and (x2, y2, z2) respectively. Then $\cos \theta ={{l}_{1}}{{l}_{2}}+{{m}_{1}}{{m}_{2}}+{{n}_{1}}{{n}_{2}}$

Example 01

Find the ratio in which the line segment joining the points (2, –3, 5) and (7, 1, 3) is divided by the xy-plane.

Solution: Let the required ratio be m:n. The co-ordinates of the point of division are $\left( \frac{7m+2n}{m+n},\frac{m-3n}{m+n},\frac{3m+5n}{m+n} \right)$. This point lies on the xy-plane, hence its z-coordinate is zero. That is $\frac{3m+5n}{m+n}=0$ $\Rightarrow 3m+5n=0$ $\Rightarrow \frac{m}{n}=-\frac{5}{3}$ Hence the division ratio is 5:3 externally. 
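These formulas are easy to check numerically. The following short Python sketch reproduces the result of Example 01; the helper names distance and section_point are ad hoc choices for this illustration, not functions from any particular library.

```python
import math

def distance(p, q):
    # Distance between two points p = (x1, y1, z1) and q = (x2, y2, z2)
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

def section_point(p, q, m, n):
    # Point dividing PQ in the ratio m:n internally;
    # passing a negative n gives the external division formula
    return tuple((m * qi + n * pi) / (m + n) for pi, qi in zip(p, q))

P = (2, -3, 5)
Q = (7, 1, 3)

# Example 01: the xy-plane (z = 0) divides PQ where (3m + 5n)/(m + n) = 0,
# i.e. m/n = -5/3, so the division is 5:3 external.
R = section_point(P, Q, 5, -3)   # external division in the ratio 5:3
print(distance(P, Q))            # length of PQ, sqrt(45)
print(R)                         # (14.5, 7.0, 0.0) -- the point lies on z = 0
```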
Example 02

Find the centroid of the triangle whose vertices are the points A(x1, y1, z1), B(x2, y2, z2) and C(x3, y3, z3).

Solution: Let D be the middle point of the side BC. Therefore the co-ordinates of D are $\left( \frac{{{x}_{2}}+{{x}_{3}}}{2},\frac{{{y}_{2}}+{{y}_{3}}}{2},\frac{{{z}_{2}}+{{z}_{3}}}{2} \right)$ Now, if G be the centroid, then AG:GD = 2:1. Therefore, if (x, y, z) be the co-ordinates of G, then $x=\frac{1\times {{x}_{1}}+2\times \frac{{{x}_{2}}+{{x}_{3}}}{2}}{2+1}=\frac{{{x}_{1}}+{{x}_{2}}+{{x}_{3}}}{3}$ $y=\frac{1\times {{y}_{1}}+2\times \frac{{{y}_{2}}+{{y}_{3}}}{2}}{2+1}=\frac{{{y}_{1}}+{{y}_{2}}+{{y}_{3}}}{3}$ $z=\frac{1\times {{z}_{1}}+2\times \frac{{{z}_{2}}+{{z}_{3}}}{2}}{2+1}=\frac{{{z}_{1}}+{{z}_{2}}+{{z}_{3}}}{3}$

Example 03

Find the projection of the line segment joining the points (3, 3, 5) and (5, 4, 3) on the straight line joining the points (2, –1, 4) and (0, 1, 5).

Solution: The projections of the line segment joining the points (3, 3, 5) and (5, 4, 3) on the axes are (5 – 3), (4 – 3), (3 – 5), that is 2, 1, –2. The direction cosines of the straight line joining the points (2, –1, 4) and (0, 1, 5) are $-\frac{2}{3},\frac{2}{3},\frac{1}{3}$ since $\sqrt{{{\left( 2-0 \right)}^{2}}+{{\left( -1-1 \right)}^{2}}+{{\left( 4-5 \right)}^{2}}}=3$. Hence the projection of the first line on the second line is $2\left( -\frac{2}{3} \right)+1\left( \frac{2}{3} \right)-2\left( \frac{1}{3} \right)=-\frac{4}{3}$

Example 04

Show that the triangle formed by the points (2, 3, 1), (–2, 2, 0) and (0, 1, –1) is right-angled. Find also the other angles.

Solution: Let the given vertices be A, B, C respectively. The direction cosines of BA, BC, CA are respectively $\left( \frac{4}{\sqrt{18}},\frac{1}{\sqrt{18}},\frac{1}{\sqrt{18}} \right);\left( \frac{2}{\sqrt{6}},-\frac{1}{\sqrt{6}},-\frac{1}{\sqrt{6}} \right);\left( \frac{2}{\sqrt{12}},\frac{2}{\sqrt{12}},\frac{2}{\sqrt{12}} \right)$ The angle between BA and BC is ${{\cos }^{-1}}\left( \frac{8-1-1}{\sqrt{18}\sqrt{6}} \right)={{\cos }^{-1}}\frac{1}{\sqrt{3}}$ The angle between BC and CA is ${{\cos }^{-1}}\left( \frac{4-2-2}{\sqrt{6}\sqrt{12}} \right)={{\cos }^{-1}}0=90{}^\circ$ The angle between CA and BA is ${{\cos }^{-1}}\left( \frac{8+2+2}{\sqrt{18}\sqrt{12}} \right)={{\cos }^{-1}}\frac{2}{\sqrt{6}}$ So the triangle is right-angled and the other angles are ${{\cos }^{-1}}\frac{1}{\sqrt{3}}$ and ${{\cos }^{-1}}\frac{2}{\sqrt{6}}$.
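The worked examples above can likewise be verified with a few lines of Python; again the helper names (direction_cosines, projection, angle) are illustrative only, not standard library functions.

```python
import math

def direction_cosines(p, q):
    # Direction cosines of the line from p towards q
    r = [qi - pi for pi, qi in zip(p, q)]
    d = math.sqrt(sum(c * c for c in r))
    return [c / d for c in r]

def projection(p, q, a, b):
    # Projection of segment PQ on the line through A and B:
    # (x2 - x1)l + (y2 - y1)m + (z2 - z1)n
    l = direction_cosines(a, b)
    return sum((qi - pi) * li for pi, qi, li in zip(p, q, l))

def angle(l1, l2):
    # Angle (in degrees) between two lines given by their direction cosines
    return math.degrees(math.acos(sum(u * v for u, v in zip(l1, l2))))

# Example 03: projection of (3,3,5)-(5,4,3) on the line (2,-1,4)-(0,1,5)
print(projection((3, 3, 5), (5, 4, 3), (2, -1, 4), (0, 1, 5)))   # -1.333... = -4/3

# Example 04: the angle at C of the triangle A(2,3,1), B(-2,2,0), C(0,1,-1)
A, B, C = (2, 3, 1), (-2, 2, 0), (0, 1, -1)
print(angle(direction_cosines(C, B), direction_cosines(C, A)))   # ~90.0
```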
2021-10-26 11:39:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9010686278343201, "perplexity": 447.38415104525365}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587877.85/warc/CC-MAIN-20211026103840-20211026133840-00470.warc.gz"}
http://www.iep.utm.edu/nat-ded/
# Natural Deduction Natural Deduction (ND) is a common name for the class of proof systems composed of simple and self-evident inference rules based upon methods of proof and traditional ways of reasoning that have been applied since antiquity in deductive practice. The first formal ND systems were independently constructed in the 1930s by G. Gentzen and S. Jaśkowski and proposed as an alternative to Hilbert-style axiomatic systems. Gentzen introduced a format of ND particularly useful for  theoretical investigations of the structure of proofs. Jaśkowski instead provided a format of ND more suitable for practical purposes of proof search. Since then many other ND systems were developed of apparently different character. What is it that makes them all ND systems despite the differences in the selection of rules, construction of proof, and other features? First of all, in contrast to proofs in axiomatic systems, proofs in ND systems are based on the use of assumptions which are freely introduced but discharged under some conditions. Moreover, ND systems use many inference rules of simple character which show how to compose and decompose formulas in proofs. Finally, ND systems allow for the application of different proof-search strategies. Thanks to these features proofs in ND systems tend to be much shorter and easier to construct than in axiomatic or tableau systems. These properties of ND make them one of the most popular ways of teaching logic in elementary courses. In addition to its educational value, ND is also an important tool in proof-theoretical investigations and in the philosophy of meaning (specifically, of the meaning of logical constants). This article focuses on the description of the main types of ND systems and briefly mentions more advanced issues concerning normal proofs and proof-theoretical semantics. ## 1. History of Natural Deduction When dealing with the history of ND, one should distinguish between the exact date when the first formal systems of ND were presented and much earlier times when the rules of ND were actually applied. Although one may claim that ND techniques were used as early as people did reasoning, it is unquestionable that the exact formulation of ND and the justification of its correctness was postponed until the 20th century. ### a. Origins The first ND systems were developed independently by Gerhard Gentzen and Stanisław Jaśkowski and presented in papers published in 1934 (Gentzen 1934, Jaśkowski 1934). Both approaches, although different in many respects, provided the realization of the same basic idea: formally correct systematization of traditional means of proving theorems in mathematics, science and ordinary discourse. It was a reaction to the artificiality of formalization of proofs in axiomatic systems. Hilbert’s proof theory offered high standards of precise formulation of this notion, but formal axiomatic proofs were really different than ‘real’ proofs offered by mathematicians. The process of actual deduction in axiomatic systems is usually complicated and needs a lot of invention. Moreover, real proofs are usually lengthy, hard to decipher and far from informal arguments provided by mathematicians. In informal proofs, techniques such as conditional proof, indirect proof or proof by cases are commonly used; all are based on the introduction of arbitrary, temporarily accepted assumptions. 
Hence the goals of Gentzen and Jaśkowski were twofold: (1) theoretical and formally correct justification of traditional proof methods, and (2) providing a system which supports actual proof search. Moreover, Gentzen’s approach provided the programme for proof analysis which strongly influenced modern proof theory and philosophical research on theories of meaning. ### b. Prehistory According to some authors the roots of ND may be traced back to Ancient Greece. Corcoran (1972) proposed an interpretation of Aristotle’s syllogistics in terms of inference rules and proofs from assumptions. One can also look for the genesis of ND system in Stoic logic, where many researchers (for example, Mates 1953) identify a practical application of the Deduction Theorem (DT). But all these examples, even if we agree with the arguments of historians of logic, are only examples of using some proof techniques. There is no evidence of theoretical interest in their justification. In fact the introduction of DT into the realm of modern logic seems to be one of the most important steps on the way leading eventually to the discovery of ND. Although Herbrand did not present a formal proof of it for axiomatic systems until Herbrand (1930), he had already stated it in Herbrand (1928). At the same time Tarski (1930) included DT as one of the axioms of his Consequence Theory; in practice he had used it since 1921. Also other ND-like rules were practically applied in the 1920s by many logicians from the Lvov-Warsaw School, like Leśniewski and Salamucha, as is evident from their papers. Jaśkowski was strongly influenced by Łukasiewicz, who posed on his Warsaw seminar in 1926 the following problem: how to describe, in a formally proper way, proof methods applied in practice by mathematicians. In response to this challenge Jaśkowski presented his first formulation of ND in 1927, at the First Polish Mathematical Congress in Lvov, mentioned in the Proceedings (Jaśkowski 1929). A final solution was delayed until (Jaśkowski 1934) because Jaśkowski had a lengthy break in his research due to illness and family problems. Gentzen also published the first part of his famous paper in 1934, but the first results are present in (Gentzen 1932). This early paper, however, is concerned not with ND but with the first form of Sequent Calculus (SC). Gentzen was influenced by Hertz (1929), where a tree-format notation for proofs, as well as the notion of a sequent, were introduced. One can also look for a source of the shape of his rules in Heyting’s axiomatization of intuitionistic logic (see von Plato 2014). It should be no surprise that the two logicians with no knowledge of each other’s work, independently proposed quite different solutions to the same problem. Axiom systems, although theoretically satisfying, were considered by many researchers as practically inadequate and artificial. Thus the need for more practice-oriented deduction systems was in the air. ## 2. Applications This article distinguishes at least three main fields of application of ND systems: practical, theoretical and philosophical. Since 1934 a lot of systems called ND were offered by many authors in numerous textbooks on elementary logic. In this way ND systems became a standard tool of working logicians, mathematicians, and philosophers. At least in the Anglo-American tradition, ND systems prevail in teaching logic. They also had strong influence on the development of other types of non-axiomatic formal systems such as sequent calculi and tableau systems. 
In fact, the former were also invented by Gentzen as a theoretical tool for investigations on the properties of ND proofs, whereas the latter may be seen (at least in the case of classical logic) as a further simplification of sequent calculus that is easier for practical applications. But the importance of ND is not only of practical character. Since 1960s the works of Prawitz (1965) and (Raggio 1965) on normal proofs opened up the theoretical perspective in the applications of ND. In fact Prawitz was rediscovering things known to Gentzen but not published by him, which was later shown by von Plato (2008). In addition to extended work on normalization of proofs, ND is also an interesting tool for investigations in theoretical computer science through the Curry-Howard isomorphism. This approach shows that (normal) ND proofs may be interpreted in terms of executions of programs. Finally the special form of rules of ND provided by Gentzen led to extensive studies on the meaning of logical constants. This article takes a look at theoretical and philosophical applications of ND in sections 9 and 10. ## 3. Demarcation Problem The great richness of different forms of systems called ND leads to some theoretical problems concerning the precise meaning of the term 'ND'. It seems that no definition of ND systems was offered which would be generally accepted. This demarcation problem was investigated by many authors; and different criteria were offered for establishing what is, and what is not, an ND system. Detailed survey of these matters may be found in Pelletier (1999) or in Pelletier and Hazen (2012); this article points out only the most important features. ### a. Wide and Narrow Sense of ND Some authors tend to use the term in a broad sense in which it covers almost all that is not an axiomatic system in Hilbert’s sense. Hence sometimes systems like sequent calculi or tableau calculi are treated as ND systems. All these systems are actually in close relationship, but this article chooses to consider ND only in the narrow sense. There are at least three reasons for making this choice: • Historical. Original ideas of Gentzen, who introduced two systems: NK (Natürliche Kalkül) and LK (Logistiche Kalkül). The former is just an ND system, whereas the latter, a sequent calculus, was meant as a technical tool for proving some metatheorems on NK, not as a kind of ND. • Etymological. ND is supposed to reconstruct, in a formally proper way, traditional ways of reasoning. It is disputable whether existing ND systems realize this task in a satisfying way, but certainly systems like tableaux or SC are even worse in this respect. • Practical. Taking the term ND in a wide sense would be a classifying operation of doubtful usefulness. From the point of view of this article's presentation, it is more convenient to use a more narrowly defined concept. ### b. Criteria of Genuine ND But what criteria should be used for delimiting the class of systems called ND? Many proposals seem to be too narrow (that is, strict) since they exclude some systems usually treated as ND, so it is better not to be very demanding in this respect. So, ND system should satisfy three criteria: 1. Possibility of entering and eliminating (discharging) additional assumptions during the course of the proof. 
Usually it requires some bookkeeping devices for indicating the scope of an assumption, that is, for showing that a part of the proof (a subproof) depends on a temporary assumption, and for marking the end of such a subproof the point at which the assumption is “discharged”. 2. Characterization of logical constants by means of rules rather than axioms. Their role is taken over by the set of primitive rules for introduction and elimination of logical constants, which means that elementary inferences instead of formulas are taken as primitive. 3. The richness of forms of proof construction. Genuine ND systems admit a lot of freedom in proof construction and in the possibility of applying several strategies of proof-search. These three conditions seem to be the essential features of any ND. These characteristics are quite general, but the third at least serves to exclude tableau systems and sequent calculi since genuine ND should allow both direct and indirect proofs, proofs by cases, and so forth. This flexibility of proof construction is vital for ND, whereas, for example in a standard tableau system, we have only indirect proofs and elimination rules. On the other hand, ND does not require that its rules should strictly realise the schema of providing a pair of introduction and elimination rules, and that axioms are not allowed. ## 4. Rules ND systems consist of the set of (schemata) of simple rules characterising logical constants. For example a connective of conjunction $\wedge$ is characterised by means of the following rules: $\begin{array}{ccc}(\wedge I)\ \ \dfrac{\varphi, \psi}{\varphi\wedge\psi}\quad&(\wedge E)\ \ \dfrac{\varphi\wedge\psi}{\varphi}\quad&(\wedge E)\ \ \dfrac{\varphi\wedge\psi}{\psi}\end{array}$ where $\varphi$ and $\psi$ denote any formulas. Material above the horizontal line represents the premises; and that below represents the conclusion of the inference. The letters $I$ and $E$ in the names of the rules come from “introduction” and “elimination” respectively since the first allows introduction of a conjunction into a proof, and the second allows for its elimination in favor of simpler formulas. Often the following horizontal notation is applied (instead of vertical which is more space-consuming): $(\wedge E)\ \$ $\varphi \wedge \psi \vdash \varphi$ and $\varphi \wedge \psi \vdash \psi$ $(\wedge I)\ \$ $\varphi , \psi \vdash \varphi \wedge \psi$ Here $\vdash$ is used to point out that the relation of deducibility holds between premises and the conclusion of a rule instance. In what follows, such phrases are called sequents. In fact such deducibility statements in general do not uniquely characterise inference rules, but it does no harm so they are used in what follows for simplicity's sake. One can easily check that the rules stated above adequately characterise the meaning of classical conjunction which is true iff both conjuncts are true. Hence the syntactic deducibility relation coincides with the semantic relation of $\models$, that is, of logical consequence (or entailment). Unfortunately not all logical constants may be characterised by means of such simple rules. 
For example, implication $\rightarrow$ in addition to modus ponens (or detachment rule): $(\rightarrow E)\ \$ $\varphi \rightarrow \psi , \varphi \vdash \psi$ which is known from axiomatic systems, requires a more complex rule $(\rightarrow I)$ of the shape: $\begin{array}{cc}& [\varphi] \\ & \vdots \\ \Gamma\ & \psi \\ \hline & \varphi \rightarrow \psi\end{array}$ or: $(\rightarrow I)\ \$ If $\Gamma , \varphi \vdash \psi$, then $\Gamma \vdash \varphi \rightarrow \psi$ where $\Gamma$ and $\varphi$ forms a collection of all active assumptions previously introduced which could have been used in the deduction of $\psi$. When inferring $\varphi\rightarrow\psi$, one is allowed to discharge assumptions of the form $\varphi$. The fact that after deduction of $\varphi \rightarrow \psi$ this assumption is discharged (not active) is pointed out by using [ ] in vertical notation, and by deletion from the set of assumptions in horizontal notation. The latter notation shows better the character of the rule; one deduction is transformed into the other. It shows also that the rule $(\rightarrow I)$ corresponds to an important metatheorem, the Deduction Theorem, which has to be proved in axiomatic formalizations of logic. In what follows, all rules of the shape $\Gamma \vdash \varphi$ will be called inference rules, since they allow for inferring a formula (conclusion) from other formulas (premises) present in the proof. Rules of the form: If $\Gamma_1 \vdash \varphi_1, \ldots, \Gamma_n \vdash \varphi_n$, then $\Gamma \vdash \varphi$ will be called proof construction rules since they allow for constructing a proof on the basis of some proofs already completed. One characteristic feature of such rules is that they involve the process of entering new assumptions as well as conditions under which one can discharge these assumptions and close subordinated proofs (or subproofs) starting with these assumptions. The complete set of rules provided by Gentzen for IPL (Intuitionistic Propositional Logic) is the following: $\begin{array}{ll} (\bot E) & \bot \vdash \varphi \\ (\neg E) & \varphi , \neg \varphi \vdash \bot \\ (\neg I) & \text{If } \Gamma , \varphi \vdash \bot \text{, then } \Gamma \vdash \neg \varphi \\ (\wedge I) & \varphi , \psi \vdash \varphi \wedge \psi \\ (\wedge E) & \varphi \wedge \psi \vdash \varphi \text{ and }\varphi \wedge \psi \vdash \psi \\ (\rightarrow E) & \varphi , \varphi \rightarrow \psi \vdash \psi \\ (\rightarrow I) & \text{If }\Gamma , \varphi \vdash \psi \text{, then }\Gamma \vdash \varphi \rightarrow \psi \\ (\vee I) & \varphi \vdash \varphi \vee \psi \text{ and }\psi \vdash \varphi \vee \psi \\ (\vee E) & \text{If }\Gamma , \varphi \vdash \chi \text{ and }\Delta , \psi \vdash \chi \text{, then }\Gamma , \Delta , \varphi \vee \psi \vdash \chi\end{array}$ What is evident from this set of rules is the Gentzen policy of characterising every constant by a pair of rules, in which one is the rule for introduction a formula with that constant into a proof, and the other is the rule of elimination of such a formula, that is, inferring some simpler consequences from it, sometimes with the aid of other premises. More will be said about philosophical consequences of this approach in section 10. 
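As a small illustration of how these rules combine (a sketch, not one of Gentzen's own examples), consider the thesis $(\varphi \wedge \psi) \rightarrow (\psi \wedge \varphi)$. From the assumption $\varphi \wedge \psi$ the rule $(\wedge E)$ yields both $\psi$ and $\varphi$; the rule $(\wedge I)$ then gives $\varphi \wedge \psi \vdash \psi \wedge \varphi$; finally $(\rightarrow I)$ discharges the assumption and delivers $\vdash (\varphi \wedge \psi) \rightarrow (\psi \wedge \varphi)$, a thesis depending on no assumptions. Even in this tiny derivation the characteristic interplay is visible: elimination rules decompose the assumption, and introduction rules assemble the conclusion.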
In order to obtain CPL (Classical Propositional Logic), Gentzen added the Law of Excluded Middle $\neg \varphi \vee \varphi$ as an axiom, but the same result can easily be obtained by a suitable inference rule of double negation elimination: $\neg \neg \varphi \vdash \varphi$ or by changing one of the proof construction rules, namely $(\neg I)$ which encodes the weak form of indirect proof into the strong form: $(\neg E)$ If $\Gamma , \neg \varphi \vdash \bot$, then $\Gamma \vdash \varphi$ This solution was applied by Jaśkowski (1934). ## 5. Proof Format In addition to providing suitable rules, one must also decide about the form of a proof. Two basic approaches due to Gentzen and Jaśkowski are based on using trees as a representation of a proof and on using linear sequences of formulas. This article focuses on the most important differences between these two approaches. For detailed comparison see Pelletier and Hazen (2014), and Restall (2014). ### a. Tree Proofs Let us start with an example of a proof in Gentzen’s format, that is, as a tree of formulas: $\begin{array}{cl} \underline{[p]^1\hspace{.5cm} [p\rightarrow q]^3}\hspace{2cm} & ass. \\ \underline{q \hspace{2cm} [q \rightarrow r]^2} & (\rightarrow E) \\ \underline{\hspace{1cm}r\hspace{1cm}} & (\rightarrow E) \\ \underline{\hspace{1cm}p \rightarrow r^1\hspace{1cm}} & (\rightarrow I) \\ \underline{\hspace{.5cm}(q \rightarrow r)\rightarrow (p\rightarrow r)^2\hspace{.5cm}} & (\rightarrow I) \\ (p\rightarrow q)\rightarrow ((q \rightarrow r)\rightarrow (p\rightarrow r))^3 & (\rightarrow I) \end{array}$ Here the root of a tree is labelled with a thesis and its leaves are labelled with (discharged) assumptions: $p\rightarrow q, q\rightarrow r$ and $p$. All assumptions were discharged while $(\rightarrow I)$ was applied successively building implications from $r$—the numbers of assumptions indicate the order in which they were discharged, and the suitable number is attached to the formula inferred by the assumption discharging rule. Before that, $r$ was deduced by two applications of $(\rightarrow E)$, first to two assumptions (active at this moment), then to the third assumption and previously deduced $q$. Gentzen’s tree format of representing proofs has many advantages. It is an excellent representation of real proofs; in particular, deductive dependencies between formulas are directly shown. But if we are concerned with actual deduction, this format of proof is far from being useful and natural. Moreover, one is often forced to repeat identical, or very similar, parts of the proof, since, in tree format, inferences are conducted not on formulas but on their particular occurrences. For example, if $\varphi \wedge \psi$ is an assumption from which we need to infer both $\varphi$ and $\psi$, then a suitable branch starting with $\varphi \wedge \psi$ must be displayed twice. 
The following example illustrates the point: $\begin{array}{cl} \hspace{.5cm}\underline{[p\wedge (q \wedge p \rightarrow r)]^2}\hspace{2.5cm} & ass.\\ \underline{[q]^1\hspace{1cm} p}\hspace{2cm}\underline{[p\wedge (q \wedge p \rightarrow r)]^3} & (\wedge E) \\ \underline{q \wedge p \hspace{3cm} q \wedge p \rightarrow r} & (\wedge I), (\wedge E) \\ \underline{\hspace{.5cm}r\hspace{.5cm}} & (\rightarrow E) \\ \underline{\hspace{.5cm}q \rightarrow r^1\hspace{.5cm}} & (\rightarrow I) \\ (p\wedge (q \wedge p \rightarrow r))\rightarrow (q\rightarrow r)^{2, 3} & (\rightarrow I)\end{array}$ here, the attachment of two numerals $2, 3$ to the formula in the last line indicates that both occurrences of the same assumption were discharged in this step. Gentzen himself was aware of the disadvantages of his representation of proof, but it proved useful for his theoretical interests described in section 9. It is not surprising that the tree format of proofs is mainly used in theoretical studies on ND, as in Prawitz (1965) or Negri and von Plato (2001). ### b. Linear Proofs Jaśkowski, on the other hand, preferred a linear representation of proofs since he was interested in creating a practical tool for deduction. Linear format has many virtues over Gentzen’s approach. For example, inferences are drawn from assumptions rather than from their occurrences, which means that, for example, one needs to assume $\varphi \wedge \psi$ only once to derive both conjuncts. It is also more natural to construct a linear sequence trying, one by one, each possible application of the rules. But there is a price to be paid for these simplifications—the problem of subordinated proofs. How should we represent that some assumption and its subordinated proof are no longer alive because a suitable proof construction rule was applied? If we apply a proof construction rule which discharges an assumption, we must explicitly show that the subordinate proof dependent on this assumption is dead in the sense that no formula from it may be used below in the proof. In a tree format this is not a problem—to use a formula as a premise for the application of some inference rule we must display it (and the whole subtree which provides a justification for it) directly above the conclusion. In linear format this leads to problems, and some technical devices are necessary which forbid using the assumptions and other formulas inferred inside completed subproofs. Jaśkowski proposed two solutions to this problem: graphical (boxes) and bookkeeping (in the terminology of Pelletier and Hazen 2012). Let us compare these two simple proofs: On the left we have an example of a proof in graphical mode where each assumption opens a new box in which the rest of the proof is carried out. On the other hand when a suitable proof construction rule is applied, the current subproof is boxed which means that nothing inside is allowed in further proof construction. In lines 3 and 5 an additional rule of repetition (often called reiteration) is applied which allows for moving formulas from outer to inner boxes. On the right the same proof is represented in bookkeeping style where instead of boxes we use prefixes (sequences of natural numbers) for indicating the scope of an assumption. Each assumption is preceded with the letter S from latin suppositio and adds a new numeral to the sequence of natural numbers in the prefix. When a proof construction rule is applied, the last item is subtracted from the prefix. 
Hence a thesis can occur with an empty sequence, signifying that it does not depend on any assumption. No repetition rules are applied in this version of Jaśkowski’s system; hence the proof is two lines shorter. Although Jaśkowski finally chose the second option (perhaps due to editorial problems) nowadays the graphical approach is far more popular, probably due to the great success of Fitch’s textbook (1952) which popularized a simplified version of Jaśkowski’s system (now called Fitch’s approach). In Fitch’s system one is using vertical lines for indicating subproofs. Below is an example of a proof in Fitch’s format: Other devices were also applied such as brackets in Copi (1954), or even just indentation of subordinate proofs. The original Jaśkowski’s boxes were used by Kalish and Montague (1964) with the additional device being of great heuristic value; each box is preceded by a show-line which displays the current aim of the proof. Show-lines are not parts of a proof in the sense that one is forbidden to use them as premises for rule application. But after completing a subproof, a box is closed and the opening show-line becomes a new ordinary line in the proof (which is pointed out by deleting a prefix “show”). The second solution of Jaśkowski was not so popular. One can mention here Quine’s system (1950) (with asterisks instead of numerals) or Słupecki and Borkowski’s system (1958) popular in Poland. ## 6. Other Approaches Gentzen (1936) introduced yet another variant of ND which may be considered as lying between his first system described in subsection 5.1. and his famous sequent calculus. It shows another possible way of arranging the bookkeeping of active assumptions. As a result, in this approach the basic items which are transformed in proofs are not formulas but rather sequents. For example, both rules for conjunction are of the form: $\begin{array}{ll} (\wedge I') & \text{If }\Gamma \vdash \varphi \text{ and } \Delta \vdash \psi\text{, then } \Gamma, \Delta \vdash \varphi \wedge \psi \\ (\wedge E') & \text{If } \Gamma \vdash \varphi \wedge \psi\text{, then } \Gamma \vdash \varphi \text{;} \ \ \text{If } \Gamma \vdash \varphi \wedge \psi\text{, then } \Gamma \vdash \psi \end{array}$ where $\Gamma , \Delta$ are records of active assumptions. The full list of rules for CPL contains also: $\begin{array}{ll} (\neg E') & \text{If } \Gamma, \varphi \vdash \psi \text{ and } \Delta, \varphi \vdash \neg\psi\text{, then } \Gamma, \Delta \vdash \psi \\ (\neg I') & \text{If } \Gamma \vdash \neg\neg\varphi\text{, then } \Gamma \vdash \varphi \\ (\rightarrow E') & \text{If } \Gamma \vdash \varphi \text{ and } \Delta \vdash \varphi \rightarrow \psi\text{, then } \Gamma, \Delta \vdash \psi \\ (\rightarrow I') & \text{If } \Gamma , \varphi \vdash \psi\text{, then } \Gamma \vdash \varphi \rightarrow \psi \\ (\vee I') & \text{If } \Gamma \vdash \varphi\text{, then } \Gamma \vdash \varphi \vee \psi\text{;} \ \ \text{If } \Gamma \vdash \psi\text{, then } \Gamma \vdash \varphi \vee \psi \\ (\vee E') & \text{If } \Gamma \vdash \varphi \vee \psi \text{ and } \Delta, \varphi \vdash \chi \text{ and } \Lambda , \psi \vdash \chi\text{, then } \Gamma , \Delta , \Lambda \vdash \chi \end{array}$ Assumptions are sequents of the form $\varphi \vdash \varphi$. Theses are sequents with an empty antecedent. 
Here is an example of a proof: $\begin{array}{c} \underline{p \vdash p\hspace{1cm} p\rightarrow q \vdash p\rightarrow q}\hspace{4cm} \\ \underline{p, p\rightarrow q\vdash q \hspace{3cm} q \rightarrow r\vdash q\rightarrow r} \\ \underline{p, p\rightarrow q, q\rightarrow r \vdash r} \\ \underline{p\rightarrow q, q\rightarrow r \vdash p \rightarrow r} \\ \underline{p\rightarrow q \vdash (q \rightarrow r)\rightarrow(p\rightarrow r)} \\ \vdash (p\rightarrow q)\rightarrow ((q\rightarrow r)\rightarrow(p\rightarrow r)) \end{array}$ One can observe that in the context of such a system the difference between inference and proof construction rules disappears. The only difference is that in the former all transformations are performed on consequents of sequents whereas in the latter some operations (that is, subtractions) are allowed also on antecedents. This is the difference with Gentzen’s ordinary sequent calculus where we have rules introducing constants to antecedents of sequents (instead of rules of elimination). Of course one can go further and allow this kind of rule as well (such a system was constructed, for example, by Hermes 1963), but it seems that Gentzen’s choice offers significant simplifications. First of all, the tree format is not necessary, and one can display proofs as linear sequences since the record of active assumptions is kept with every formula in a proof (as the antecedent). Moreover, since no operation except subtraction is carried out on antecedents, we can get rid of formulas in antecedents and use instead numerals of lines where suitable assumptions were introduced into proofs. Both simplifications are present in Suppes’ system (1957) of ND where the same proof looks like that: $\begin{array}{lcll} 1 & \{1\} & p\rightarrow q & \text{ass.} \\ 2 & \{2\} & q\rightarrow r & \text{ass.} \\ 3 & \{3\} & p & \text{ass.} \\ 4 & \{1, 3\} & q & 1, 3, \ (\rightarrow E) \\ 5 & \{1, 2, 3\} & r & 2, 4, \ (\rightarrow E) \\ 6 & \{1, 2\} & p\rightarrow r & 5, \ (\rightarrow I) \\ 7 & \{1\} & (q \rightarrow r)\rightarrow (p\rightarrow r) & 6, \ (\rightarrow I) \\ 8 & \varnothing & (p\rightarrow q)\rightarrow ((q \rightarrow r)\rightarrow (p\rightarrow r)) & 7, \ (\rightarrow I) \end{array}$ Other solutions generalising standard proof representations were also considered. One can mention at least two approaches without going into details: ND operating on clauses instead of formulas (Borićić 1985, Cellucci 1992, Indrzejczak 2010)  and ND admitting subproofs as items in the proof (Fitch 1966, Schroeder-Heister 1984). ## 7. Rules for Quantifiers Gentzen (1934) also provided the first set of ND rules adequate for CFOL (Classical First-Order Logic) whereas the rules of Jaśkowski’s system characterised the weaker system of IFOL (Inclusive First-Order Logic) which admits empty domains in models. As pointed out by Bencivenga (2014), a minimal relaxation of Jaśkowski’s rules yields also Free Logic, that is, a logic allowing non-denoting terms, hence it may be claimed that it is the first formalization of Universally Free Logic, that is, allowing both empty domains and non-denoting terms. Before characterising Gentzen’s original rules for quantifiers let us note that he was using two sorts of symbols to distinguish between free and bound individual variables. The former are often called individual parameters. Such a solution simplifies a formulation of rules and eliminates the risk of a clash of variables while applying the rules. 
When we provide ND rules for more standard approaches with just individual variables which may have free or bound occurrences, we must be careful to define precisely the operation of proper substitution of a term for all free occurrences of a variable. ‘Proper’ means that no occurrence of a free variable substituted for another (or, when function-symbols are used, within a term substituted for a variable) gets bound by a quantifier. For simplicity's sake we will keep Gentzen’s solution; let $x, y, z$ denote (bound) variables and $a, b, c$ free variables or individual parameters. Gentzen’s rules are the following: $\begin{array}{ll} (\forall E) & \forall x\varphi \vdash \varphi[x/a] \\ (\exists I) & \varphi [x/a] \vdash \exists x\varphi \\ (\forall I) & \text{If }\Gamma \vdash \varphi [x/a] \text{, then }\Gamma \vdash \forall x\varphi \\ (\exists E) & \text{If }\Gamma \vdash \exists x\varphi\text{ and } \Delta, \varphi[x/a] \vdash \psi\text{, then }\Gamma, \Delta \vdash \psi \end{array}$ where $\varphi [x/a]$ denotes the operation of substitution, that is, of replacing all free occurrences of $x$ in $\varphi$ with a parameter $a$. In case of $(\forall I)$ and $(\exists E)$ a parameter $a$ is required to be “fresh” in the sense of having no other occurrences in $\Gamma , \Delta, \varphi , \psi$. Such a fresh $a$ is sometimes called an ‘eigenvariable’ or a ‘proper variable’. The last rule in Gentzen’s tree format looks as follows: $\begin{array}{crc} \Gamma & & [\varphi[x/a]], \Delta \\ \vdots & & \vdots \\ \exists x\varphi & & \psi \\ \hline & \psi & \end{array}$ Although Gentzen provided this set of rules for his tree-system of ND, it was easily adapted also to linear systems based on Jaśkowski’s (or Suppes’) format of proof. Let us illustrate their application in Fitch’s proof format (but not with his original rules): The first application of $(\forall E)$ introduces a parameter $a$ in place of $x$. In line 3 and 7 the assumptions for the applications of $(\exists E)$ in line 5 and 10 respectively are introduced, each time with a new eigenparameter in place of $y$. Note that both applications of $(\exists E)$ are correct since neither $b$ nor $c$ are present in the formulas ending suitable subproofs. Also the application of $(\forall I)$ in line 6 is correct since $a$ is not present in line 1. The fact that $(\forall I)$ is a proof construction rule is obscured here since there is no need to introduce a subproof by means of a new assumption. We just require that in order to apply $(\forall I)$ there be no occurrence of an involved parameter (here $a$) in active assumptions. However, there are systems of ND where such a subproof (usually flagged with a fresh parameter which will be universally quantified below) is explicitly introduced into a proof. For instance, the original Fitch’s rule is based on such a solution; in fact it follows closely the original Jaśkowski’s rule for inclusive general quantifier. Gentzen’s $(\exists E)$ was sometimes considered as complex and artificial, and some inference rules were proposed instead where $\varphi[x/a]$ is directly inferred and not assumed. Although the idea is simple its correct implementation leads to troubles. Carefull formulations of such a rule (as in Quine 1950) are correct but hard to follow; simple formulations (as in several editions of Copi 1954) make the system unsound. For a detailed analysis of the relations between Gentzen-style and Quine-style quantifier rules one should consult Fine (1985), Hazen (1987) and Pelletier (1999). 
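To see the eigenparameter conditions at work, here is a small derivation of $\exists xPx, \forall x(Px\rightarrow Qx) \vdash \exists xQx$, written in the linear style used earlier (an illustrative sketch, not a figure from any of the systems discussed above):

$\begin{array}{lll} 1 & \exists xPx & \text{assumption} \\ 2 & \forall x(Px\rightarrow Qx) & \text{assumption} \\ 3 & Pa & \text{assumption for } (\exists E)\text{, with a fresh parameter } a \\ 4 & Pa\rightarrow Qa & (\forall E), 2 \\ 5 & Qa & (\rightarrow E), 3, 4 \\ 6 & \exists xQx & (\exists I), 5 \\ 7 & \exists xQx & (\exists E), 1, 3\text{--}6 \end{array}$

The application of $(\exists E)$ in line 7 is legitimate only because $a$ occurs neither in the assumptions of lines 1 and 2 nor in the conclusion $\exists xQx$; an application of $(\forall I)$ is constrained by its eigenparameter condition in exactly the same way.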
All these problems with providing correct and simple rules for quantifiers led some authors to doubt whether it is really possible (see Anellis 1991). It seems that the only correct system of ND for CFOL with a ‘really’ simple rule of this kind is in Kalish and Montague (1964), but this is rather a side-effect of the overall architecture of the system which is not discussed here (but see a detailed explanation of the virtues of Kalish and Montague’s system in Indrzejczak 2010). ## 8. ND for Non-Classical Logics ND systems were also offered for many important non-classical logics. In particular, Jaśkowski’s graphical approach is very handy in this field due to the machinery of isolated subproofs. It appeared that for many non-classical logics one can obtain a satisfying result by putting restrictions on the rule of repetition in the case of some subproofs. Let us take as an example the ND formalization of the well-known propositional modal logic T; for simplicity we restrict considerations to rules for $\Box$ (necessity). $(\Box E)$ is obvious: $\Box \varphi \vdash \varphi$. With $(\Box I)$ the situation is more complicated since it is based on the following principle: If $\varphi_1, ..., \varphi_n \vdash \psi$, then $\Box\varphi_1, ..., \Box\varphi_n \vdash \Box\psi$, where the formulas in the antecedent are also changed by the addition of $\Box$. It is realised by means of a special ‘modal’ subproof which is opened with no assumption, but no other formulas may be put in it except those which were preceded by $\Box$ in outer subproofs (and with $\Box$ deleted after the transition). If in such a modal subproof we deduce $\psi$, it can be closed and $\Box\psi$ can be put into the outer subproof. The following proof in Fitch’s style illustrates this: In line 4 a modal subproof was initiated which is shown by putting a sole $\Box$ in place of the assumption. Lines 5 and 6 result from the application of modal repetition. Such an approach may be easily extended to other modal logics by modifying the conditions of modal repetition; for example, for S4 it is enough to admit that formulas with $\Box$ (no deletion) may also be repeated; for S5, formulas with negated $\Box$ are also allowed. Such an approach to modal logics was initiated by Fitch (1952); extensive studies of such systems can be found in Fitting (1983), Garson (2006) and Indrzejczak (2010), where some other approaches are also discussed. This mode of formalizing logics in ND was also applied to other non-classical logics, including conditional logics (Thomason 1970), temporal logics (Indrzejczak 1994) and relevant logics (Anderson and Belnap 1975). In the latter, however, the technique of restricted repetition is not enough (and is not even required for some logics of this kind). Far more important is the technique of labeling all formulas with sets of numbers annotating active assumptions, which is necessary for keeping track of relevance conditions. Subsequently, the application of labels of different kinds has become one of the most popular techniques used not only in tableau methods but also in ND. Vigano (2000) provides a good survey of this approach. ## 9. Normal Proofs When constructing proofs one can easily make some inferences which are unnecessary for obtaining a goal. Gentzen was interested not only in providing an adequate system of ND but also in showing that everything which may be proved in such a system may be proved in the most straightforward way. 
As he put it, in such a proof “No concepts enter into the proof other than those contained in its final result, and their use was therefore essential to the achievement of the result’’ (Gentzen 1934). In particular, such unnecessary moves are performed if one first applies some introduction rule for a logical constant $c$ and then uses the conclusion of this rule application as a premise for the application of the elimination rule for $c$. In such cases the final conclusion is either already present in the proof (as one of the premises of the respective introduction rule) or may be directly deduced from the premises of the application of the introduction rule. For example, if one is deducing $\varphi\rightarrow\psi$ on the basis of $(\rightarrow I)$ and then by $(\rightarrow E)$ is deducing $\psi$ from this implication and $\varphi$, then it is simpler to deduce $\psi$ directly from $\varphi$; the existence of such a deduction is guaranteed, because it is exactly the subproof used to introduce $\varphi\rightarrow\psi$. Let us call a maximal formula any formula which is at the same time the conclusion of an introduction rule and the main premise of an elimination rule. A proof is called normal iff no maximal formula is present in it. Roughly speaking, we can obtain such a proof if we first apply elimination rules to our assumptions (premises) and then introduction rules to obtain the conclusion. Such proofs are analytic in the sense of having the subformula property: all formulas occurring in such a proof are subformulas or negations of subformulas of the conclusion or premises (undischarged assumptions). Although the idea of a normal proof is rather simple to grasp, it is not so simple to show that everything provable in an ND system has a normal proof. In fact, for many ND systems (especially for many non-classical logics) such a result does not hold. Gentzen had a direct proof of such a result for his ND system for Intuitionistic Logic, but he was unable to provide one for his ND for Classical Logic. He did not publish the direct proof for the Intuitionistic case; instead he provided the result for both his ND systems indirectly. First he introduced an auxiliary technical system of sequent calculus and proved for it (both in the classical and intuitionistic cases) the famous Cut-Elimination Theorem. Then he showed that this result implies the existence of a normal proof for every thesis and valid argument provable in his ND systems. Such a result is usually called the Normal Form Theorem, whereas the stronger result, showing directly how to transform every ND-proof into a normal proof by means of a systematic procedure, is called the Normalization Theorem. That Gentzen indeed proved the Normalization Theorem for the Intuitionistic case became known only recently due to von Plato (2008), who found a preliminary draft of Gentzen’s thesis. The first published proofs of Normalization Theorems appeared in the 1960s, due to Raggio (1965) and Prawitz (1965), who proved this result also for ND systems for some non-classical logics. For a detailed account of these problems see Troelstra and Schwichtenberg (1996) or Negri and von Plato (2001). One thing should be noticed with respect to proofs in normal form. Although normal proofs are in a sense the most direct proofs, this does not mean that they are the most economical. In fact, non-normal proofs may often be shorter and easier to understand than normal ones. Perhaps this is simpler to understand if we recall that normalization in ND is the counterpart of cut-elimination in sequent calculi.
Applications of cuts in proofs correspond to applications of previously proved results as lemmas and may drastically shorten proofs. When a proof is normalized, its size may grow exponentially (see, for example, Boolos 1984, Fitting 1996, D’Agostino 1999). What is important in normal proofs is that, due to their conceptual simplicity, they provide a proof-theoretic justification of deduction and a new way of understanding the meaning of logical constants. ## 10. Philosophy of Meaning Aesthetics was not the only reason for insisting on having both introduction and elimination rules for every constant in Gentzen’s ND. He also wanted to realise a deeper philosophical intuition concerning the meaning of logical constants. It is claimed that if a set of rules is intuitive and sufficient for the adequate characterisation of a constant, then it in fact expresses our way of understanding this constant. Moreover, such an approach may be connected with Wittgenstein’s program of characterizing meaning by means of the use of words. In this particular case the meaning of logical constants is characterised by their use (via rules) in proof construction. There is also a strong connection with the anti-realist position in the philosophy of meaning, where it is claimed that the notion of truth may be successfully replaced with the notion of a proof (Dummett 1991). One recent, and very strong, version of this trend is represented by Brandom’s (2000) program of strong inferentialism, where it is postulated that the meanings of all expressions may be characterised by means of their use in widely understood reasoning processes. However, inferentialism is not particularly connected with ND, nor with the specific shapes of rules as giving rise to the meaning of logical constants. Leaving aside the far-reaching program of inferentialism, one can quite reasonably ask whether the characteristic rules of logical constants may be treated as definitions. The term ‘Proof-Theoretic Semantics’ first appeared in 1991 (Schroeder-Heister 1991), but the roots of this idea certainly go back to Gentzen (1934). He himself preferred introduction rules as a kind of definition of a constant. Elimination rules are just consequences of these ‘definitions’, not in the sense of being deducible from them but in the sense that their application is a kind of inversion of introduction rules. The notion of inversion was precisely characterised by Prawitz’s principle of inversion [see Prawitz (1965)]: if by the application of an elimination rule $r$ we obtain $\varphi$, then proofs sufficient for the deduction of the premises of $r$ already contain a deduction of $\varphi$. Hence one can directly obtain $\varphi$ on the basis of these proofs with no application of $r$. As these sufficient conditions for deductions of premises are characterised by introduction rules, we can easily see that the inversion principle is strongly connected with the possibility of proving normalization theorems; it justifies making reduction steps for maximal formulas in normalization procedures. Not all authors dealing with proof-theoretic semantics followed Gentzen in his particular solutions. Popper (1947) was the first who tried to construct deductive systems in which all rules for a constant are treated together as its definition. There are also approaches (such as Dummett 1991, chapter 13, and Prawitz 1971) in which elimination rules are treated as the most fundamental.
No matter which kind of rules should be taken as basic for the characterization of logical constants, it is obvious that not just any set of rules may be treated as a candidate for a definition. Prior (1960) drew attention to this fact by means of his famous example. Let us consider a connective “tonk’’ characterised by the following rules: (tonk I) $\varphi \vdash \varphi$ tonk $\psi$ (tonk E) $\varphi$ tonk $\psi \ \vdash \psi$ One can easily show that any formula is deducible from any formula after adding such rules to an ND system. However, Prior’s example only showed that one should carefully characterise correctness conditions for rules which are proposed as a tool for the characterisation of logical constants. One of the first proposals is due to Belnap (1962), who emphasized that, just as for definitions, rules must be noncreative in the sense that if we add them to some ND system, then we obtain a conservative extension of it. In other words, if some formula with no occurrence of the new constant was not deducible in the ‘old’ system, then it is still not deducible in the extended system. Rules for “tonk’’ do not satisfy this requirement. Although Belnap’s solution is not sufficient, he opened the door for further research on such conditions. The term “(proof-theoretic) harmony’’ is widely used for the specification of such adequacy conditions for rules, and there is a large amount of literature concerned with this question. Schroeder-Heister (2014) provides one of the recent solutions to this problem, whereas Schroeder-Heister (2012) offers an extensive discussion of other approaches. ## 11. References and Further Reading • [1] Anderson, A. R. and N. D. Belnap, Entailment: the Logic of Relevance and Necessity, vol. I, Princeton University Press, Princeton 1975. • [2] Anellis, I. H., 'Forty Years of "Unnatural" Natural Deduction and Quantification: A History of First-Order Systems of Natural Deduction from Gentzen to Copi', Modern Logic, 2(2): 113-152, 1991. • [3] Belnap, N. D., 'Tonk, Plonk and Plink', Analysis, 22(6): 130-134, 1962. • [4] Bencivenga, E., Jaśkowski's Universally Free Logic, Studia Logica, 102(6): 1095-1102, 2014. • [5] Boolos, G., Don't Eliminate Cut, Journal of Philosophical Logic, 7: 373-378, 1984. • [6] Boričić, B. R., On Sequence-conclusion Natural Deduction Systems, Journal of Philosophical Logic, 14: 359-377, 1985. • [7] Borkowski, L. and J. Słupecki, A Logical System Based on Rules and its Applications in Teaching Mathematical Logic, Studia Logica, 7: 71-113, 1958. • [8] Brandom, R., Articulating Reasons: An Introduction to Inferentialism, Harvard University Press, Cambridge 2000. • [9] Cellucci, C., Existential Instantiation and Normalization in Sequent Natural Deduction, Annals of Pure and Applied Logic, 58: 111-148, 1992. • [10] Copi, I. M., Symbolic Logic, The Macmillan Company, New York 1954. • [11] Corcoran, J., Aristotle's Natural Deduction System, in: J. Corcoran (ed.), Ancient Logic and its Modern Interpretations, Reidel, Dordrecht 1972. • [12] D'Agostino, M., Tableau Methods for Classical Propositional Logic, in: M. D'Agostino et al. (eds.), Handbook of Tableau Methods, pp. 45-123, Kluwer Academic Publishers, Dordrecht 1999. • [13] Dummett, M., The Logical Basis of Metaphysics, Harvard University Press, Cambridge 1991. • [14] Fine, K., 'Natural Deduction and Arbitrary Objects', Journal of Philosophical Logic, 14: 57-107, 1985. • [15] Fitch, F. B., Symbolic Logic, Ronald Press Co., New York 1952.
• [16] Fitch, F. B., 'Natural Deduction Rules for Obligation', American Philosophical Quarterly, 3: 27-38, 1966. • [17] Fitting, M., Proof Methods for Modal and Intuitionistic Logics, Reidel, Dordrecht 1983. • [18] Fitting, M., First-Order Logic and Automated Theorem Proving, Springer, Berlin 1996. • [19] Garson, J. W., Modal Logic for Philosophers, Cambridge University Press, Cambridge 2006. • [20] Gentzen, G., Über die Existenz unabhängiger Axiomensysteme zu unendlichen Satzsystemen, Mathematische Annalen, 107: 329-350, 1932. • [21] Gentzen, G., Untersuchungen über das logische Schließen, Mathematische Zeitschrift, 39: 176-210 and 405-431, 1934. • [22] Gentzen, G., Die Widerspruchsfreiheit der reinen Zahlentheorie, Mathematische Annalen, 112: 493-565, 1936. • [23] Hazen, A. P., 'Natural Deduction and Hilbert's Epsilon-Operator', Journal of Philosophical Logic, 16: 411-421, 1987. • [24] Hazen, A. P. and F. J. Pelletier, Gentzen and Jaśkowski Natural Deduction: Fundamentally Similar but Importantly Different, Studia Logica, 102(6): 1103-1142, 2014. • [25] Herbrand, J., abstract in: Comptes Rendus des Séances de l'Académie des Sciences, vol. 186, p. 1275, Paris 1928. • [26] Herbrand, J., Recherches sur la théorie de la démonstration, in: Travaux de la Société des Sciences et des Lettres de Varsovie, Classe III, Sciences Mathématiques et Physiques, Varsovie 1930. • [27] Hermes, H., Einführung in die mathematische Logik, Teubner, Stuttgart 1963. • [28] Hertz, P., Über Axiomensysteme für beliebige Satzsysteme, Mathematische Annalen, 101: 457-514, 1929. • [29] Indrzejczak, A., Natural Deduction System for Tense Logics, Bulletin of the Section of Logic, 23(4): 173-179, 1994. • [30] Indrzejczak, A., Natural Deduction, Hybrid Systems and Modal Logics, Springer 2010. • [31] Jaśkowski, S., Teoria dedukcji oparta na dyrektywach założeniowych, in: Księga Pamiątkowa I Polskiego Zjazdu Matematycznego, Uniwersytet Jagielloński, Kraków 1929. • [32] Jaśkowski, S., On the Rules of Suppositions in Formal Logic, Studia Logica, 1: 5-32, 1934. • [33] Kalish, D. and R. Montague, Logic: Techniques of Formal Reasoning, Harcourt, Brace and World, New York 1964. • [34] Mates, B., Stoic Logic, University of California Press, Berkeley 1953. • [35] Negri, S. and J. von Plato, Structural Proof Theory, Cambridge University Press, Cambridge 2001. • [36] Pelletier, F. J., A Brief History of Natural Deduction, History and Philosophy of Logic, 20: 1-31, 1999. • [37] Pelletier, F. J. and A. P. Hazen, A History of Natural Deduction, in: D. Gabbay, F. J. Pelletier and E. Woods (eds.), Handbook of the History of Logic, vol. 11, 341-414, 2012. • [38] von Plato, J., Gentzen's Proof of Normalization for ND, The Bulletin of Symbolic Logic, 14(2): 240-257, 2008. • [39] von Plato, J., From Axiomatic Logic to Natural Deduction, Studia Logica, 102(6): 1167-1184, 2014. • [40] Popper, K., 'Logic Without Assumptions', Proceedings of the Aristotelian Society, 47: 251-292, 1947. • [41] Popper, K., 'New Foundations for Logic', Mind, 56, 1947. • [42] Prior, A. N., 'The Runabout Inference Ticket', Analysis, 21: 38-39, 1960. • [43] Prawitz, D., Natural Deduction, Almqvist and Wiksell, Stockholm 1965. • [44] Prawitz, D., 'Ideas and Results in Proof Theory', in: J. E. Fenstad (ed.), Proceedings of the Second Scandinavian Logic Symposium, North-Holland, Amsterdam 1971. • [45] Quine, W. V. O., Methods of Logic, Holt, New York 1950. • [46] Raggio, A., Gentzen's Hauptsatz for the Systems NI and NK, Logique et Analyse, 8: 91-100, 1965.
• [47] Restall, G., 'Normal Proofs, Cut Free Derivations and Structural Rules', Studia Logica, 102(6): 1143-1166, 2014. • [48] Schroeder-Heister, P., 'A Natural Extension of Natural Deduction', Journal of Symbolic Logic, 49: 1284-1300, 1984. • [49] Schroeder-Heister, P., Uniform Proof-Theoretic Semantics for Logical Constants (Abstract), Journal of Symbolic Logic, 56: 1142, 1991. • [50] Schroeder-Heister, P., 'Proof-Theoretic Semantics', in: E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy, 2012. • [51] Schroeder-Heister, P., 'The Calculus of Higher-Level Rules, Propositional Quantification and the Foundational Approach to Proof-Theoretic Harmony', Studia Logica, 102(6): 1185-1216, 2014. • [52] Suppes, P., Introduction to Logic, Van Nostrand, Princeton 1957. • [53] Tarski, A., Fundamentale Begriffe der Methodologie der deduktiven Wissenschaften, Monatshefte für Mathematik und Physik, 37: 361-404, 1930. • [54] Troelstra, A. S. and H. Schwichtenberg, Basic Proof Theory, Cambridge University Press, Cambridge 1996. • [55] Viganò, L., Labelled Non-Classical Logics, Kluwer 2000. ### Author Information Andrzej Indrzejczak Email: indrzej@filozof.uni.lodz.pl University of Lodz Poland
2017-04-24 07:21:17
{"extraction_info": {"found_math": true, "script_math_tex": 121, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 121, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8201649785041809, "perplexity": 1150.7617856130018}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119120.22/warc/CC-MAIN-20170423031159-00569-ip-10-145-167-34.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1554986/why-are-translates-of-travelling-waves-again-travelling-waves
# Why are translates of travelling waves again travelling waves? A travelling wave solution of a PDE or ODE is a solution that depends on the single variable $\xi=x-ct$. For example consider the PDE $$u_t=u_{xx}+f(u)-w,~~~w_t=\epsilon (u-\gamma w).~~~~~(1)$$ Then, a travelling wave $(u(\xi), w(\xi))$ satisfies $$-cu_{\xi}=u_{\xi\xi}+f(u)-w,~~~~~-cw_{\xi}=\epsilon (u-\gamma w).~~~(2)$$ Now: Why is any translate $(u(\xi-\xi_0), w(\xi-\xi_0))$ with $\xi_0\in\mathbb{R}$ a travelling wave, too? If we express the original PDE (1) not in coordinates $t$ and $x$ but in coordinates $t$ and $\xi=x-ct$, then we get $$u_t=u_{\xi\xi}+cu_{\xi}+f(u)-w,~~~~~w_t= cw_{\xi}+\epsilon (u-\gamma w).~~~(3)$$ For a travelling wave, we then have, because of (2), $$0=u_{\xi\xi}+cu_{\xi}+f(u)-w,~~~~~0=cw_{\xi}+\epsilon (u-\gamma w),$$ hence, a travelling wave is an equilibrium solution of (3). Does this help to argue why any translate of a travelling wave is also a travelling wave? • The translate is a travelling wave since the PDE is invariant under shifts $x\to x + c$, $t\to t + d$, i.e. the PDE using the shifted coordinates is the same PDE as the original PDE. – Winther Dec 1 '15 at 16:26 • If $(u(\xi),w(\xi))$ is an equilibrium for (3), why then a translate of it, too? – M. Meyer Dec 1 '15 at 16:34 • But what's with the terms $f(u)$ and $-w$, for example in the first equation? – M. Meyer Dec 1 '15 at 16:47 • Doesn't this all follow immediately from the fact that a travelling wave is an equilibrium of (3) and hence time independent? So it does not matter if we consider $\xi=x-ct$ or $\xi-k=x-c(t-(k/c))$, i.e. the time $s:=t-(k/c)$ instead of $t$? – M. Meyer Dec 1 '15 at 16:54 • The PDE where the time-independent concept comes from is an abstraction. We know that $\zeta = x - ct$, but let us just consider the PDE $u_t = u_{\zeta\zeta} + \ldots$ like it was any other PDE and $\zeta$ was just a normal spatial coordinate independent of $t$. From this point of view the solution $u$ is time-independent so the solution can be written $u = f(\zeta)$ independent of $t$. However in the picture we started with we have $\zeta = x-ct$ so this means that $u = f(x-ct)$ so the solution of the problem we started with is not time-independent. – Winther Dec 1 '15 at 17:41 If we perform the translation $t\to t - t_0$ and $x\to x - x_0$, using that derivatives are translation invariant $\frac{d}{d(x-x_0)} = \frac{d}{dx}$, we get that the PDEs $$\matrix{u_t(x,t) &=& u_{xx}(x,t) + f(u(x,t)) - w(x,t)\\ w_t(x,t) &=& \epsilon[u(x,t) - \gamma w(x,t)]}$$ become $$\matrix{\hat{u}_t(x,t) &=& \hat{u}_{xx}(x,t) + f(\hat{u}(x,t)) - \hat{w}(x,t)\\ \hat{w}_t(x,t) &=& \epsilon[\hat{u}(x,t) - \gamma \hat{w}(x,t)]}$$ where I have taken $\hat{u}(x,t) = u(x-x_0,t-t_0)$ and $\hat{w}(x,t) = w(x-x_0,t-t_0)$. These are exactly the same PDEs as we started with. If $\{u(x,t),w(x,t)\}$ is a solution then so is $\{u(x-x_0,t-t_0)$, $w(x-x_0,t-t_0)\}$. In terms of the $\zeta$ variable this means that if $\{u(\zeta),w(\zeta)\}$ is a solution then (take $\zeta_0 = x_0 - ct_0$) so is $\{u(\zeta-\zeta_0),w(\zeta-\zeta_0)\}$.
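As a quick symbolic sanity check of this (an added sketch using sympy, not part of the original post): the residuals of the travelling-wave system (2)/(3) evaluated on the shifted profiles are exactly the original residuals evaluated at $\xi-\xi_0$, so they vanish whenever the original ones do.

```python
import sympy as sp

xi, xi0, c, eps, gamma = sp.symbols('xi xi0 c epsilon gamma')
U, W, f = sp.Function('U'), sp.Function('W'), sp.Function('f')

def residuals(u, w):
    # equilibrium form of (3): 0 = u'' + c u' + f(u) - w,  0 = c w' + eps*(u - gamma*w)
    r1 = sp.diff(u, xi, 2) + c * sp.diff(u, xi) + f(u) - w
    r2 = c * sp.diff(w, xi) + eps * (u - gamma * w)
    return r1, r2

r1, r2 = residuals(U(xi), W(xi))               # original travelling wave
s1, s2 = residuals(U(xi - xi0), W(xi - xi0))   # translated profiles

# the translated residuals are the original ones evaluated at xi - xi0,
# so both differences should simplify to 0
print(sp.simplify(s1 - r1.subs(xi, xi - xi0)))
print(sp.simplify(s2 - r2.subs(xi, xi - xi0)))
```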
2019-07-22 01:48:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9117839336395264, "perplexity": 270.4814957144693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527458.86/warc/CC-MAIN-20190722010436-20190722032436-00378.warc.gz"}
https://tex.stackexchange.com/questions/301034/biblatex-mla-compiling-issues
# Biblatex-MLA Compiling Issues [duplicate] So, after recently switching one of my computers to Windows, I found myself needing MLA for class reasons :( Then the fun begins. My extremely minimal example: \documentclass[10pt,letterpaper]{report} \author{Author} \title{Title} \usepackage[style=mla]{biblatex} \begin{document} \maketitle \end{document} which results in Package biblatex Warning: No "backend" specified, using Biber backend. (biblatex) To use BibTeX, load biblatex with (biblatex) the "backend=bibtex" option. (/usr/share/texmf-dist/tex/latex/biblatex/biblatex_.sty (/usr/share/texmf-dist/tex/latex/etoolbox/etoolbox.sty) (/usr/share/texmf-dist/tex/latex/graphics/keyval.sty) (/usr/share/texmf-dist/tex/latex/oberdiek/kvoptions.sty (/usr/share/texmf-dist/tex/generic/oberdiek/ltxcmds.sty) (/usr/share/texmf-dist/tex/generic/oberdiek/kvsetkeys.sty (/usr/share/texmf-dist/tex/generic/oberdiek/infwarerr.sty) (/usr/share/texmf-dist/tex/generic/oberdiek/etexcmds.sty (/usr/share/texmf-dist/tex/generic/oberdiek/ifluatex.sty)))) (/usr/share/texmf-dist/tex/latex/logreq/logreq.sty (/usr/share/texmf-dist/tex/latex/logreq/logreq.def)) (/usr/share/texmf-dist/tex/latex/base/ifthen.sty) (/usr/share/texmf-dist/tex/latex/url/url.sty) (/usr/share/texmf-dist/tex/latex/biblatex/blx-dm.def) (/usr/share/texmf-dist/tex/latex/biblatex/blx-compat.def) (/usr/share/texmf-dist/tex/latex/biblatex/biblatex_.def) (/usr/share/texmf-dist/tex/latex/biblatex-mla/mla.bbx (/usr/share/texmf-dist/tex/latex/biblatex/bbx/standard.bbx) ! Illegal parameter number in definition of \blx@defformat@d. 4 l.56 {\usebibmacro{name:first-last}{#1}{#4 }{#5}{#7}} ! Illegal parameter number in definition of \blx@defformat@d. 5 l.56 {\usebibmacro{name:first-last}{#1}{#4}{#5 }{#7}} ! Illegal parameter number in definition of \blx@defformat@d. 7 l.56 ...sebibmacro{name:first-last}{#1}{#4}{#5}{#7 }} ! Illegal parameter number in definition of \blx@defformat@d. 3 l.57 {\usebibmacro{name:first-last}{#1}{#3 }{#5}{#7}}% ! Illegal parameter number in definition of \blx@defformat@d. 5 l.57 {\usebibmacro{name:first-last}{#1}{#3}{#5 }{#7}}% ! Illegal parameter number in definition of \blx@defformat@d. 7 l.57 ...sebibmacro{name:first-last}{#1}{#3}{#5}{#7 }}% ) (/usr/share/texmf-dist/tex/latex/biblatex-mla/mla.cbx ! Illegal parameter number in definition of \blx@defformat@d. 3 l.682 \usebibmacro{name:first-last}{#1}{#3 }{#5}{#7}% ! Illegal parameter number in definition of \blx@defformat@d. 5 l.682 \usebibmacro{name:first-last}{#1}{#3}{#5 }{#7}% ! Illegal parameter number in definition of \blx@defformat@d. 7 l.682 ...ebibmacro{name:first-last}{#1}{#3}{#5}{#7 }% ! Illegal parameter number in definition of \blx@defformat@d. 3 l.685 \usebibmacro{name:first-last}{#1}{#3 }{#5}{#7}% ! Illegal parameter number in definition of \blx@defformat@d. 5 l.685 \usebibmacro{name:first-last}{#1}{#3}{#5 }{#7}% ! Illegal parameter number in definition of \blx@defformat@d. 
7 l.685 ...ebibmacro{name:first-last}{#1}{#3}{#5}{#7 }% ) (/usr/share/texmf-dist/tex/latex/biblatex/biblatex.cfg))) (/usr/share/texmf-dist/tex/latex/biblatex-mla/english-mla.lbx (/usr/share/texmf-dist/tex/latex/biblatex/lbx/english.lbx)) Now, with an example that minimal, it'd seem to be an issue with tex, but it's consistent across MikTeX, Texlive Windows natives, and Cygwin Texlive ## marked as duplicate by moewe biblatex Mar 27 '16 at 6:17 • Works on linux. Maybe some end-of-line coding problem? Have you moved some file from linux to windows? – Rmano Mar 26 '16 at 21:19 • Created the MWE just a minute ago, the tex installations have all been fresh from the internet. – caffinatedangel Mar 26 '16 at 21:22 • You have been caught by the recent biblatex update. They changed the commands for the representation of names, and the vast majority of non standard biblatex styles have not switched to the new commands. A simple solution is to add backend=bibtex as an option to biblatex (i.e., \usepackage[backend=bibtex,style=mla]{biblatex}) – Guido Mar 26 '16 at 21:28 You have been caught by the recent biblatex update. They changed the commands for the representation of names, and the vast majority of non standard biblatex styles have not switched to the new commands. A simple solution is to add backend=bibtex as an option to biblatex: \usepackage[backend=bibtex,style=mla]{biblatex} • I have created a patch to the biblatex-mla style. It is available from github github.com/gvdgdo/biblatex-mla (and there is a pull request to the original version) – Guido Apr 20 '16 at 23:36
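Putting the pieces together, the question's minimal example with the suggested workaround applied looks like this (a sketch; a real document would of course still add `\addbibresource`, `\cite` commands and `\printbibliography`):

```latex
\documentclass[10pt,letterpaper]{report}
\author{Author}
\title{Title}
% backend=bibtex avoids the new biber name-format commands that the
% (not yet updated) biblatex-mla style does not support
\usepackage[backend=bibtex,style=mla]{biblatex}
\begin{document}
\maketitle
\end{document}
```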
2019-07-23 15:33:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8584949374198914, "perplexity": 10790.88355137612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529480.89/warc/CC-MAIN-20190723151547-20190723173547-00368.warc.gz"}
https://ask.sagemath.org/answers/12898/revisions/
Hi! I think you must have assigned one too many of your variables (like assigning a value to RB) to come up with your last expression. Also, I don't see what predefined, lowercase q you're referring to. In any case, here's roughly what I'd do. First, define the variables and the equation, and then solve it: sage: var("F, Q, M, RB") (F, Q, M, RB) sage: deg = pi/180; sage: sage: eq = -2*F*cos(45*deg)-3*Q+4*RB*cos(45*deg)-RB*sin(45*deg)-M == 0 sage: eq -sqrt(2)*F + 3/2*sqrt(2)*RB - M - 3*Q == 0 sage: sage: solve(eq,RB) [RB == 1/3*sqrt(2)*M + sqrt(2)*Q + 2/3*F] sage: for sol in solve(eq,RB): ....: print sol.rhs() ....: 1/3*sqrt(2)*M + sqrt(2)*Q + 2/3*F Then solve it. By default, solve returns a list of equations, so you can loop over them like I did or index into them using [0], [1], [2], etc. I prefer the solution_dict style: sage: solve(eq,RB, solution_dict=True) [{RB: 1/3*sqrt(2)*M + sqrt(2)*Q + 2/3*F}] sage: sols = solve(eq,RB, solution_dict=True) sage: sols[0][RB] 1/3*sqrt(2)*M + sqrt(2)*Q + 2/3*F in which you get a list of dictionaries, with each dictionary corresponding to a possible solution, and then you simply use the variable name as the index. We can then extract a solution: sage: rb = sols[0][RB] sage: rbval = rb.subs(M=23, Q=6, F=1/4) sage: rbval 41/3*sqrt(2) + 1/6 and I've substituted some basically random M, Q, and F values into it. This can then be evaluated numerically (the evalf equivalent) by using the numerical_approx method: sage: rbval.n() 19.4942520190990 sage: rbval.numerical_approx() 19.4942520190990 sage: rbval.n(100) 19.494252019098965666956412564 There are lots of ways to turn a symbolic expression into a "number", though. See my answer to this question on a similar subject. Does that make sense?
2019-06-15 21:00:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6042692065238953, "perplexity": 3759.9007337786184}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997335.70/warc/CC-MAIN-20190615202724-20190615224724-00410.warc.gz"}
https://www.hackmath.net/en/math-problem/632
# Components In the box are 8 white, 4 blue and 2 red components. What is the probability that we pull one white, one blue and one red component without returning? p = 0.0293 ### Step-by-step explanation: $p=\frac{8}{8+4+2}\cdot \frac{4}{8+4+2-1}\cdot \frac{2}{8+4+2-2}=0.0293$
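A quick check of the arithmetic (an added sketch, not part of the original solution); note that the step-by-step product is the probability of drawing white, then blue, then red in that particular order:

```python
from fractions import Fraction

white, blue, red = 8, 4, 2
total = white + blue + red  # 14 components in the box

# draw without returning: white first, then blue, then red
p = Fraction(white, total) * Fraction(blue, total - 1) * Fraction(red, total - 2)
print(p, float(p))  # 8/273, approximately 0.0293
```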
2021-04-15 07:19:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49518144130706787, "perplexity": 638.0049573404386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038084601.32/warc/CC-MAIN-20210415065312-20210415095312-00150.warc.gz"}
https://math.stackexchange.com/questions/3034751/the-greatest-area-for-a-rectangle-on-a-track-field
# The greatest area for a rectangle on a track field. An athletic field with a perimeter of 0.25 miles consists of a rectangle with a semicircle at each end, as shown below. Find the dimensions that yield the greatest possible area for the rectangular region. This is the work that I did below. I was wondering if this was the greatest possible area for the rectangle below. • Near the end of page $$1$$, you wrote $$r=\frac{2}{16\pi}$$ when you meant to say $$2r=\frac{2}{16\pi}$$ • Once we found out that $$r=\frac{1}{16\pi}$$, we can compute $$l=\frac18 - \pi r= \frac18 -\frac1{16}=\frac1{16}$$ directly without finding $$A$$ explicitly. • you wrote $r=\frac2{16\pi}$ and then you wrote $w=\frac{1}{8\pi}$? – Siong Thye Goh Dec 11 '18 at 1:53 • Great, do not write $r = \frac1{16\pi} \times 2$, you can write $r \times 2 = \frac1{16\pi} \times 2$ or $2r = \frac1{16\pi} \times 2$. – Siong Thye Goh Dec 11 '18 at 1:56
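For reference, here is a compact version of the optimisation consistent with the corrected values in the comments, writing $l$ for the straight side of the rectangle and $r$ for the radius of the semicircles, so the perimeter is $2l+2\pi r$ and the rectangle has width $2r$ (an added sketch, not the poster's original worked images): $$2l+2\pi r=\tfrac14\;\Longrightarrow\; l=\tfrac18-\pi r,\qquad A(r)=2r\,l=\tfrac{r}{4}-2\pi r^2$$ $$A'(r)=\tfrac14-4\pi r=0\;\Longrightarrow\; r=\tfrac{1}{16\pi},\qquad l=\tfrac18-\tfrac1{16}=\tfrac1{16},\qquad 2r=\tfrac{1}{8\pi}$$ So the rectangle of greatest area is $\tfrac{1}{16}$ mile long and $\tfrac{1}{8\pi}$ mile wide.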
2019-09-18 19:53:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.840157687664032, "perplexity": 237.7318606701635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573331.86/warc/CC-MAIN-20190918193432-20190918215432-00471.warc.gz"}
https://www.physicsforums.com/threads/problems-in-elementary-qft.319570/
# Problems in elementary QFT 1. Jun 12, 2009 ### muppet Hi all, I'm trying to teach myself the basics of QFT. I'm using Peskin and Schroeder, and having a few difficulties reproducing a couple of the calculations. I don't think I've made careless algebraic slips, so before I show my working explicitly and beg for proof-reading I'd like to ask a couple of general questions. Firstly, canonical quantisation. 1) Does the field operator act multiplicatively, as the ladder operators act on the complex exponentials inside the integral? 2) Is there an explicit relationship between the operators $$a_{p} , a_{-p}$$ and a corresponding one for their adjoints? (Possibly one valid only when integrating over all momentum?) When trying to compute the hamiltonian and momentum operators (starting from the expressions 2.27, 2.28), I'm getting expressions in terms of operators in p and in -p, and it's not clear to me that they're equivalent to those given. Secondly, is there anything wrong with the relation $$\varepsilon^{ijk}\varepsilon_{ipq}=\delta^{j}_{p}\delta^{k}_{q}-\delta^{j}_{q}\delta^{k}_{p}$$ as applied to the 3d Levi-Civita symbol living in the spatial components of Minkowski space? EDIT: Thanks christo Last edited: Jun 12, 2009 2. Jun 12, 2009 ### malawi_glenn for $$a_{p} , a_{-p}$$ you just have to switch the sign of p in the definition. For evaluating the hamiltonian, you need some tricks... $$\int d\vec{p}f(\vec{p}) =$$ change p -> -p, find the outcome of this as an exercise ;-) 3. Jun 12, 2009 ### Avodyne Operators always act multiplicatively in quantum mechanics. Ladder operators act on the state; the complex exponentials are just numbers. There is no relation among these operators themselves. Inside an integral, it can be useful to change the dummy integration variable from p to -p in some terms, as already noted by malawi_glenn. 4. Jun 13, 2009 ### muppet Thanks for the responses guys. Malawi_Glenn: The good news is, your post convinced me that I shouldn't pick up a minus sign when making the obvious substitution needed to evaluate the hamiltonian, which I'd come to believe but wasn't 100% sure about. I've also taken $$\omega_p = \omega_{-p}$$, which would make physical sense (and gives the right answer :tongue:) but which I've not noticed explicitly stated anywhere. The thing I'm struggling more with is the momentum, where I have terms in both p and -p that I can't appear to separate. As for swapping the signs of the momentum, if I change the sign of the momentum density (and take $$\omega_p = \omega_{-p}$$) then I get $$a_p=a_p^{\dagger}$$, which is obviously not true by the commutation relations. Is the right way to think about this that you take the integrands of the field and momentum density operators to be operators indexed by p, and that you just change the sign on the indices when you make this substitution? Avodyne: By "multiplicatively" I meant in the way the position operator is multiplicative, as opposed to the effect of applying e.g. a differential operator. Would the effect of the field operator be unchanged if you swapped the orders of the exponentials and the ladder operators? As for the epsilon business, I am working in signature (+ - - -).
The reason I ask is the following calculation, trying to parametrise an infinitesimal rotation: $$R= I-\frac{i}{2}\omega_{\mu\nu}S^{\mu\nu} =I-\frac{i}{2}\omega_{ij}S^{ij}$$ Computing the second term: $$\omega_{ij}S^{ij}=\frac{1}{2}\varepsilon_{ijk}\varepsilon^{ijl}\theta^k\Sigma_l =\frac{1}{2}(\delta^{j}_{j}\delta^{k}_{l}-\delta^{j}_{l}\delta^{j}_{k})\theta^k\Sigma_l =0$$ Here $$\Sigma_l$$ is the matrix $$\left( \begin{array}{cc} \sigma_l & 0 \\ 0 & \sigma_l \end{array} \right)$$ As an aside, I've just noticed that Peskin and Schroeder don't lower the index on the matrix when they contract it with the Levi-Civita tensor, if that's significant? 5. Jun 13, 2009 ### malawi_glenn A good thing to have when one studies Peskin is the errata: http://www.slac.stanford.edu/~mpeskin/QFT.html Now the hamiltonian, I have it fully calculated somewhere in my old notes. And it is hard to help you if you don't show us your calculations. 6. Jun 13, 2009 ### xepma Yes, you can switch these without any harm. The ladder operators act on the Hilbert space, i.e. the states $$|\Psi\rangle$$. As for the Levi-Civita tensor, let me give some info: One defines it in the usual sense that it's a completely anti-symmetric tensor and $$\epsilon_{123} = +1$$. Furthermore, by definition, $$\epsilon^{\mu\nu\lambda}$$ is obtained through raising via the Minkowski metric. Since the metric is symmetric and $$\epsilon_{123}$$ is antisymmetric, it follows that the object $$\epsilon^{\mu\nu\lambda}$$ is fully antisymmetric. So we only need to compute one component, for instance: $$\epsilon^{123} = \eta^{11}\eta^{22}\eta^{33}\epsilon_{123} = \pm(\eta_{\mu\nu})\epsilon_{123}$$ where the sign depends on which conventions you use for the metric. You can, of course, also start with $$\epsilon^{123} = 1$$. So as you can see, the relation you wrote down still holds on the basis that both objects are completely antisymmetric. But the overall sign of the relation you wrote down will depend on the choice of the metric (i.e. the sign of the spatial part of the metric) Last edited: Jun 13, 2009 7. Jun 13, 2009 ### muppet Xepma: Thanks for your reply; switching the overall sign of the expression is the only effect of the metric that I could think of. Unfortunately, this means my calculation is wrong in some more fundamental respect, as I know the answer should be the matrix $$\left( \begin{array}{cc} \frac{\sigma.\theta}{2} & 0 \\ 0 & -\frac{\sigma.\theta}{2} \end{array} \right)$$ where theta is a 3-vector specifying rotations around the x,y,z axes and sigma is the "vector of Pauli matrices". (The overall thing is then obviously a 4x4 matrix given in block form.) Malawi_glenn: If I'm allowed to set $$\omega_p = \omega_{-p}$$ and $$\int\frac{d^3p}{(2\pi)^3}a^{\dagger}_{-p}a_{-p}=\int^{+\infty}_{-\infty}\frac{d^3p}{(2\pi)^3}a^{\dagger}_{p}a_{p}$$ then I think my calculation for the Hamiltonian is correct.
Here's my effort with the momentum: $$P= -\int d^3x\,\pi(x)\nabla\phi(x)$$ $$= -\int d^3x\int\frac{d^3p\,d^3p'}{(2\pi)^6}(-i)\sqrt{\frac{\omega_p}{2}}(a_p-a_{-p}^{\dagger})e^{ip.x}\,\nabla\frac{1}{\sqrt{2\omega_{p'}}}(a_{p'}+a_{-p'}^{\dagger})e^{ip'.x}$$ $$=-\int d^3x\int\frac{d^3p\,d^3p'}{(2\pi)^6}\,ip'(-i)\sqrt{\frac{\omega_p}{2}}(a_p-a_{-p}^{\dagger})\,\frac{1}{\sqrt{2\omega_{p'}}}(a_{p'}+a_{-p'}^{\dagger})e^{i(p+p').x}$$ $$=-\int\frac{d^3p\,d^3p'}{(2\pi)^3}\,p'\,\delta(p'+p)\sqrt{\frac{\omega_p}{2}}(a_p-a_{-p}^{\dagger})\,\frac{1}{\sqrt{2\omega_{p'}}}(a_{p'}+a_{-p'}^{\dagger})$$ $$=\frac{1}{2}\int\frac{d^3p}{(2\pi)^3}p(a_pa_{-p}+a_pa_p^\dagger-a_{-p}^\dagger a_{-p}-a_{-p}^\dagger a^\dagger_p)$$ Here p, p', x are all 3-vectors. By the substitution p->-p above, the middle two terms will yield the delta function which we will go on to neglect: $$\frac{1}{2}\int\frac{d^3p}{(2\pi)^3}p(a_pa_p^\dagger-a_{-p}^{\dagger}a_{-p} )=\frac{1}{2}\int\frac{d^3p}{(2\pi)^3}p(a_pa_p^\dagger-a_{p}^{\dagger}a_p)$$ This leaves me needing the statement $$\frac{1}{2}\int\frac{d^3p}{(2\pi)^3}p(a_pa_{-p}-a_{-p}^\dagger a_p^\dagger)= \int\frac{d^3p}{(2\pi)^3}pa_p^{\dagger}a_{p}$$ to be true, and I don't see why it is; equally I've checked the above working repeatedly, so if I've made a mistake there there's a good chance I've misunderstood something. 8. Jun 13, 2009 ### malawi_glenn ok i will have a serious look at it next week, stay tuned. Meanwhile, do you know of Srednicki's book? http://www.physics.ucsb.edu/~mark/qft.html he does a quite detailed calculation, you might be able to pick up many things from there 9. Jun 13, 2009 ### RedX I'm not so sure this is correct. Maybe the answer should not have a negative sign? In the Weyl representation, the bottom block should transform as the adjoint of the top block (the left and right chiral fields are related by the adjoint operation). But the adjoint of the top block equals the top block, so the bottom block should also be the same as the top block. 10. Jun 13, 2009 ### muppet Sorry RedX, you're absolutely right. I was careless about writing that out (TeXing stuff is taking me ages as I'm really only just getting used to it!). It is, however, very definitely non-zero... Thanks for the recommendation malawi-glenn, Srednicki's book is out on loan at my uni library at the moment but I'll try and have a look at some point. Thanks in general for all this help guys, I've just finished my 3rd year at uni so some of this stuff is ... stretching 11. Jun 13, 2009 ### malawi_glenn but you can obtain it at his website.... free 12. Jun 13, 2009 ### muppet Wow. I'll have a look at that later tonight. Thanks! 13. Jun 13, 2009 ### malawi_glenn I know I got some hints from his calculations when I did this the first time 14. Jun 13, 2009 ### RedX The negative sign you have for the spin matrix on the bottom right corner should be positive, and not negative, so you did the calculation right. However, my reasoning for why it should have been positive is sketchy so don't listen to that. For boosts however, the negative sign is correct. 15. Jun 13, 2009 ### Avodyne No. In the third term, you get an extra minus sign from p->-p, because you have a coefficient p. So you do not get a commutator here. In fact the sum of the 2nd and 3rd terms add up to the total you want. And $p a_p a_{-p}$ vanishes when integrated over p, because it is odd in p. Same for its hermitian conjugate, so the 1st and 4th terms are zero. 16. Jun 13, 2009 ### muppet Brilliant. Thanks Avodyne! 17.
Jun 13, 2009 ### muppet I can get the corresponding calculation for boosts out fine, which makes me wonder if it's something in the way I'm parametrising the infinitesimal rotation. But I can't work out for the life of me what it is! 18. Jun 14, 2009 ### malawi_glenn So you don't need me anymore? ;-) 19. Jun 14, 2009 ### muppet You're not escaping that easily! :tongue: 20. Jun 16, 2009 ### muppet The Dirac delta function is in 3d, so summing over the repeated index j gives the correct answer. Sorted. Thanks for all your help guys!
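For completeness, here is the step from post #15 written out (an added note, using the same conventions as the calculation above). Changing the integration variable $p\to -p$ in the third term flips the explicit factor of $p$: $$-\frac{1}{2}\int\frac{d^3p}{(2\pi)^3}\,p\,a^{\dagger}_{-p}a_{-p} \;=\; +\frac{1}{2}\int\frac{d^3p}{(2\pi)^3}\,p\,a^{\dagger}_{p}a_{p},$$ so the second and third terms together give $\int\frac{d^3p}{(2\pi)^3}\,p\,a^{\dagger}_{p}a_{p}$ (the commutator piece from reordering $a_pa_p^{\dagger}$ is proportional to $p\,\delta^3(0)$, which is odd in $p$ and integrates to zero). The first and fourth terms vanish for the same parity reason: $a_p a_{-p}$ and $a^{\dagger}_{-p}a^{\dagger}_{p}$ are even under $p\to-p$ (the operators commute among themselves) while the prefactor $p$ is odd, so $\int d^3p\; p\, a_p a_{-p}=0$ and likewise for its hermitian conjugate.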
2017-10-19 05:56:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8437508344650269, "perplexity": 926.6414902944738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823229.49/warc/CC-MAIN-20171019050401-20171019070401-00099.warc.gz"}
https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Electronics/Book%3A_Laboratory_Manual%3A__Operational_Amplifiers_and_Linear_Integrated_Circuits_(Fiore)/13%3A_Precision_Rectifiers/13.7%3A_Data_Tables
# 13.7: Data Tables

| Error | Quantity | Estimate | Actual |
|---|---|---|---|
| $$R/2$$ is simply $$R$$ | $$V_{out}$$ | | |
| $$D_2$$ is shorted | $$V_{out}$$ | | |
| $$R_i$$ of op amp 1 is open | $$V_{out}$$ | | |

Table $$\PageIndex{1}$$ This page titled 13.7: Data Tables is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by James M. Fiore via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
2022-09-27 04:31:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9980120062828064, "perplexity": 1988.0131776343749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00367.warc.gz"}
https://www.physicsforums.com/threads/simple-harmonic-motion-of-a-block-on-a-spring.376164/
# Simple Harmonic Motion of a Block on a Spring 1. Feb 7, 2010 ### Neodudeman 1. The problem statement, all variables and given/known data There's a block attached to a spring on a frictionless surface that oscillates back and forth. (Assume no damping.) At t=0, the potential energy in the spring is 25% of the maximum potential energy. Kinetic energy decreases with time at t=0, and at t=2, the kinetic energy becomes 0 for the first time. Determine the frequency of this motion. 2. Relevant equations $$\frac{1}{2}mv^2 + \frac{1}{2}kx^2 = E_{net}$$ $$x(t) = A\cos(\omega t+\phi_0)$$ 3. The attempt at a solution Ok. In order to get the frequency, we need the period. To get the period, we solve for $$\omega$$ by using the position function. To use the position function, we must first find $$\phi$$. So, according to the data, at t=0 the PE is 25% of the maximum potential energy. We know that the maximum potential energy is equal to $$\frac{1}{2}kA^2$$. Thus, 25% of the maximum potential energy is equal to $$\frac{1}{4}\cdot\frac{1}{2}kA^2$$. Therefore, at t=0, $$\frac{1}{4}\cdot\frac{1}{2}kA^2=\frac{1}{2}kx^2.$$ Solving for x, we get $$x=\frac{1}{2}A.$$ Putting that into the position function, at t=0: $$\frac{1}{2}A=A\cos(\omega\cdot 0+\phi)$$ $$\frac{1}{2}=\cos(\phi)$$ $$\arccos\left(\tfrac{1}{2}\right)=\phi$$ $$\phi=\pi/3$$ Now, solving for $$\omega$$ and the period. And this is where I have a problem... $$x(t)=A\cos(\omega t+\phi)$$ At t=2, the potential energy is max, meaning the kinetic energy is 0. Thus, the position x is equal to the amplitude A. $$A=A\cos(\omega\cdot 2+\pi/3)$$ Dividing by A, and using $$\omega = 2\pi/T$$: $$1=\cos(4\pi/T+\pi/3)$$ Since $$\arccos(1) = 0$$: $$0 = 4\pi/T+\pi/3$$ Subtracting $$\pi/3$$: $$-\pi/3 = 4\pi/T$$ $$-1/3=4/T$$ This gives us T=-12, which, I'm 90% sure, we cannot have. A negative period gives a negative frequency. Where did I mess up in this problem? :/ 2. Feb 7, 2010 ### thebigstar25 I had a look at your solution .. maybe I can't be helpful .. but you know that cos(60°)=cos(-60°)=0.5 .. if you use -60° instead of 60° (i.e. $$-\pi/3$$ instead of $$\pi/3$$) you will end up with the right sign, since it is impossible (100%) to have a negative frequency or period; they are always positive .. 3. Feb 7, 2010 ### Neodudeman
2017-11-20 14:12:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7068946361541748, "perplexity": 942.6743181065424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806066.5/warc/CC-MAIN-20171120130647-20171120150647-00011.warc.gz"}
https://airel.ee/docs/spectops-software/configuration/
Configuration

Spectops is configured from the measurement setup window. • New setup: Creates a new measurement setup file and shows the setup configuration window. • Open setup: Opens an existing measurement setup file and shows the setup configuration window before starting the measurements. • Change setup: Opens the measurement setup window for the currently running setup. Modifying the setup using these three commands will restart the measurements. That means that the visible data buffer is cleared. For simple changes in the setup there is the *quick setup* option, which will not restart the measurements; however, not all settings can be modified this way. • Measurement setup file: The name of the measurement setup file. Instrument package, inversion package and output folder paths will be relative to the path of this file. • Instrument package: Path to the extracted instrument package folder. Usually named (name)_instrument. • Serial port: Serial port name where the instrument is connected. A drop-down list of available serial ports is presented, but it is not guaranteed that all ports are there. You may enter the name by hand. • Measurement sequence: Operating modes and their respective durations in seconds as a comma separated list. For example particles 120, ions 120, offset 60. • Averaging periods: Comma separated list of averaging periods that should be generated in addition to block averages. • Inversion package: The file name of the inversion matrix package that is used to produce the spectra. Usually it is in a subfolder of the instrument package folder.
2019-03-26 07:59:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41485363245010376, "perplexity": 3404.8990059105518}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204885.27/warc/CC-MAIN-20190326075019-20190326101019-00351.warc.gz"}
https://www.bgce.de/curriculum/projects/modal-based-damage-localisation/
# Modal-based Damage Localisation supervised by: M.Sc. Antonios Kamariotis from the Engineering Risk Analysis Group (TUM), Prof. Dr. Daniel Straub from the Engineering Risk Analysis Group (TUM) and Prof. Dr. Eleni Chatzi from the Dep. of Civil, Env. and Geomatic Eng. (ETH). This project studies the different damage indices derived from the damage sensitive features (DSFs) (i.e. mode shapes, mode shape curvature, modal strain energy and the flexibility matrix), following chapter 7 of the book Structural Health Monitoring (Ref. 1), on the numerical benchmark model developed by Konstantinos and Chatzi documented in Ref. 2 and Ref. 3. We can perform modal analysis and time history analysis with the simulator. Here we mainly take the modal analysis as a reference and check which damage indices are more informative based on the time history analysis. We studied the single damage case and the case where two damages are applied at the same time. It turns out that all three methods work in the settings studied here. The flexibility matrix method is less informative, probably because it needs the mass-normalised mode shapes. This study does not involve model updating; it serves as a basic introduction to structural health monitoring (SHM). ## 1. Introduction—Modal-based Damage Localisation The aim of the project is to localise damage based on the modal analysis method. In the introduction we first present the background of damage localisation in Section 1.1 and then the basic modal analysis method in Section 1.2. ### 1.1 Damage Localisation Damage localisation is a subtask of structural health monitoring (also called damage identification). Table 1 lists the four levels of damage identification. Damage detection answers the question whether there is damage in the structure. Localisation answers the question where the damage could be. Classification and quantification answer what type the damage is and how severe it is. Finally, engineers need to make a decision or take actions based on the damage evaluation and the prediction of how long the structure will survive. In our study, we assume the damage type is a stiffness reduction caused by bending, and we go up to the 2nd level (damage localisation), with some damage quantification from the 3rd level.

| Level | Task |
|---|---|
| 1st level | Damage Detection |
| 2nd level | Damage Localisation |
| 3rd level | Damage Classification & Quantification |
| 4th level | Prediction |

There are nine fundamental axioms of structural health monitoring presented in chapter 1 of the book Structural Health Monitoring (Ref. 1). We list the first four of them to give a first feeling for structural health monitoring. 1. All materials have inherent flaws or defects. 2. Damage assessment requires a comparison between two system states. 3. Identifying the existence and location of damage can be done in an unsupervised learning mode, but identifying the type of damage present and the damage severity can generally only be done in a supervised learning mode. 4. Sensors cannot measure damage. Feature extraction through signal processing and statistical classification are necessary to convert sensor data into damage information. The second axiom reminds us that we usually need to compare the healthy state and the damaged state to assess a damage. The fourth axiom tells us that damage cannot be measured directly, but can only be derived from sensor data that we can get — e.g.
In our case the sensor data are acceleration signals at specific degrees of freedom of the finite element model. From the undamaged to the damaged state, the mass, stiffness and damping matrices may all change. However, we assume here that no damping is considered and that the dominant damage type causes only a stiffness reduction, so the mass matrix remains the same. From the dynamic analysis we obtain the eigenvalues and eigenvectors of the system. In modal analysis, the eigenvalues (resonance frequencies) and eigenvectors (mode shapes) are the basic modal properties. Based on these, the mode shape curvature (MSC), the modal strain energy and the flexibility matrix can be further derived, as defined in Section 3. The idea of using modal analysis to identify the damage (more precisely, the stiffness reduction) is summarised in Fig. 1.

Fig. 1: The modal analysis

## 2. Method

### 2.1 Workflow

We use the simulator developed by Tatsis and Chatzi, documented in Ref. 2 and Ref. 3. With the simulator we can generate acceleration data or mode shapes at predefined degrees of freedom of the FEM model, and we can define the geometry, load case and boundary conditions. For example, if we input the location and severity of a damage, pseudo-observation data for each sensor can be generated; these data can be accelerations or mode shapes. Based on the observation data, the modal-based method is applied to find the location of the damage we defined before. The general workflow is summarised in Fig. 2.

Fig. 2: The workflow of the whole project.

### 2.2 Simulator settings

Below we repeat the simulator settings introduced in Ref. 2 and Ref. 3: geometry, finite element choice, material assumptions, boundary conditions, damage settings and sensor locations.

Fig. 3: The introduction to the numerical benchmark model

#### 2.2.1 Geometry

The lengths, height and width of the benchmark model in Fig. 3 are: L1 = 12, L2 = 13, h = 0.6, w = 0.1.

#### 2.2.2 Finite element choice

200×6 isoparametric quadrilateral elements for the plane-stress problem are used in the FEM model, with dx = 25/200 and dy = 0.6/6.

#### 2.2.3 Material assumption

Linear elastic material with E = 30e10, Poisson ratio ν = 0.3 and material density ρ = 7800.

#### 2.2.4 Boundary condition

Only distributed loads modelled as independent Gaussian white noise F4(x) are applied. The supports are the same as in the simulator settings.

#### 2.2.5 Damage settings

The damages are denoted by the black boxes at 1/4, 1/2 and 3/4 of the bridge span. At each position, a different number of elements around that position is affected. For example, damage settings 1, 4 and 8 have only one damaged element, while damage settings 7, 11 and 12 have ten damaged elements each. To make the results more obvious we study the damage cases where ten elements are involved. For each damage setting there are also three predefined damage severities, i.e. 20%, 50% and 70% stiffness reduction.

Fig. 4: The damage location

#### 2.2.6 Sensor locations

In the original settings, 60 sensors are predefined in the simulator, shown by the black dots in Fig. 3. Each sensor returns values in the x and y directions (either mode shapes or accelerations). We define the Upper-, Middle- and Lower-sensors as the groups of sensors whose y coordinate is 0.5, 0.3 and 0.1, respectively. Since only the values in the y direction are of interest for the bending deformation, we finally choose Uppersensor_y and the reduced set Uppersensor_y_less (10 sensors), shown in Fig. 5, for further study. The distance along the x direction between two neighbouring sensors is δx = 10dx (20dx for the reduced set).

Fig. 5: The sensor location of the numerical benchmark model
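Since every damage index in Section 3 is built from modal data, it may help to make explicit how natural frequencies and mode shapes follow from the mass and stiffness matrices of a finite element model. The sketch below is only an illustration of this step and is not part of the benchmark simulator; the matrices `M` and `K` are small placeholders standing in for the assembled FEM matrices.

```python
import numpy as np
from scipy.linalg import eigh

def modal_analysis(K, M, n_modes=6):
    """Solve the generalized eigenproblem K @ phi = omega^2 * M @ phi.

    Returns the first n_modes natural frequencies (rad/s) and the
    corresponding mass-normalised mode shapes (columns of Phi).
    """
    # eigh solves the symmetric generalized eigenvalue problem and returns
    # eigenvectors normalised such that Phi.T @ M @ Phi = I
    eigvals, Phi = eigh(K, M)
    omega = np.sqrt(eigvals[:n_modes])
    return omega, Phi[:, :n_modes]

# Toy 3-DOF example standing in for the assembled FEM matrices
M = np.diag([2.0, 2.0, 2.0])
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])
omega, Phi = modal_analysis(K, M, n_modes=3)
print("natural frequencies [rad/s]:", omega)
```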
## 3. Damage Assessment

### 3.1 Damage Index Overview

To define a damage index, we need to choose a damage-sensitive feature (DSF). Since damage assessment is a comparison between two states, we also need an appropriate mathematical operation to expose the difference between the healthy and the damaged state:

Subtraction-based indices:
1. DamageIndex_sub = DSF_damaged − DSF_undamaged
2. DamageIndex_absdiff = abs(DSF_damaged − DSF_undamaged)
3. DamageIndex_absabsdiff = abs(abs(DSF_damaged) − abs(DSF_undamaged))
4. DamageIndex_abssqudiff = abs((DSF_damaged)² − (DSF_undamaged)²)

Division-based index:
DamageIndex_div = DSF_damaged / DSF_undamaged

In the following we try different damage-sensitive features to build the damage index. The method is introduced on an example with the configuration: damage setting 7, with sensors along Uppersensor_y and Uppersensor_y_less.

### 3.2 Mode Shape Curvature

The mode shape curvature is the second spatial derivative of a mode shape. In the FEM model we can use the central difference scheme to approximate this second derivative from the mode shape values at the sensor positions.
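As a concrete illustration of Sections 3.1 and 3.2, the following sketch computes the mode shape curvature with a central difference scheme and forms the absolute-difference damage index from it. It is a simplified stand-in that assumes evenly spaced sensors along the beam axis and a single mode shape sampled at those sensors; the perturbed mode shape is hypothetical and the snippet is not the benchmark code.

```python
import numpy as np

def mode_shape_curvature(phi, dx):
    """Second spatial derivative of a mode shape sampled at evenly spaced
    sensor positions, via central differences (the two endpoints are dropped)."""
    return (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2

def msc_damage_index(phi_undamaged, phi_damaged, dx):
    """Absolute-difference damage index built on the mode shape curvature."""
    msc_un = mode_shape_curvature(phi_undamaged, dx)
    msc_da = mode_shape_curvature(phi_damaged, dx)
    return np.abs(msc_da - msc_un)

# Hypothetical example: first bending mode of a simply supported beam,
# with a small local perturbation mimicking a stiffness reduction.
x = np.linspace(0.0, 25.0, 21)       # 21 sensor positions along the span
dx = x[1] - x[0]
phi_un = np.sin(np.pi * x / 25.0)
phi_da = phi_un.copy()
phi_da[10] += 0.02                   # local change around mid-span
di = msc_damage_index(phi_un, phi_da, dx)
print("index peaks at sensor", np.argmax(di) + 1)  # +1 because endpoints were dropped
```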
### 3.3 Modal Strain Energy

The strain energy stored in a spring when the structure deforms in one of its mode shapes is
$U = \frac{1}{2}k{\Delta x}^2$
where $$\Delta x$$ is the change of length of the spring from its undeformed state. The strain energy $$U$$ of a Euler-Bernoulli beam of length $$l$$ is given by
$$U = \frac{1}{2}\int_{0}^{l}EI\left(\frac{\partial^2 w}{\partial x^2}\right)^2\,dx$$,
where $$w$$ is the transverse displacement of the beam and $$x$$ is the coordinate along the length of the beam.

For a particular mode shape $${\phi}_i$$, the strain energy associated with the deformation in that mode shape pattern is
$U_i = \frac{1}{2}\int_{0}^{l}EI\left(\frac{\partial^2 {\phi}_i}{\partial x^2}\right)^2\,dx$
Since the beam is subdivided into N divisions as shown in the figure, the energy associated with each subregion j due to the ith mode is given by
$\begin{split} U_{ij} & = \frac{1}{2}\int_{a_j}^{a_{j+1}}(EI)_j\left(\frac{\partial^2 {\phi}_i}{\partial x^2}\right)^2\,dx\\ & = \frac{1}{2}(EI)_j\int_{a_j}^{a_{j+1}}\left(\frac{\partial^2 {\phi}_i}{\partial x^2}\right)^2\,dx \end{split}$
where the fractional energy is defined as $$F_{ij} = \frac{U_{ij}}{U_i}$$. Similar quantities can be defined for the damaged structure: $$F_{ijd} = \frac{U_{ijd}}{U_{id}}$$, with $$\sum_{j=0}^{N} F_{ij}=\sum_{j=0}^{N} F_{ijd} = 1$$.

If it is assumed that the damage is primarily located at a single subregion, then the fractional energy remains relatively constant in the undamaged subregions and also in the damaged region k:
$F_{ikd}=F_{ik}$
$\frac{U_{ikd}}{U_{id}}=\frac{U_{ik}}{U_i}\frac{(EI)_k}{((EI)_d)_k}=\frac{\frac{\int_{a_k}^{a_{k+1}}(\frac{\partial^2 \phi_{i,d}}{\partial x^2})^2\,dx}{\int_{0}^{l}(\frac{\partial^2 \phi_{i,d}}{\partial x^2})^2\,dx }}{\frac{\int_{a_k}^{a_{k+1}}(\frac{\partial^2 \phi_{i}}{\partial x^2})^2\,dx}{\int_{0}^{l}(\frac{\partial^2 \phi_{i}}{\partial x^2})^2\,dx}}\equiv \frac{f_{ikd}}{f_{ik}}$
The damage index (DI) can be defined as the sum over all mode orders,
$\beta_k = \frac{\sum_{i=1}^{m} f_{ikd}}{\sum_{i=1}^{m}f_{ik}}$
or as the sum over the first two orders only,
$\beta_{2k} = \frac{\sum_{i=1}^{2} f_{ikd}}{\sum_{i=1}^{2}f_{ik}}$

### 3.4 Flexibility Matrix

$[f]= [K][y]$
$[y]= [K]^{-1}[f]=[G][f]$
where $$[f]$$ is the vector of static loads applied to the structure and $$[y]$$ is the corresponding deformation vector. For an undamaged structure with m mass-normalised modal vectors identified from experimental data obtained at n degrees of freedom, the flexibility matrix $$[G]_{n\times n}$$ can be derived from the modal data as follows:
$[G]\approx [\Phi][\Omega]^{-1}[\Phi]^T\approx \sum_{i=1}^{m}\frac{1}{(\omega_i)^2}[\phi]_i[\phi]_i^T$
where $$[\Phi] = \left[[\phi]_1,[\phi]_2,\dots,[\phi]_m\right]$$ is the mode shape matrix, $$[\phi]_i$$ is the $$i^{th}$$ mass-normalised mode shape, $$[\Omega]=diag(\omega_i^2)$$ is the modal stiffness matrix and $$\omega_i$$ is the $$i^{th}$$ modal frequency. Inspired by this expression for $$[G]$$, we define for each order
$[g] \approx \frac{1}{(\omega_i)^2}[\phi]_i[\phi]_i^T$
The damage index (DI) is then defined as illustrated in the figure "Flexibility Matrix damage index illustration". We obtain the first m modes, so for each order we have an $$n\times n$$ flexibility matrix $$[g]$$, where n is the number of degrees of freedom, i.e. the number of sensors we set up. Based on the per-order flexibility matrix $$[g]$$, a vector $$\vec{v}_{n \times 1}$$ is defined by summing all the columns of $$[g]$$. In the same way as for the mode shape curvature, we compute the curvature $$\vec{vc}_{n \times 1}$$ of $$\vec{v}$$ and define the damage index through
$FMDI_{absabsdiff} =||\vec{vc}_{da}|-|\vec{vc}_{un}||$
$FMDI_{abssqudiff} =|(\vec{vc}_{da})^2-(\vec{vc}_{un})^2|$
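To make the notation of Section 3.4 concrete, here is one possible way to assemble the flexibility-based index from identified modes. It assumes the mode shapes are already mass-normalised (which, as discussed in the results, is not the case for the SSI output used in this project), sums the per-mode contributions, and uses hypothetical input arrays; it is a sketch, not the project code.

```python
import numpy as np

def flexibility_per_mode(phi_i, omega_i):
    """Per-mode flexibility contribution [g] ~ (1/omega_i^2) * phi_i phi_i^T."""
    return np.outer(phi_i, phi_i) / omega_i**2

def fm_damage_index(Phi_un, om_un, Phi_da, om_da, dx):
    """Flexibility-matrix damage index FMDI_absabsdiff accumulated over modes.

    Phi_*: (n_sensors, n_modes) mass-normalised mode shapes
    om_*:  (n_modes,) natural frequencies
    """
    n, m = Phi_un.shape
    di = np.zeros(n - 2)
    for i in range(m):
        g_un = flexibility_per_mode(Phi_un[:, i], om_un[i])
        g_da = flexibility_per_mode(Phi_da[:, i], om_da[i])
        # column sums of [g] give the vector v for each state
        v_un, v_da = g_un.sum(axis=1), g_da.sum(axis=1)
        # curvature of v via central differences, as for the mode shapes
        vc_un = (v_un[2:] - 2 * v_un[1:-1] + v_un[:-2]) / dx**2
        vc_da = (v_da[2:] - 2 * v_da[1:-1] + v_da[:-2]) / dx**2
        di += np.abs(np.abs(vc_da) - np.abs(vc_un))
    return di
```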
## 4. Results and Discussion

### Discussion

The damage index based on the natural frequencies can give us information about the existence of damage and its relative severity. The mode shapes are easily affected by noise, here mainly the noise from the stochastic subspace identification (SSI) process (Ref. 4). In the example we obtain the first six modes, ordered from the smallest to the largest frequency; the higher-order modes carry more noise. So when we only sum up the first two orders, the noise from the higher orders is avoided and the plot gives us useful damage information. For the damage index based on the mode shape curvature, the sum of the first two orders is likewise more informative than the sum over all orders. The same holds for the damage indices based on the modal strain energy and the flexibility matrix. The detailed damage index plots for each order are in the pdf file here: HonorsProjectPresentation - BGCE

### Future work

The damage index based on the flexibility matrix needs mass-normalised mode shapes. Here, however, we only use SSI2 with the scaling factor max(|vector|). If mass-normalised mode shapes were available, the flexibility-matrix damage index would be more informative, and the higher orders could become informative as well: since the expression for [G] has the frequency squared in the denominator, the noise of the higher-order modes is suppressed in this way.

## References

[1] Farrar, Charles R. and Worden, Keith. Structural Health Monitoring: A Machine Learning Perspective. John Wiley & Sons, 2012.
[2] Tatsis, Konstantinos and Chatzi, Eleni. A numerical benchmark for system identification under operational and environmental variability. 8th International Operational Modal Analysis Conference, 2019.
[3] Tatsis, Konstantinos, Ntertimanis, Vasilis K. and Chatzi, Eleni. Modal-based damage localization on wind turbine blades under environmental variability. 9th European Workshop on Structural Health Monitoring: Online Proceedings, 2018.
[4] Peeters, Bart and De Roeck, Guido. Reference-based stochastic subspace identification for output-only modal analysis. Mechanical Systems and Signal Processing, Elsevier, 1999.
2021-10-25 03:22:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6225919723510742, "perplexity": 1344.9380432649818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587623.1/warc/CC-MAIN-20211025030510-20211025060510-00218.warc.gz"}
http://assert.pub/arxiv/cond-mat/all/
### Top 10 Arxiv Papers Today in Condensed Matter

##### #1. Josephson critical currents in annular superconductors with Pearl vortices

We investigate the influence of Pearl vortices in the vicinity of an edge-type Josephson junction for a superconducting thin-film loop in the form of an annulus, under uniform magnetic field. Specifically, we obtain the exact analytic formulation that allows to describe the circulating current density and the gauge invariant phase increment $\Delta\phi$ across the junction. The main properties of $\Delta\phi$ and their influence on the critical current pattern $I_c(B)$ are described quantitatively in terms of the loop's width to radius ratio $W/R$ and of the vortex position within the loop ${\bf r}_v$. It is shown that narrow loops ($W/R < 0.3$) may be well described by the straight geometry limit. However, such approximation fails to predict a number of distinctive features captured by our formulation, as the node lifting effect of the $I_c(B)$ pattern in wide loops or the actual influence of a vortex pinned at different positions.

##### #2. Phonon-induced giant linear-in-$T$ resistivity in magic angle twisted bilayer graphene: Ordinary strangeness

###### Fengcheng Wu, Euyheon Hwang, Sankar Das Sarma

We theoretically show that the recently fabricated flatband twisted bilayer graphene should have an extremely enhanced linear-in-temperature resistivity in the metallic regime with the resistivity magnitude increasing dramatically as the twist angle approaches the magic angle ($\sim 1^{\circ}$). The slope of the resistivity versus temperature could approach hundreds of ohms per kelvin with a very strong angle dependence, but with a rather weak dependence on the carrier density. This dramatic angle-tuned resistivity enhancement arises from the huge increase in the effective electron-phonon coupling in the system due to the suppression of graphene Fermi velocity induced by the flatband condition. Our calculated temperature dependence is reminiscent of the so-called `strange metal' transport behavior except that it is arising from the ordinary electron-phonon coupling in a rather unusual parameter space due to the generic moir\'e flatband structure of twisted bilayer graphene.

##### #3. Machine Learning Characterization of Structural Defects in Amorphous Packings of Dimers and Ellipses

###### Matt Harrington, Andrea J. Liu, Douglas J. Durian

Structural defects within amorphous packings of symmetric particles can be characterized using a machine learning approach that incorporates structure functions of radial distances and angular arrangement. This yields a scalar field, \emph{softness}, that correlates with the probability that a particle is about to rearrange. However, when particle shapes are elongated, as in the case of dimers and ellipses, we find the standard structure functions produce imprecise softness measurements.
Moreover, ellipses exhibit deformation profiles in stark contrast to circular particles. In order to account for effects of orientation and alignment, we introduce new structure functions to recover predictive performance of softness, as well as provide physical insight to local and extended dynamics. We study a model disordered solid, a bidisperse two-dimensional granular pillar, driven by uniaxial compression and composed entirely of monomers, dimers, or ellipses. We demonstrate how the computation of softness via support vector machine extends...

##### #4. Magnetic properties of the itinerant A-type antiferromagnet CaCo2P2 studied by 59Co and 31P NMR

###### N. Higa, Q. -P. Ding, A. Teruya, M. Yogi, M. Hedo, T. Nakama, Y. Ōnuki, Y. Furukawa

$^{59}$Co and $^{31}$P nuclear magnetic resonance (NMR) measurements in external magnetic and zero magnetic fields have been performed to investigate the magnetic properties of the A-type antiferromagnetic (AFM) CaCo$_2$P$_2$. NMR data, especially, the nuclear spin lattice relaxation rates 1/$T_1$ exhibiting a clear peak, provide clear evidence for the AFM transition at a N\'eel temperature of $T_{\rm N}\sim$110~K. The magnetic fluctuations in the paramagnetic state were found to be three-dimensional ferromagnetic, suggesting ferromagnetic interaction between Co spins in the ${\it ab}$ plane characterize the spin correlations in the paramagnetic state. In the AFM state below $T_{\rm N}$, we have observed $^{59}$Co and $^{31}$P NMR signals under zero magnetic field. From $^{59}$Co NMR data, the ordered magnetic moments of Co are found to be in $ab$ plane and are estimated to be 0.35 $\mu_{\rm B}$ at 4.2 K. Furthermore, the external field dependence of $^{59}$Co NMR spectrum in the AFM state suggests a very weak magnetic anisotropy...

##### #5. Engineering Quantum Interference

###### M. Lucci, V. Merlo, I. Ottaviani, M. Cirillo, D. Badoni, V. Campanari, G. Salina, J. G. Caputo, L. Loukitch

A model for describing interference and diffraction of wave functions of one-dimensional Josephson array interferometers is presented. The derived expression for critical current modulations accounts for an arbitrary number of square junctions, variable distance between these, and variable size of their area. Predictions are tested on real arrays containing up to 20 equally spaced and identical junctions and on arrays shaped with peculiar geometries. Very good agreement with the modulations predicted by the model and the experimental results is obtained for all the tested configurations.
It is shown that specific designs of the arrays generate significant differences in their static and dynamical (non-zero voltage) properties. The results demonstrate that the magnetic field dependence of Josephson supercurrents shows how interference and diffraction of macroscopic quantum wavefunctions can be manipulated and controlled.

##### #6. Microwave cavity detected spin blockade in a few electron double quantum dot

###### A. J. Landig, J. V. Koski, P. Scarlino, C. Reichl, W. Wegscheider, A. Wallraff, K. Ensslin, T. Ihn

We investigate spin states of few electrons in a double quantum dot by coupling them weakly to a magnetic field resilient NbTiN microwave resonator. We observe a reduced resonator transmission if resonator photons and spin singlet states interact. This response vanishes in a magnetic field once the quantum dot ground state changes from a spin singlet into a spin triplet state. Based on this observation, we map the two-electron singlet-triplet crossover by resonant spectroscopy. By measuring the resonator only, we observe Pauli spin blockade known from transport experiments at finite source-drain bias and detect an unconventional spin blockade triggered by the absorption of resonator photons.

##### #7. Influence of temperature on the displacement threshold energy in graphene

###### Alexandru I. Chirita, Toma Susi, Jani Kotakoski

The atomic structure of nanomaterials is often studied using transmission electron microscopy. In addition to image formation, the energetic electrons may also cause damage while impinging on the sample. In a good conductor such as graphene the damage is limited to the knock-on process caused by elastic electron-nucleus collisions. This process is determined by the kinetic energy an atom needs to be sputtered, ie, its displacement threshold energy. This is typically assumed to have a fixed value for all electron impacts on equivalent atoms within a crystal. Here we show using density functional tight-binding simulations that the displacement threshold energy is affected by the thermal perturbation of the atoms from their equilibrium positions. We show that this can be accounted for in the estimation of the displacement cross section by replacing the constant threshold value with a distribution. The improved model better describes previous precision measurements of graphene knock-on damage, and should be considered also for other...

##### #8. Oxygen Electromigration and Energy Band Reconstruction Induced by Electrolyte Field Effect at Oxide Interfaces

###### S. W. Zeng, X. M. Yin, T. S. Herng, K. Han, Z. Huang, L. C. Zhang, C. J. Li, W. X. Zhou, D. Y. Wan, P. Yang, J. Ding, A. T. S. Wee, J. M. D. Coey, T. Venkatesan, A. Rusydi, A. Ariando
Electrolyte gating is a powerful means for tuning the carrier density and exploring the resultant modulation of novel properties on solid surfaces. However, the mechanism, especially its effect on the oxygen migration and electrostatic charging at the oxide heterostructures, is still unclear. Here we explore the electrolyte gating on oxygen-deficient interfaces between SrTiO3 (STO) crystals and LaAlO3 (LAO) overlayer through the measurements of electrical transport, X-ray absorption spectroscopy (XAS) and photoluminescence (PL) spectra. We found that oxygen vacancies (Ovac) were filled selectively and irreversibly after gating due to oxygen electromigration at the amorphous LAO/STO interface, resulting in a reconstruction of its interfacial band structure. Because of the filling of Ovac, the amorphous interface also showed an enhanced electron mobility and quantum oscillation of the conductance. Further, the filling effect could be controlled by the degree of the crystallinity of the LAO overlayer by varying the growth...

##### #9. Zeeman spectroscopy of excitons and hybridization of electronic states in few-layer WSe$_2$, MoSe$_2$ and MoTe$_2$

###### Ashish Arora, Maciej Koperski, Artur Slobodeniuk, Karol Nogajewski, Robert Schmidt, Robert Schneider, Maciej R. Molas, Steffen Michaelis de Vasconcellos, Rudolf Bratschitsch, Marek Potemski

Monolayers and multilayers of semiconducting transition metal dichalcogenides (TMDCs) offer an ideal platform to explore valley-selective physics with promising applications in valleytronics and information processing. Here we manipulate the energetic degeneracy of the $\mathrm{K}^+$ and $\mathrm{K}^-$ valleys in few-layer TMDCs. We perform high-field magneto-reflectance spectroscopy on WSe$_2$, MoSe$_2$, and MoTe$_2$ crystals of thickness from monolayer to the bulk limit under magnetic fields up to 30 T applied perpendicular to the sample plane. Because of a strong spin-layer locking, the ground state A excitons exhibit a monolayer-like valley Zeeman splitting with a negative $g$-factor, whose magnitude increases monotonically when thinning the crystal down from bulk to a monolayer. Using the $\mathbf{k\cdot p}$ calculation, we demonstrate that the observed evolution of $g$-factors for different materials is well accounted for by hybridization of electronic states in the $\mathrm{K}^+$ and $\mathrm{K}^-$ valleys. The mixing of...

##### #10. Landau quantization in coupled Weyl points: a case study of semimetal NbP

###### Y. Jiang, Z. L. Dun, S. Moon, H. D. Zhou, M. Koshino, D. Smirnov, Z. Jiang
Weyl semimetal (WSM) is a newly discovered quantum phase of matter that exhibits topologically protected states characterized by two separated Weyl points with linear dispersion in all directions. Here, via combining theoretical analysis and magneto-infrared spectroscopy of an archetypal Weyl semimetal, niobium phosphide, we demonstrate that the coupling between Weyl points can significantly modify the electronic structure of a WSM and provide a new twist to the protected states. These findings suggest that the coupled Weyl points should be considered as the basis for analysis of realistic WSMs.

Assert is a website where the best academic papers on arXiv (computer science, math, physics), bioRxiv (biology), BITSS (reproducibility), EarthArXiv (earth science), engrXiv (engineering), LawArXiv (law), PsyArXiv (psychology), SocArXiv (social science), and SportRxiv (sport research) bubble to the top each day. Papers are scored (in real-time) based on how verifiable they are (as determined by their Github repos) and how interesting they are (based on Twitter).
2018-11-14 11:28:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5228900909423828, "perplexity": 6542.5827792847285}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741979.10/warc/CC-MAIN-20181114104603-20181114130603-00084.warc.gz"}
https://datascience.stackexchange.com/questions/53919/understanding-layers-in-recurrent-neural-networks-for-nlp
# Understanding Layers in Recurrent Neural Networks for NLP

In convolutional neural networks, we have the notion that inner layers learn fine features like lines and edges, while outer layers learn more complex shapes. Do we have any such understanding for the layers in RNNs (like LSTMs), something like inner layers understanding grammar while outer layers understanding more complete meanings of sentences, assuming that we are using the LSTM for some natural language task like text summarization?

• assuming we are using the LSTM for some natural language task like text summarization – sumit Jun 17 '19 at 4:11

It's not like it just understands grammar. In an LSTM the network tries to preserve the hidden states over time. By doing this it tries to learn long-term dependencies in the language and relationships between words at variable distances. The LSTM does this by using its three famous gates:

1. Forget gate - tries to keep only the important features and relationships over time.
2. Input gate - adds new information to the old cell state at each time step.
3. Output gate - produces the new output by taking into account the old cell state and the output at each time step.

An RNN/LSTM is designed for series-like data (data with time steps, e.g. a sentence) which has dependencies between different parts of the data. In English, some words in a sentence depend on previous words. RNNs/LSTMs were introduced to carry this dependency information, and to ignore the unimportant information, until the end of the sentence. If you use other variants of deep neural networks (e.g. an MLP) on series-like data, the network forgets the dependency information.
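To make the three gates concrete, here is a minimal single-time-step LSTM cell written out with NumPy. The weight shapes and variable names are illustrative only (any deep learning framework, e.g. torch.nn.LSTM, already provides this); the point is just to show how the forget, input and output gates combine the previous cell state with new input.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b hold the stacked parameters for the
    forget (f), input (i), output (o) gates and the candidate cell update (g)."""
    z = W @ x_t + U @ h_prev + b                 # pre-activations, shape (4*hidden,)
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)  # gate activations in (0, 1)
    g = np.tanh(g)                                # candidate cell update
    c_t = f * c_prev + i * g                      # forget old state, add new info
    h_t = o * np.tanh(c_t)                        # output gate shapes the hidden state
    return h_t, c_t

# Tiny example: input size 3, hidden size 2, a length-5 "sentence"
rng = np.random.default_rng(0)
hidden, inp = 2, 3
W = rng.normal(size=(4 * hidden, inp))
U = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
for x_t in rng.normal(size=(5, inp)):
    h, c = lstm_step(x_t, h, c, W, U, b)
print(h)
```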
2020-08-15 05:24:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3207283616065979, "perplexity": 1720.115405243414}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740679.96/warc/CC-MAIN-20200815035250-20200815065250-00348.warc.gz"}
https://www.ademcetinkaya.com/2023/03/bme-black-mountain-energy-ltd.html
Outlook: BLACK MOUNTAIN ENERGY LTD is assigned short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy : Sell Time series to forecast n: 12 Mar 2023 for (n+4 weeks) ## Abstract BLACK MOUNTAIN ENERGY LTD prediction model is evaluated with Multi-Task Learning (ML) and Lasso Regression1,2,3,4 and it is concluded that the BME stock is predictable in the short/long term. According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Sell ## Key Points 1. How useful are statistical predictions? 2. Reaction Function 3. What are the most successful trading algorithms? ## BME Target Price Prediction Modeling Methodology We consider BLACK MOUNTAIN ENERGY LTD Decision Process with Multi-Task Learning (ML) where A is the set of discrete actions of BME stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Lasso Regression)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Multi-Task Learning (ML)) X S(n):→ (n+4 weeks) $\stackrel{\to }{S}=\left({s}_{1},{s}_{2},{s}_{3}\right)$ n:Time series to forecast p:Price signals of BME stock j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? ## BME Stock Forecast (Buy or Sell) for (n+4 weeks) Sample Set: Neural Network Stock/Index: BME BLACK MOUNTAIN ENERGY LTD Time series to forecast n: 12 Mar 2023 for (n+4 weeks) According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Sell X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% ## IFRS Reconciliation Adjustments for BLACK MOUNTAIN ENERGY LTD 1. IFRS 15, issued in May 2014, amended paragraphs 3.1.1, 4.2.1, 5.1.1, 5.2.1, 5.7.6, B3.2.13, B5.7.1, C5 and C42 and deleted paragraph C16 and its related heading. Paragraphs 5.1.3 and 5.7.1A, and a definition to Appendix A, were added. An entity shall apply those amendments when it applies IFRS 15. 2. An entity's business model refers to how an entity manages its financial assets in order to generate cash flows. That is, the entity's business model determines whether cash flows will result from collecting contractual cash flows, selling financial assets or both. Consequently, this assessment is not performed on the basis of scenarios that the entity does not reasonably expect to occur, such as so-called 'worst case' or 'stress case' scenarios. For example, if an entity expects that it will sell a particular portfolio of financial assets only in a stress case scenario, that scenario would not affect the entity's assessment of the business model for those assets if the entity reasonably expects that such a scenario will not occur. 
If cash flows are realised in a way that is different from the entity's expectations at the date that the entity assessed the business model (for example, if the entity sells more or fewer financial assets than it expected when it classified the assets), that does not give rise to a prior period error in the entity's financial statements (see IAS 8 Accounting Policies, Changes in Accounting Estimates and Errors) nor does it change the classification of the remaining financial assets held in that business model (ie those assets that the entity recognised in prior periods and still holds) as long as the entity considered all relevant information that was available at the time that it made the business model assessment.

3. Expected credit losses reflect an entity's own expectations of credit losses. However, when considering all reasonable and supportable information that is available without undue cost or effort in estimating expected credit losses, an entity should also consider observable market information about the credit risk of the particular financial instrument or similar financial instruments.

4. The characteristics of the hedged item, including how and when the hedged item affects profit or loss, also affect the period over which the forward element of a forward contract that hedges a time-period related hedged item is amortised, which is over the period to which the forward element relates. For example, if a forward contract hedges the exposure to variability in three-month interest rates for a three-month period that starts in six months' time, the forward element is amortised during the period that spans months seven to nine.

*International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS.

## Conclusions

BLACK MOUNTAIN ENERGY LTD is assigned short-term Ba1 & long-term Ba1 estimated rating. BLACK MOUNTAIN ENERGY LTD prediction model is evaluated with Multi-Task Learning (ML) and Lasso Regression1,2,3,4 and it is concluded that the BME stock is predictable in the short/long term. According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Sell

### BME BLACK MOUNTAIN ENERGY LTD Financial Analysis*

| Rating | Short-Term | Long-Term Senior |
| --- | --- | --- |
| Outlook* | Ba1 | Ba1 |
| Income Statement | Caa2 | Baa2 |
| Balance Sheet | Caa2 | Baa2 |
| Leverage Ratios | Baa2 | Ba3 |
| Cash Flow | Baa2 | Baa2 |
| Rates of Return and Profitability | Caa2 | Baa2 |

*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company?

### Prediction Confidence Score

Trust metric by Neural Network: 89 out of 100 with 855 signals.
Deep neural networks for estimation and inference: application to causal effects and other semiparametric estimands. arXiv:1809.09953 [econ.EM] 5. Ashley, R. (1983), "On the usefulness of macroeconomic forecasts as inputs to forecasting models," Journal of Forecasting, 2, 211–223. 6. Mikolov T, Yih W, Zweig G. 2013c. Linguistic regularities in continuous space word representations. In Pro- ceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 746–51. New York: Assoc. Comput. Linguist. 7. Greene WH. 2000. Econometric Analysis. Upper Saddle River, N J: Prentice Hall. 4th ed. Frequently Asked QuestionsQ: What is the prediction methodology for BME stock? A: BME stock prediction methodology: We evaluate the prediction models Multi-Task Learning (ML) and Lasso Regression Q: Is BME stock a buy or sell? A: The dominant strategy among neural network is to Sell BME Stock. Q: Is BLACK MOUNTAIN ENERGY LTD stock a good investment? A: The consensus rating for BLACK MOUNTAIN ENERGY LTD is Sell and is assigned short-term Ba1 & long-term Ba1 estimated rating. Q: What is the consensus rating of BME stock? A: The consensus rating for BME is Sell. Q: What is the prediction period for BME stock? A: The prediction period for BME is (n+4 weeks)
2023-03-23 07:09:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4265037477016449, "perplexity": 6427.7727168968295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945030.59/warc/CC-MAIN-20230323065609-20230323095609-00344.warc.gz"}
https://kullabs.com/class-7/compulsory-maths/profit-loss-and-simple-interest/profit-loss-and-simple-interest
## Profit, Loss and Simple Interest

Subject: Compulsory Maths

#### Overview

When the selling price of a good is higher than its cost, it is called profit. Similarly, when the selling price of a good is lower than its cost price, it is called loss. When the C.P. is scaled to Rs. 100 and the profit or loss is calculated on that basis, it is called profit or loss percent. When the marked price (M.P.) of an article is reduced and the article is sold to the customer at the reduced price, the reduction is called the discount. When the M.P. is scaled to Rs. 100 and the discount is calculated on that basis, it is called discount percent. Value added tax is a tax charged on the actual selling price of goods. Simple interest is the interest which is payable on the principal.

#### Profit and Loss

When the selling price of a good is higher than its cost, it is called profit. In this case, profit is calculated as the difference between the selling price and the cost price.

Profit = S.P. − C.P.
S.P. = C.P. + Profit
C.P. = S.P. − Profit

Similarly, when the selling price of a good is lower than its cost price, it is called loss. The loss is calculated as the difference between the cost price and the selling price.

Loss = C.P. − S.P.
C.P. = S.P. + Loss
S.P. = C.P. − Loss

Profit and Loss percent

When the C.P. is taken as Rs. 100 and the profit or loss is calculated on it, it is called profit or loss percent. For example, the cost price of an article is Rs. 250 and it is sold at a profit of Rs. 100. Here,
On C.P. of Rs. 250, profit is Rs. 100.
On C.P. of Rs. 1, profit is Rs. $\frac{100}{250}$
On C.P. of Rs. 100, profit is Rs. $\frac{100}{250}$ × 100 = Rs. 40
∴ On C.P. of Rs. 100, the profit is Rs. 40, so it is called a profit of 40%.

The formulae for profit and loss percent are
1. Profit percent = $\frac{profit}{C.P}$ × 100% or $\frac{S.P - C.P}{C.P}$ × 100%
2. Loss percent = $\frac{Loss}{C.P}$ × 100% or $\frac{C.P - S.P}{C.P}$ × 100%

Calculation of S.P. when C.P. and profit or loss percent are given

In this case, first of all we find the actual profit or actual loss from the C.P.:
1. Actual profit = Profit percent × C.P.
2. Actual loss = Loss percent × C.P.
So we can calculate the S.P. as
1. S.P. = C.P. + Actual profit
2. S.P. = C.P. − Actual loss

For example, C.P. = Rs. 150 and the profit percent is 15%. Find S.P.
Here, Actual profit = 15% of C.P. = $\frac{15}{100}$ × Rs. 150 = Rs. 22.50
Now, S.P. = C.P. + profit = Rs. 150 + Rs. 22.50 = Rs. 172.50

Calculation of C.P. when S.P. and profit or loss percent are given

In this case, the unknown C.P. is taken as a variable such as x. The calculation then proceeds as in the following example: if S.P. = Rs. 250 and the loss percent is 10%, find C.P.
Here, Actual loss = 10% of C.P., or $\frac{10}{100}$ × x = $\frac{x}{10}$
Now, C.P. = S.P. + loss
or, x = Rs. 250 + Rs. $\frac{x}{10}$
or, x − $\frac{x}{10}$ = Rs. 250
or, $\frac{9x}{10}$ = Rs. 250
or, x = $\frac{Rs.250 × 10}{9}$ = Rs. 277.78
∴ C.P. = Rs. 277.78

Discount

When the marked price (M.P.) of an article is reduced and the article is sold to the customer at the reduced price, the reduction is called the discount. For example, the marked price of a book is Rs. 125 and the shopkeeper reduces the marked price by Rs. 25. In this case, S.P. of the book = 125 − 25 = Rs. 100.
1. Thus S.P. = M.P. − Discount
2. And Discount = M.P. − S.P.

Discount Percent

A shopkeeper allows a discount from the marked price (M.P.) of an article.
When the M.P. is taken as Rs. 100 and the discount is calculated on it, it is called discount percent. It can be calculated with the following formulas:
1. Discount percent = $\frac{Discount}{M.P}$ × 100%
2. Discount amount = Discount percent of M.P.

Value Added Tax (VAT)

Value added tax is a tax charged on the actual selling price of goods, so VAT is charged at a certain percent of the S.P.
1. VAT amount = VAT percent of S.P.
2. S.P. with VAT = S.P. + VAT percent of S.P.

Simple Interest

Simple interest is the interest which is payable on the principal. When we deposit money in a bank for a certain time, the bank pays us some additional money under certain conditions. This additional money is called interest.

Calculation of Simple Interest

The following terms appear in the calculation of simple interest:
1. Principal (P)
2. Rate (R)
3. Time (T)
4. Amount (A)

The formula for calculating simple interest is I = $\frac{P.T.R}{100}$
Again, if I = $\frac{P.T.R}{100}$, then P × T × R = I × 100, so
P = $\frac{I × 100}{T × R}$
T = $\frac{I × 100}{P × R}$
R = $\frac{I × 100}{P × T}$
Furthermore, the amount (A) is the sum of the principal (P) and its interest (I):
1. A = P + I
2. P = A − I
3. I = A − P

##### Things to remember

1. Profit = S.P. − C.P.; S.P. = C.P. + Profit; C.P. = S.P. − Profit
2. Loss = C.P. − S.P.; C.P. = S.P. + Loss; S.P. = C.P. − Loss
3. Profit percent = $\frac{profit}{C.P}$ × 100% or $\frac{S.P - C.P}{C.P}$ × 100%
4. Loss percent = $\frac{Loss}{C.P}$ × 100% or $\frac{C.P - S.P}{C.P}$ × 100%
5. Actual profit = Profit percent × C.P.
6. Actual loss = Loss percent × C.P.
7. S.P. = C.P. + Actual profit
8. S.P. = C.P. − Actual loss
9. S.P. = M.P. − Discount
10. Discount = M.P. − S.P.
11. Discount percent = $\frac{Discount}{M.P}$ × 100%
12. Discount amount = Discount percent of M.P.
13. VAT amount = VAT percent of S.P.
14. S.P. with VAT = S.P. + VAT percent of S.P.
15. I = $\frac{P.T.R}{100}$, so P = $\frac{I× 100}{T × R}$, T = $\frac{I × 100}{P × R}$, R = $\frac{I × 100}{P × T}$; furthermore, when the amount (A) is the sum of the principal (P) and its interest (I): A = P + I, P = A − I, I = A − P

##### Questions and Answers

Solution:
Let the required C.P. be Rs x.
Here, Actual loss = 20% of C.P. = $\frac{20}{100}$ × Rs x = $\frac{x}{5}$
Now, C.P. = S.P. + loss
or, x = Rs 240 + Rs $\frac{x}{5}$
or, x − $\frac{x}{5}$ = Rs 240
or, $\frac{4x}{5}$ = Rs 240
or, x = $\frac{5 × Rs 240}{4}$
or, x = Rs 300
So, the required C.P. is Rs 300.

Solution:
Here, C.P. of the watch = Rs 350 and S.P. of the watch = Rs 378
∴ Profit = S.P. − C.P. = Rs 378 − Rs 350 = Rs 28
Now, profit percent = $\frac{profit}{C.P.}$ × 100% = $\frac{28}{350}$ × 100% = 8%
So, the required profit percent is 8%.

Solution:
Here, the remaining number of glass tumblers = 100 − 10 = 90
C.P. of 100 glass tumblers = 100 × Rs 15 = Rs 1500
S.P. of 90 glass tumblers = 90 × Rs 16 = Rs 1440
∴ Loss = C.P. − S.P. = Rs 1500 − Rs 1440 = Rs 60
Now, loss percent = $\frac{loss}{C.P}$ × 100% = $\frac{Rs 60}{Rs 1500}$ × 100% = 4%
So, his loss percentage is 4%.

Solution:
Here, S.P. of the radio = Rs 336 and profit percent = 5%.
Let the C.P. of the radio be Rs x.
Now, Actual profit = 5% of C.P. = $\frac{5}{100}$ × Rs x = Rs $\frac{x}{20}$
Again,
C.P. = S.P. − profit
or, x = Rs 336 − $\frac{x}{20}$
or, x + $\frac{x}{20}$ = Rs 336
or, $\frac{21x}{20}$ = Rs 336
or, x = $\frac{20 × Rs 336}{21}$
or, x = 320
So, he purchased the radio for Rs 320.

Solution:
Here, M.P. of the article = Rs 450 and S.P. of the article = Rs 405
∴ Discount = M.P. − S.P. = Rs 450 − Rs 405 = Rs 45
Now, discount percentage = $\frac{Discount}{M.P}$ × 100% = $\frac{Rs 45}{Rs 450}$ × 100% = 10%
So, the required discount percentage is 10%.

Solution:
Here, M.P. of the radio = Rs 960 and discount percent = 5%
∴ Discount amount = 5% of M.P. = $\frac{5}{100}$ × Rs 960 = Rs 48

Solution:
Here, M.P. of the camera = Rs 1800, discount percent = 10% and VAT percent = 10%
Now, discount amount = 10% of M.P. = $\frac{10}{100}$ × Rs 1800 = Rs 180
∴ S.P. = M.P. − Discount = Rs 1800 − Rs 180 = Rs 1620
Again, VAT amount = 10% of S.P. = $\frac{10}{100}$ × Rs 1620 = Rs 162
∴ S.P. with VAT = S.P. + VAT amount = Rs 1620 + Rs 162 = Rs 1782
So, the customer pays Rs 1782.

Solution:
Here, C.P. of the bicycle = Rs 1200
∴ M.P. of the bicycle = Rs 1200 + 20% of Rs 1200 = Rs 1200 + $\frac{20}{100}$ × Rs 1200 = Rs 1200 + Rs 240 = Rs 1440
Now, discount amount = 10% of M.P. = $\frac{10}{100}$ × Rs 1440 = Rs 144
∴ S.P. = M.P. − Discount amount = Rs 1440 − Rs 144 = Rs 1296
Again, VAT amount = 10% of S.P. = $\frac{10}{100}$ × Rs 1296 = Rs 129.60
∴ S.P. with VAT = S.P. + VAT amount = Rs 1296 + Rs 129.60 = Rs 1425.60
So, the customer should pay Rs 1425.60.

Solution:
Here, Principal (P) = Rs 2500, Rate (R) = 7% per year and Time (T) = 5 years
Now, Interest (I) = $\frac{P × T × R}{100}$ = Rs $\frac{2500 × 5 × 7}{100}$ = Rs 875
Again, Amount (A) = P + I = Rs 2500 + Rs 875 = Rs 3375
So, she received an amount of Rs 3375.

Solution:
Here, Principal (P) = Rs 3600, Amount (A) = Rs 5328 and Rate (R) = 12% per year
Now, Interest (I) = A − P = Rs 5328 − Rs 3600 = Rs 1728
Again, Time (T) = $\frac{I × 100}{P × R}$ = $\frac{1728 × 100}{3600 × 12}$ = 4 years
So, the required time is 4 years.
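For readers who want to check such calculations quickly, here is a small Python sketch of the formulas used above (profit percent, discounted price with VAT, and simple interest). The function names are ours, not part of the lesson; the printed values reproduce the watch, camera and deposit examples.

```python
def profit_percent(cp, sp):
    return (sp - cp) / cp * 100

def price_with_vat(mp, discount_pct, vat_pct):
    sp = mp - mp * discount_pct / 100      # selling price after discount
    return sp + sp * vat_pct / 100         # add VAT on the selling price

def simple_interest(p, t, r):
    return p * t * r / 100

print(profit_percent(350, 378))            # 8.0   (watch example)
print(price_with_vat(1800, 10, 10))        # 1782.0 (camera example)
i = simple_interest(2500, 5, 7)
print(i, 2500 + i)                         # 875.0 3375.0 (deposit example)
```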
2020-09-27 20:54:04
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5594552159309387, "perplexity": 8341.089822927546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401578485.67/warc/CC-MAIN-20200927183616-20200927213616-00598.warc.gz"}
https://electronics.stackexchange.com/questions/85220/minimum-clock-cycles-needed
# minimum clock cycles needed?

The instruction call Rn,sub is a two-word instruction. Assuming that the PC is incremented during the fetch cycle of the first word of the instruction, its register transfer operation is

Rn <= PC+1; PC <= M[PC];

Someone please help me calculate the minimum clock cycles needed during the execution cycle of the instruction. I am a student of computer science and not very good with microprocessors; please explain how to count the minimum clock cycles?

Should it not be 5 in the 8085?
2. Cycle to increment PC (PC+1)
3. Transferring it to Rn from the accumulator.
4. 2 cycles for the memory read operation.

• It'd be worth mentioning the type of architecture. The usual way isn't really to calculate it; you'd normally look it up in the architecture guide for the particular processor. If it's an assignment of some sort maybe you have a block diagram of the CPU you could add? – PeterJ Oct 13 '13 at 4:31
• minimum is asked here, so what will be the minimum with any possible architecture? And also what will it be if it is just the 8085? – user1766481 Oct 13 '13 at 4:35
• If you allow arbitrary architectures, the answer for the minimum clock cycles for any operation is always 1, assuming an architecture that has an assembly opcode that directly corresponds to the two lines of code you present and can execute that operation in a single cycle. Now, for real-world architectures, the answer is likely very different. – Connor Wolf Oct 13 '13 at 9:45

To answer such a question a lot more context must be given and assumptions must be made explicit. Just a few issues:

1) The call method you describe here is typical for ARM/Cortex and some lesser-known architectures. An 8085 uses the more common stack-based method.
2) Most architectures have dedicated hardware and data paths for incrementing the PC, so the ALU does not need to be involved, and it can be done in parallel with another operation.
3) An 8085 is an 8-bit architecture with a 16-bit address, hence getting an address from memory involves two memory accesses (with accompanying PC increments).
4) You seem to assume that a memory access takes 2 internal cycles worth of time. IIRC it was 1 for an 8085 (but I might be wrong), and it is often many many more for modern processors.
5) In step 3) you mention an accumulator; you probably mean the ALU result register, which on most register-based architectures is not a programmer-visible register.
6) If storing the result in Rn takes a cycle, it seems reasonable to assume that storing the destination address in the PC also takes a cycle.
2020-06-06 20:55:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4573110342025757, "perplexity": 1226.4969206958335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348519531.94/warc/CC-MAIN-20200606190934-20200606220934-00241.warc.gz"}
https://answerie.com/a-student-makes-and-sells-necklaces-at-the-beach-during-the-summer-months-the-material-for-each-necklace-costs-her-6-and-she-has-been-selling-about-24-per-day-at-10-each-she-has-been-wondering-wheth-c/
# Question: Suppose IBM pays a dividend D on their shares S at time τ. Show that S(τ+) = S(τ−) − D. Actually, to be precise, τ should be what is called the ex-dividend date. You should again argue your solution from the assumption of no arbitrage. S(τ+) means the value of S just after τ, and S(τ−) the value just before. – Free Chegg Question Answer

`Answer:`

We know that the change of the stock price (S) after a dividend (D) is declared is
$\delta S = \frac{D (1-T_D)}{1-T_{CG}}$ ………….(1)
where $T_D$ is the tax rate payable on dividends and $T_{CG}$ is the tax rate payable on capital gains.
Under the no-arbitrage assumption there is no tax payable, i.e. $T_D = 0$ and $T_{CG} = 0$, so from (1) we get $\delta S = D$.
But $\delta S$ is the price before the dividend is paid (the value before the ex-dividend date) minus the price after the dividend is paid (the value after the ex-dividend date):
$\delta S = S_{(\tau-)} - S_{(\tau+)}$
So $D = S_{(\tau-)} - S_{(\tau+)}$, and therefore $S_{(\tau+)} = S_{(\tau-)} - D$.
2021-10-21 18:53:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6641507148742676, "perplexity": 4214.975225684765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00133.warc.gz"}
https://www.physicsforums.com/threads/decomposition-of-a-vector.879673/
# I Decomposition Of A Vector

1. Jul 22, 2016

### dman12

Hello, I am trying to figure out how to best decompose a vector into a best-fit linear superposition of other, given vectors. For instance, is there a way of finding the best linear sum of:
(3,5,7,0,1)
(0,0,4,5,7)
(8,9,2,0,4)
that most closely gives you (1,2,3,4,5)?
My problem contains more, higher-dimensional vectors, so if there is a general statistical way of doing a decomposition like this that would be great. Thanks!

2. Jul 22, 2016

### blue_leaf77

You can use a least-squares solution. First, realize that you can express a linear combination of $n$ $m\times 1$ column vectors as a matrix product between a matrix formed by placing those $n$ columns next to each other and an $n \times 1$ column vector consisting of the coefficients of each vector in the sum. Denote the first matrix as $A$ and the second (column) one as $x$; you are to find $x$ such that $||Ax-b||$ is minimized, where $b$ is the $m \times 1$ column vector you want to fit to.

3. Jul 22, 2016

### BvU

My hunch was that the three vectors span a 3D space in which you can express the part of (1,2,3,4,5) that lies in that space exactly (by projections). For the two other dimensions there's nothing you can do. Am I deceiving myself?

4. Jul 25, 2016

### chiro

Hey dman12. This is equivalent to solving the linear system in RREF. Understanding this process of row reduction and why it works will help you understand a lot of linear algebra in a practical capacity.
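A minimal numerical sketch of blue_leaf77's suggestion (my addition, assuming NumPy is available), using the three vectors from the question as the columns of $A$; `np.linalg.lstsq` returns the coefficient vector minimizing $||Ax-b||$:

```python
import numpy as np

# Columns of A are the given vectors; b is the target vector.
A = np.array([[3, 0, 8],
              [5, 0, 9],
              [7, 4, 2],
              [0, 5, 0],
              [1, 7, 4]], dtype=float)
b = np.array([1, 2, 3, 4, 5], dtype=float)

coeffs, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)

print("best-fit coefficients:", coeffs)            # weight of each given vector
print("best approximation:   ", A @ coeffs)
print("residual norm:        ", np.linalg.norm(A @ coeffs - b))
```

The nonzero residual is exactly BvU's point: the three vectors span only a 3-dimensional subspace of R^5, and the least-squares solution returns the projection of (1,2,3,4,5) onto that subspace.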
2018-05-27 20:19:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7374864816665649, "perplexity": 336.8987034589914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794870082.90/warc/CC-MAIN-20180527190420-20180527210420-00474.warc.gz"}
https://ham.stackexchange.com/questions/20043/link-budget-are-there-any-restrictions-in-the-formula
# Link Budget - Are there any restrictions in the formula?

Depending on the parameters used in the link budget formula it seems that one may come up with a received power (dBm) larger than the transmitter output power (dBm). For example:

Frequency: 50 MHz
Distance: 100 m
Free-Space Path Loss (FSPL): 46.43 dB
Tx Power: 10 dBm
Tx Gain: 25 dBi
Rx Gain: 25 dBi

Link Budget (dBm) = Tx Power + Tx Gain - FSPL + Rx Gain

Result: 13.57 dBm, which is higher than Tx Power and should not be possible. (EIRP - Conservation of energy) I could not find any restriction on the usage of the link budget formula to avoid "impossible" results. Am I missing something? Thanks,

• where does tx gain come from? is it possible to focus the light of a candle into a point that shines brighter than the candle itself? i'm guessing tx gain comes from the antenna, which "focuses" some of the energy (25 dBi is a lot). Oct 8 '21 at 19:45
• Yes your assumption is correct, "TX Gain" is the gain of the Tx antenna (dBi). Regarding the candle example, yes it would be brighter if we focus it, the "intensity" (energy / area) would be higher, but the total energy would be the same. Oct 8 '21 at 20:59
• I edited your question to clarify that "FSPL" is free-space path loss, and linked to the Wikipedia article. I hope you don't mind! Oct 9 '21 at 19:39

The Antenna Gain in the formula is the far field gain - at an infinite distance, or at least far enough that the antenna is indistinguishable from an isotropic source or a dipole. When you're close to the antenna, gain is not really a relevant concept; we would use something called Antenna Factor. AF is the field strength in V/m for each 1 V applied to the antenna terminals. This is valid everywhere. Far away from the antenna, AF can simply be calculated from gain and distance. Near to the antenna, gain isn't a relevant concept.

We use a rule of thumb for when you're in the far field: $$\frac{2D^2}{\lambda}$$ where D is the width of the antenna. This is the position at which fields from all parts of the antenna are within 45 degrees of phase from each other, i.e. all parts are roughly at the same distance, as they would be at an infinite distance.

So the flaw in your calculation is assuming that an antenna of 25 dBi, at 50 MHz, has a far field of only 50 metres. Here are the actual numbers: $$\text{Effective area} = \frac{G\lambda^2}{4\pi} = 906 \text{ m}^2$$ which, assuming a circular aperture, is a diameter of $$34 \text{ m}$$ (and, by the way, requires a yagi length of about $$45\lambda$$ or $$270 \text{ m}$$). The far field distance from this is $$385 \text{ m}$$. This applies to both ends of the link, so if you have two 25 dBi antennas, you'll need to put them 750 m apart before the free space path loss equation starts to make sense.

Another way I like to look at Gain and EIRP of low and high gain antennas is this: From far away,
• a 0 dBi antenna fed with 316 W, and
• a 25 dBi antenna fed with 1 W,
look exactly the same. The power density is just $$\text{EIRP}/(4\pi r^2)$$. But close to the antenna something different happens.
• The power density of the isotropic antenna gets stronger and stronger as you move closer, until you're 3 m away and the 316 Watts starts to warm you up.
• But the power density of the high gain antenna stops going up. Imagine standing right in front of this 34 m diameter horn antenna. The 1 watt power is evenly distributed over the whole aperture. So the power density never gets to more than $$1/900 \text{ W/m}^2$$.
And this region of low density lasts for several times the diameter, so putting a receive antenna too close doesn't capture any more power. • Awesome! You helped me to understand why not taking the far field in consideration is a problem and also the energy conservation. Oct 8 '21 at 22:28 • @377ohms it's roughly the same reason why you can't use the inverse square law to conclude that it's infinitely bright at the center of a light bulb :) Oct 8 '21 at 23:35
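For readers who want to reproduce the numbers, here is a small script (my own, not part of the thread) computing the free-space path loss and the naive link budget for the question's parameters, together with the far-field quantities from the answer:

```python
import math

c = 3e8        # speed of light, m/s
f = 50e6       # frequency, Hz
d = 100.0      # link distance, m
lam = c / f    # wavelength, 6 m

# Free-space path loss and the naive link budget from the question
fspl_db = 20 * math.log10(4 * math.pi * d / lam)        # ~46.4 dB
rx_dbm = 10 + 25 + 25 - fspl_db                          # ~13.6 dBm
print(f"FSPL = {fspl_db:.2f} dB, naive Rx power = {rx_dbm:.2f} dBm")

# Far-field sanity checks from the accepted answer (25 dBi antenna)
g_lin = 10 ** (25 / 10)                                  # gain as a linear ratio, ~316
a_eff = g_lin * lam**2 / (4 * math.pi)                   # effective area, ~906 m^2
diameter = 2 * math.sqrt(a_eff / math.pi)                # equivalent aperture, ~34 m
far_field = 2 * diameter**2 / lam                        # far-field distance, ~385 m
print(f"A_eff = {a_eff:.0f} m^2, D = {diameter:.1f} m, far field beyond {far_field:.0f} m")
```

At 100 m the two hypothetical 25 dBi antennas are deep in each other's near field, so the free-space formula (and hence the 13.57 dBm result) simply does not apply.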
2022-01-24 04:28:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7566841244697571, "perplexity": 723.2753434709356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304471.99/warc/CC-MAIN-20220124023407-20220124053407-00261.warc.gz"}
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=tvt&paperid=169&option_lang=eng
TVT, 2015, Volume 53, Issue 5, Pages 676–682 (Mi tvt169)

Thermophysical Properties of Materials

Correlation of temperature dependences of thermal expansion and the heat capacity of refractory metal up to the melting point: Tungsten

V. Yu. Bodryakov, Urals State Pedagogical University

Abstract: In continuing the series of publications started by the article about molybdenum, a detailed study of the correlation between the volume thermal expansion coefficient, $\beta(T)$, and the heat capacity, $C(T)$, of another refractory metal, tungsten, is carried out. It is shown that a distinct correlation of $\beta(C)$ takes place not only at low temperatures, where it is linear and is known as the Grüneisen law, but also within a much wider temperature region, up to the melting point of the metal. A significant deviation from the low-temperature behavior of the linear dependence of $\beta(C)$ occurs when the heat capacity reaches its classical Dulong and Petit limit, $3R$. The concept of the temperature dependence of the differential Grüneisen parameter, $\gamma' \sim (\partial\beta/\partial C)$, is introduced, and its evaluation is proposed.

DOI: https://doi.org/10.7868/S0040364415040067
English version: High Temperature, 2015, 53:5, 643–648
UDC: 536.416; 536.631; 536.713
Accepted: 05.11.2014

Citation: V. Yu. Bodryakov, “Correlation of temperature dependences of thermal expansion and the heat capacity of refractory metal up to the melting point: Tungsten”, TVT, 53:5 (2015), 676–682; High Temperature, 53:5 (2015), 643–648

This publication is cited in the following articles:
1. V. Yu. Bodryakov, “Correlation between temperature dependences of thermal expansivity and heat capacity up to the melting point of tantalum”, High Temperature, 54:3 (2016), 316–321
2. V. Yu. Bodryakov, “Joint study of temperature dependences of thermal expansion and heat capacity of solid beryllium”, High Temperature, 56:2 (2018), 177–183
3. J. W. Arblaster, “Thermodynamic properties of tungsten”, J. Phase Equilib. Diffus., 39:6 (2018), 891–907
2020-04-03 03:48:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3110351264476776, "perplexity": 6402.55515212115}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510287.30/warc/CC-MAIN-20200403030659-20200403060659-00412.warc.gz"}
https://www.physicsforums.com/threads/angular-ke-and-momentum-question.839/
# Angular KE and momentum question

Rockazella
Take for example the angular momentum experiment where you sit in a rotating chair and spin yourself while holding weights. If you bring the weights in closer, your angular velocity increases, but your angular momentum is conserved. Is angular kinetic energy also conserved with angular momentum?

No, energy is not conserved because of the work of internal forces in the system... Angular momentum is conserved because of the absence of external torques.

KLscilevothma
The extra kinetic energy is due to the work done by the person who sits on the rotating chair to pull the weights in closer.

hmm we never really talked about rotational energy in my physics class. Would it just be (1/2)Iω²? btw. seeing linear momentum is mv and KE = 1/2mv^2, is it just coincidence that KE is the antiderivative of momentum or is it supposed to be that way? I've noted this a few times but I've never had a professor get up and say "Kinetic energy is the antiderivative of momentum" so i was just wondering.

Rockazella
I guess I just don't understand how that work done pulling the weights closer to you would go towards increasing KE. Would anyone mind laying out the problem and solving? It just doesn't make sense that momentum can stay the same while energy changes.

Last edited:

Originally posted by Rockazella
I guess I just don't understand how that work done pulling the weights closer to you would go towards increasing KE. Would anyone mind laying out the problem and solving? It just doesn't make sense that momentum can stay the same while energy changes.

Well it makes sense according to classical physics, but if you don't like it you can make your own... you must just be self-consistent and convince a few others ;) Besides teasing you, let us put down a very simple system that will show you how the whole thing works. Something like two point-like masses held together by a spring, rotating around an axis orthogonal to the spring with constant angular velocity. If the spring is initially extended beyond its rest length and kept so by a thin rod, we will have a system where we can easily describe the internal forces by a potential energy. If at some point the thin rod breaks, total energy will be conserved (no other force is involved in the system besides the conservative elastic one); from that time on the evolution of the system is dictated by conservation of angular momentum (no external moment of force is present) and of total energy. While the two masses are coming together there is an increase of angular velocity which implies an increase of kinetic energy which is compensated (to keep total energy constant) by a reduction of potential energy. During the expansion the opposite happens. In the case of a human bringing two weights closer it is the chemical energy stored in some molecules that is converted to kinetic energy (and to heat), while conserving angular momentum.

actually there is no such thing as Kinetic Energy or Potential Energy. there is only one energy which is the vector product of the force vector and the so called X vector. this X vector is actually the distance of the point where the force is acting and the equilibrium point (the point where the force virtually reduces to zero). important is that the energy defined as above is always constant if the system is closed. i didn't think/comment of/on your wonder here but give me some finite time and i'll come up with something.

Rockazella
Thanks dg, I think it just clicked in my mind. It's starting to make more sense now.
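A quick numeric check of the bookkeeping discussed above (my own sketch, with made-up values): a point mass pulled inward on a frictionless turntable keeps its angular momentum L = mωr², its angular velocity and kinetic energy rise, and the kinetic-energy gain equals the work done pulling against the centripetal force.

```python
def ke(L, m, r):
    """Rotational kinetic energy of a point mass with angular momentum L at radius r."""
    return L**2 / (2 * m * r**2)

m, r1, r2 = 2.0, 0.8, 0.4          # kg, m (made-up values)
omega1 = 3.0                        # rad/s before pulling the weight in
L = m * omega1 * r1**2              # angular momentum, conserved (no external torque)

omega2 = L / (m * r2**2)            # angular velocity after pulling in
dKE = ke(L, m, r2) - ke(L, m, r1)

# Work done against the centripetal force F = L^2 / (m r^3) while moving from r1 to r2
W = L**2 / (2 * m) * (1 / r2**2 - 1 / r1**2)

print(f"omega: {omega1} -> {omega2} rad/s")             # 3 -> 12 rad/s
print(f"KE gain {dKE:.2f} J = work done {W:.2f} J")     # both 17.28 J
```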
i'll assume this discussion is not over and i'll give you my approach: there is only one energy in the system and it is E=FxX=const X is equilibrum vector starting in the position of the weight and ending in the normal projection of the same position on the axis of rotation.Now the explanation is simple as A,B,C... pulling the weight closer to the axis of rotation declines X but since FxX=E=const increases F.X is normal to the trajectory of the weight and F is parallel(tangential).But it also has to be FxdX=F'xdX' left are the old values and right are the new values.that's why the tangential displacement is larger closer to the axis of rotation. when you increase X then F drops and dX (the old)> dX' (the new ). The only thing conserved here is the energy. I don't know about the momentum cause at the moment like everything else in physics i'm considering time being vector. Originally posted by dock actually there is no such thing as Kinetic Energy or Potential Energy. there is only one energy which is vector product of the force vector and the so called X vector.this X vector is actually the distance of the point where the force is acting and the equilibrum point(the point where the force virtually reduce to zero).important is that the energy defined as above is always constant if the system is closed. i din't tought/comment of/on your wonder here but give me some finite time and i'll come up with something. I'm curious to know how energy can be represented as a vector Originally posted by Claude Bile I'm curious to know how energy can be represented as a vector here is how: assume you have an weight hanging on a spring in equilibrium point. then the resulting force is zero thus no movement. now pull the weight down displacing it for some X value. now the resulting force is again zero but when you release the weight the active force is same as but oposite with the one you have invested to cause the initial displacement. there are two scenarios availabe to consider: (1st scenario) assuming that at the moment you release the weight the action force is Fmax. in that moment the weight is in equilibrium point and the equilibrium distance is zero. the action force is causing displacement in same direction of the the force cause the job done by the action force is always positive. as you increase the equilibrium distance you decline the force.when the force reduces to zero then the equlibrium distance is maximum. the force onwards change the sing and becomes negative therefore and the equilibrium distance does the same too. when the force reaches value -Fmax the the equilibrium distance drops to zero and so on. 
simbolically it looks like this: (1 moment)F=Fmax and X=0 (2 moment)F>0 and F->0 and X > 0 and X->Xmax (3 moment)F<0 and F->0 and X < 0 and X->-Xmax (4 moment)F=-Fmax and X=0 (5 moment)F<0 and F->0 and X < 0 and X->-Xmax (6 moment)F>0 and F->0 and X > 0 and X->Xmax (7 moment)same as moment 1 and everything from the begining thusF=Fmax * cos(fi1) and X=Xmax * sin(fi1) (2nd scenario)the second scenario is when you assume that at the moment of release of the weight X=Xmax then simbolically it looks like this: (1 moment)X=Xmax and F=0 (2 moment)X>0 and X->0 and F > 0 and F->Fmax (3 moment)X<0 and X->0 and F < 0 and F->-Fmax (4 moment)X=-Xmax and F=0 (5 moment)X<0 and X->0 and F < 0 and F->-Fmax (6 moment)X>0 and X->0 and F > 0 and F->Fmax (7 moment)same as moment 1 and everything from the begining thusX=Xmax * cos(-fi1) and F=Fmax * sin(-fi1) what i have found is that both scenarios are corect and that they are only the same story from different aspect. the first nlue tush is for the x coordinates and the 2nd tush are the y coordinates while the z coordinates both for X and F are zeros. but therefore the x and y coordinates of energy are zeros too.the z coordinate of energy then will be E(x)=Fmax * sin(-fi1)*0+Xmax * sin(fi1)*0=0 E(y)=Fmax * cos(fi1)*0+Xmax * sin(-fi1)*0=0 E(z)=Fmax * cos(fi1)Xmax * cos(-fi1)-Fmax * sin(-fi1)Xmax * cos(-fi1) E(z)=Fmax*Xmax*(cos^2(fi1)+sin^2(fi1))=Fmax*Xmax this equation comes from vector procut of two vectors.in the end i conclude that energy is actually a vector product of force vector and the equilibrium vector. STICK WITH ME YOU'LL LEARN MUCH MORE!! STAii Originally posted by dock actually there is no such thing as Kinetic Energy or Potential Energy. there is only one energy which is vector product of the force vector and the so called X vector.this X vector is actually the distance of the point where the force is acting and the equilibrum point(the point where the force virtually reduce to zero).important is that the energy defined as above is always constant if the system is closed. i din't tought/comment of/on your wonder here but give me some finite time and i'll come up with something. [/B] Not every system can have the 'point of equilibrum' that you are defining. Take for example, a guy is sliding a box on a rough surface, where is the point of equilibrum ? But we can say that all the laws (as far as i know so far) of energy can be derived from the definition of Work. Last edited: Originally posted by STAii Not every system can have the 'point of equilibrum' that you are defining. Take for example, a guy is sliding a box on a rough surface, where is the point of equilibrum ? But we can say that all the laws (as far as i know so far) of energy can be derived from the definition of Work. if you know the energy vector and the force vector then since E=FxX => Fx(ExF)=Fx(X) => X=ExF where X is the equilibrium vector. hey energy being vector is new to me too. redirecting...:
2022-10-04 06:32:39
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.831551730632782, "perplexity": 754.9454976041231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00402.warc.gz"}
http://estebanmoro.org/page/13/
# Posts

25 August

## Emergence of pulled fronts in fermionic microscopic particle models

Authors: Esteban Moro
Phys. Rev. E, Rapid Communication, 68, 025102 (2003). LINK | arXiv

Abstract: We study the emergence and dynamics of pulled fronts described by the Fisher-Kolmogorov-Petrovsky-Piscounov (FKPP) equation in the microscopic reaction-diffusion process $$A\leftrightarrow A+A$$ on the lattice when only one particle is allowed per site. To this end we identify the parameter that controls the strength of internal fluctuations in this model, namely, the number of particles per correlated volume.

11 March

## Defect formation in the Swift-Hohenberg equation

Authors: Tobias Galla and Esteban Moro
Journal: Phys. Rev. E, Rapid Communication 67, 035101 (2003). LINK | arXiv

Abstract: We study numerically and analytically the dynamics of defect formation during a finite-time quench of the two-dimensional Swift-Hohenberg (SH) model of Rayleigh-Bénard convection. We find that the Kibble-Zurek picture of defect formation can be applied to describe the density of defects produced during the quench. Our study reveals the relevance of two factors: the effect of local variations of the striped patterns within defect-free domains and the presence of both pointlike and extended defects.

14 November

## Internal Fluctuations Effects on Fisher Waves

Authors: Esteban Moro
Journal: Physical Review Letters 87, 238303 (2001) LINK | arXiv

Abstract: We study the diffusion-limited reaction $$A \leftrightarrow A+A$$ in various spatial dimensions to observe the effect of internal fluctuations on the interface between stable and unstable phases. We find that, similar to what has been observed in d = 1 dimensions, internal fluctuations modify the mean-field predictions for this process, which is given by Fisher's reaction-diffusion equation.
2019-01-20 12:40:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5923716425895691, "perplexity": 1968.6155359998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583716358.66/warc/CC-MAIN-20190120123138-20190120145138-00040.warc.gz"}
http://mathhelpforum.com/algebra/184371-value-k-number-root.html
# Thread: value of k and number of root

1. ## value of k and number of root

how is k solved when 4x^2 + kx + 6 = 0... i really get so hard to solve when there is already k in the equation. kindly help me. Thanks a lot so much.

2. ## Re: value of k and number of root

As is you cannot solve for k. To find k you'd need to know how many real roots the equation has and then use the discriminant as appropriate. Alternatively you can find x in terms of k using your favourite method

3. ## Re: value of k and number of root

Originally Posted by rcs
how is k solved when 4x^2 + kx + 6 = 0... i really get so hard to solve when there is already k in the equation. kindly help me. Thanks a lot so much.

what the discriminant tells you ...
$b^2 - 4ac < 0$ no real roots
$b^2 - 4ac = 0$ one real root of multiplicity two
$b^2 - 4ac > 0$ two real roots
so, if you're looking for the value of $k$ that provides at least one real root ...
$k^2 - 4(4)(6) \ge 0$
solve for $k$

4. ## Re: value of k and number of root

thanks... is this correct sir: k > = +- 4 (sqrt 6), is this the root now sir?

5. ## Re: value of k and number of root

Almost: $|k| \geq 4\sqrt{6}$ is true. It follows that $k \geq 4\sqrt6$ which is one of your solutions yet we need to look at the other root: $-k \geq 4\sqrt{6} \longrightarrow \ k \leq -4\sqrt6$

6. ## Re: value of k and number of root

this one is not so bad. if we use the idea of comparing coefficients as has been shown you can think like this. $(A_1x+B_1)(A_2x+B_2)=A_1A_2x^2+A_1B_2x+A_2B_1x+B_1 B_2$ now you have $4x^2+kx+6$ all positive terms. the coefficients are 4 and 6. which factor to $A_1A_2=4=2\cdot 2\cdot 1$ and $B_1B_2=6=3\cdot 2\cdot 1$. so you have 2 factors of each coefficient just like the general form. now just substitute all the information into the general factors. $(A_1x+B_1)(A_2x+B_2) \rightarrow (2x+2)(2x+3) \rightarrow 4x^2+10x+6$ $k=10$ now that you know k solve your linear equations to find the roots.

7. ## Re: value of k and number of root

skoker... im kind of confused... how can you relate k = 10 from the answer given by e^(i*pi) above? how is it possible.. i think they are not the same thanks

8. ## Re: value of k and number of root

I think I get it right no? k=10 roots= -1 , -3/2. I'm not sure what kind of business they have gotten into up there...

9. ## Re: value of k and number of root

here is an extra thing... to check k I do this. $b^2-4ac>0 \rightarrow 10^2-4\cdot 4\cdot 6>0 \rightarrow 4>0$ 2 real roots. now if we do like before we get this. $k^2-4ac>0 \rightarrow k^2-4\cdot 4 \cdot 6>0 \rightarrow k^2-96>0 \rightarrow k^2>96 \rightarrow k>\pm 4 \sqrt{6}$ now we check with k and decimals. $\pm 4 \sqrt{6}=\pm 9.7980 \;,\; k=10$ $10>9.7980 \;,\; 10>-9.7980$ the inequalities are only telling you what value k is greater than or less than, not k itself.

10. ## Re: value of k and number of root

To clarify, mine and Skeeter's answers were stating the values of k for which there would be at least one real root. Skoker has given the actual value of k. For what it's worth I'm confused as to how $(A_1x + B_1)(A_2x + B_2)$ becomes $(2x+2)(2x+3)$

11. ## Re: value of k and number of root

Originally Posted by e^(i*pi)
I'm confused as to how $(A_1x + B_1)(A_2x + B_2)$ becomes $(2x+2)(2x+3)$

I think you guys over think on this one. for quadratic equations this is true. all partial products of factors.
$(A_1x+B_1)(A_2x+B_2)=A_1A_2x^2+A_1B_2x+A_2B_1x+B_1 B_2$ and we have the rule $ax^2+bx+c \;,\; a_1c_1+a_2c_2=b$ in this case a and c only have 2 factors each so they fall nicely into the LHS but you could use the same method for any ac coefficients that factor. $A_1=2 , A_2=2 , B_1=2 , B_2=3$
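A quick symbolic check of the two results in this thread (my own addition, assuming SymPy is available): the values of k giving at least one real root, and the roots for skoker's particular choice k = 10.

```python
from sympy import symbols, solve

x, k = symbols('x k', real=True)

# k for which 4x^2 + kx + 6 = 0 has at least one real root: discriminant k^2 - 4*4*6 >= 0
print(solve(k**2 - 4*4*6 >= 0, k))
# -> (k <= -4*sqrt(6)) or (k >= 4*sqrt(6)), i.e. |k| >= 4*sqrt(6) ~ 9.798

# skoker's particular case k = 10 (post 6): the two real roots
print(solve(4*x**2 + 10*x + 6, x))   # [-3/2, -1]
```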
2017-11-22 01:55:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6850537061691284, "perplexity": 835.4489684726934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806447.28/warc/CC-MAIN-20171122012409-20171122032409-00034.warc.gz"}
http://math.stackexchange.com/questions/290536/a-question-about-poisson-and-binomial-distributions
# A question about Poisson and Binomial distributions

I'm struggling with understanding why the following statement is true: Let $X$ be a Random Variable with Poisson distribution. Let $Z$ be a Random Variable independent from $X$, whose distribution is $P(Z=0.9)=0.2=1-P(Z=0.6)$. Let $Y$ be a Random Variable such that $Y \mid (X=x, Z=z)\sim \text{Binom}(x,z)$. Then, given $Z=0.9$, $Y\sim \text{Poisson}$. I'm told that it is a consequence of some general fact about split Poisson variables but I couldn't make much of that fact, or wasn't able to see why it's true by myself. I don't really know how to go about this so any help would be greatly appreciated. Thanks!

To prove that the distribution of $Y$ conditional on $Z$ is Poisson, you could do the following
\begin{eqnarray*} \Pr \left( Y = y|Z = z \right) & = & \sum_{x = y}^{\infty} \Pr \left( Y = y|Z = z, X = x \right) \Pr \left[ X = x \right]\\ & = & \sum_{x = y}^{\infty} \left( \begin{array}{c} x\\ y \end{array} \right) z^y \left( 1 - z \right)^{x - y} \frac{\lambda^x}{x!} e^{- \lambda}\\ & = & e^{- \lambda} z^y \lambda^y \frac{1}{y!} \sum_{x = y}^{\infty} \frac{\left\{ \left( 1 - z \right) \lambda \right\}^{x - y}}{(x - y) !}\\ & = & e^{- \lambda} z^y \lambda^y \frac{1}{y!} \underbrace{\sum_{w = 0}^{\infty} \frac{\left\{ \left( 1 - z \right) \lambda \right\}^w}{w!}}_{= e^{\left( 1 - z \right) \lambda}}\\ & = & e^{- z \lambda} \frac{\left( z \lambda \right)^y}{y!} \end{eqnarray*}
The first line comes from $\Pr \left( Y = y | Z = z, X = x \right) = 0$ for $y > x$ (one thousand thanks to Stefan Hansen for pointing that out). The second line comes from inputting the formulas for the binomial and Poisson probabilities, and the fourth line comes from applying the change of variable $w = x - y$. By the last line we conclude that $Y$ conditional on $Z = z$ is Poisson with parameter $\lambda z$.

Shouldn't your sum be from $x=y$, since $P(Y=y\mid Z=z,X=x)=0$ when $y>x$? – Stefan Hansen Jan 30 '13 at 13:53
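A small Monte-Carlo check of this thinning property (my own illustration, with an arbitrary rate λ and the branch z = 0.9 from the question): drawing X ~ Poisson(λ) and then Y | X ~ Binomial(X, z) reproduces the Poisson(λz) probabilities.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)
lam, z, n = 4.0, 0.9, 200_000        # arbitrary rate, thinning probability, sample size

X = rng.poisson(lam, size=n)
Y = rng.binomial(X, z)               # Y | X = x  ~  Binomial(x, z)

for y in range(8):
    empirical = np.mean(Y == y)
    theoretical = exp(-lam * z) * (lam * z) ** y / factorial(y)
    print(f"P(Y={y}): empirical {empirical:.4f}  vs  Poisson(lam*z) {theoretical:.4f}")
```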
2015-10-05 04:51:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9965486526489258, "perplexity": 68.07042609682343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736676622.16/warc/CC-MAIN-20151001215756-00058-ip-10-137-6-227.ec2.internal.warc.gz"}
https://www.nature.com/articles/s41598-021-81712-8?error=cookies_not_supported&code=a527a7c4-3fcd-448c-aa34-5daa7cdc74c2
## Introduction Highly accurate position and displacement measurements are of tremendous importance in many applications, ranging from the detection of gravitational waves to industrial metrology to materials characterization in mechanics. The laws of classical physics do not impose any fundamental limits on the accuracy with which one can measure the position of an object. In quantum mechanics1, the standard deviation of the position measurement value, $$s$$, is subject to fundamental quantum mechanical uncertainty; however, the standard error of the mean or localization error, $$\sigma$$, can approach zero—as in classical physics. Only statistics limits the achievable accuracy. Therefore, in general, a goal of position and displacement metrology is to achieve a given $$\sigma$$ in as short a time as possible or to obtain minimum $$\sigma$$ in a given time. This optimization must appreciate constraints that may apply depending on the application. For example, mechanical contact to the sample may not be acceptable, in which case optical approaches are attractive. Furthermore, fluorescence detection may or may not be possible, a minimum physical distance to the sample could be required, etc. Optical approaches aiming at determining position vectors with ultra-small localization errors include laser interferometry2,3,4,5,6, laser Doppler vibrometry7,8,9, fluorescence-based single-molecule localization10,11,12,13, light-scattering-based single-particle localization14, localization by optical superoscillations from metasurfaces15, and optical-image cross-correlation analysis16,17,18,19,20,21. Concerning acoustical or mechanical metamaterials, laser Doppler vibrometry has frequently been used for measuring the out-of-plane displacement-vector component22,23. Sub-picometer precision is routinely available by commercial instruments24. Image cross-correlation analysis has widely been used for measuring the in-plane components. Here, nanometer-precision characterization of mechanical metamaterials has been achieved. For all of these applications, contact-free measurements at centimeter-scale working distances or beyond, without the need for fluorescent labels, are absolutely crucial19,20. However, some of the optical-image cross-correlation experiments were performed close to the noise limit defined by the accessible localization errors21,25. Therefore, smaller localization errors would have been highly desirable. The novelty of this paper is to push the optical-image cross-correlation approach towards atomic-scale localization errors, while maintaining all of its other virtues. As pointed out above, only statistics limits the achievable accuracy. For certain sample surfaces and under special fortuitous conditions, the statistics can be improved by using multiple regions of interest. However, to make the approach reliable, robust, and versatile, we introduce 3D printed 2D arrays of small and well-defined optical markers. Using an $$8\times 8$$ array of markers within a $$(40\,\upmu {\mathrm{m})}^{2}$$ measurement footprint, we obtain a mean localization error of less than one Angstrom within $$12.5\ \mathrm{ms}$$ measurement time, equivalent to $$80\, \mathrm{frames}/\mathrm{s}$$ frame rate. ## Methods ### Optical-image cross-correlation analysis Optical-image cross-correlation analysis16 starts with two optical images of the same object, $${I}_{1} (x,y)$$ and $${I}_{2}(x,y)$$, in the $$xy$$-image plane. 
These signals can, for example, be derived from an optical bright-field microscope connected to a digital camera, in which case $$x={n}_{x}p$$ and $$y={n}_{y}p$$ are pixelated, with pixel size $$p$$ and integers $${n}_{x}$$ and $${n}_{y}$$. Unlike for the single-particle tracking approaches cited above, the images need not necessarily be taken at the ultimate diffraction limit. In other words: It is possible to use low numerical-aperture microscope lenses. The images will generally contain perturbations, e.g., shot noise, excess electrical read-out noise, stray light, or combinations thereof. To derive a possible nonzero displacement vector, $$(\delta x,\delta y)$$, between the two images #1 and #2, we first calculate the two-dimensional (2D) cross-correlation function $$C\left(\Delta x,\Delta y\right)=\int {I}_{1}\left(x,y\right){I}_{2}\left(x+\Delta x,y+\Delta y\right){\mathrm{d}}x {\mathrm{d}}y.$$ (1) This integral can be performed over the entire available image or over only selected small regions of it, which we refer to as the regions of interest (ROI). This selection is based on large-contrast fine features within the ROI. We select $$M$$ different ROI, corresponding to $$M$$ individual measurements, from which we later compute the mean value and the localization error (see below). This procedure is justified if the systematic error due to the relative motion between these ROI during one measurement is smaller than the determined localization error. For a pixelated image, the integral in Eq. (1) reduces to a sum and the displacement components, $$\Delta x$$ and $$\Delta y$$, are integer multiples of the pixel size $$\left(\Delta x,\Delta y\right)=\left(\Delta {n}_{x}p,\Delta {n}_{y}p\right).$$ Provided that the shift of the object between the two images $${I}_{1}$$ and $${I}_{2}$$ is much smaller than the pixel size in the object plane, the cross-correlation function will exhibit a single maximum at $$(\Delta x,\Delta y)=(\mathrm{0,0})$$, possibly with noise on top. For each ROI, we determine the displacement vector with subpixel precision $$(\delta x,\delta y)$$ by a least-squares fit of a two-dimensional parabola to the maximum of $$C(\Delta x,\Delta y)$$ over $$3\times 3$$ pixels (each ROI corresponds to $$30\times 30$$ pixels). This overall procedure is implemented in an open-access software package26, which we have used for the image analysis in this paper. It has previously been used by us19,20,21,25. Here, we have also tested the software by feeding it with computer generated images $${I}_{1}$$ and $${I}_{2}$$ corresponding to a displacement of, e.g., $$1.0\,\mathrm{nm}$$, leading to a retrieved displacement of $$1.0\, \mathrm{nm}$$ indeed (not depicted). Further simulations are described below. ### Localization errors We use the common definitions of the standard deviation $$s$$, and the standard error of the mean or localization error $$\sigma$$. For all quantities, we distinguish between the $$x$$- and the $$y$$-component by corresponding indices. For $$M\gg 1$$ (with $$M$$ ROI as defined above) individual measurements at one position, we compute the standard deviation $${s}_{x}$$ as $${s}_{x}=\sqrt{\sum_{i=1}^{M}\frac{{\left(\delta {x}_{i}-\langle \delta x\rangle \right)}^{2}}{M-1} .}$$ (2) Here, $$\langle \delta x\rangle =\left(\sum_{i=1}^{M}\delta {x}_{i}\right)/M$$ is the mean value. This procedure is meaningful if the variances of the position determination for the $$M$$ ROI are similar. This aspect has been verified for the data to be shown below. 
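A stripped-down sketch of the procedure just described (my own illustration, not the open-access package used in the paper): evaluate the cross-correlation of Eq. (1) for a small range of integer shifts, locate the peak, and refine it with a parabolic fit over the neighbouring samples, separately in x and y. Image names and sizes are arbitrary, and the sketch assumes the true shift lies well inside the search range.

```python
import numpy as np

def cross_corr(img1, img2, max_shift=3):
    """C(dx, dy) = sum_xy img1(x, y) * img2(x+dx, y+dy) for integer shifts
    in [-max_shift, max_shift] (borders wrap, acceptable for small shifts)."""
    shifts = range(-max_shift, max_shift + 1)
    return np.array([[np.sum(img1 * np.roll(img2, (-dy, -dx), axis=(0, 1)))
                      for dx in shifts] for dy in shifts])

def parabolic_peak(cm, c0, cp):
    """Sub-pixel offset of the maximum, from three neighbouring correlation values."""
    return 0.5 * (cm - cp) / (cm - 2.0 * c0 + cp)

def subpixel_displacement(img1, img2, max_shift=3):
    C = cross_corr(img1, img2, max_shift)
    iy, ix = np.unravel_index(np.argmax(C), C.shape)
    dy = (iy - max_shift) + parabolic_peak(C[iy - 1, ix], C[iy, ix], C[iy + 1, ix])
    dx = (ix - max_shift) + parabolic_peak(C[iy, ix - 1], C[iy, ix], C[iy, ix + 1])
    return dx, dy

# Tiny demo: a Gaussian spot displaced by a known sub-pixel amount
yy, xx = np.mgrid[0:64, 0:64]
spot = lambda x0, y0: np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * 3.0**2))
print(subpixel_displacement(spot(32.0, 32.0), spot(32.3, 31.8)))   # roughly (0.3, -0.2)
```

Repeating such an estimate for M regions of interest and taking the standard error of the mean of the M results is what yields the localization error discussed next.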
The localization error is given by $${\sigma }_{x}=\frac{{s}_{x}}{\sqrt{M}} .$$ (3) The quantities $${s}_{y}$$ and $${\sigma }_{y}$$ are defined analogously. ### Setup Our simple home-built microscope setup shown in Fig. 1 is composed of one microscope objective lens (Zeiss LD Achroplan 20 × /0.40 Corr., $$\mathrm{NA}=0.4$$, free working distance $$11.2\,\mathrm{mm}$$) and one tube lens (Thorlabs SC254-200-A-ML, focal length $$200\, \mathrm{mm}$$). This microscope images the sample plane onto a silicon complementary metal–oxide–semiconductor (CMOS) black/white camera chip (Sony IMX264, $$2448\times 2048\,\mathrm{ pixels}$$), which is connected to a computer. One pixel of the camera chip in the image plane has a side length corresponding to $$138.6\,\mathrm{ nm}$$ in the sample plane. We operate the camera at its maximum frame rate of $$80\, \mathrm{frames}/\mathrm{s}$$ $$=1/(12.5\, \mathrm{ms})$$, corresponding to an individual exposure time of $$12.26\, \mathrm{ms}$$ plus a read-out time of about $$0.24\, \mathrm{ms}$$. This frame rate requires reading out only $$512\times 512\,\mathrm{ pixels}$$ of the camera chip. $$512$$ pixels correspond to a length of about $$71 \,\upmu\mathrm{m}$$. This length is much smaller than the diameter of the field of view of about $$1\, \mathrm{mm}$$ (in the sample plane). Therefore, we assume that image distortions are negligible for the investigated area. We illuminate the sample by a standard swan-neck incandescent lamp (Schott KL 1500 LCD, with additional Thorlabs FESH0700 cold filter) emitting visible white light, which is directed onto the sample under an angle with respect to the optical axis (see Fig. 1). This illumination is sufficiently bright to take full advantage of the camera’s dynamic range of $$8\, \mathrm{bit}$$ within the exposure time of $$12.26\,\mathrm{ ms}$$ (see below), while not overloading it. The sample can be translated by a precision one-axis piezoelectric translation stage (Physik Instrumente P-753.1CD) with capacitive position read-out and the possibility of active feedback control (Physik Instrumente digital controller E-710.3CD). This stage is specified with a resolution of $$0.1\, \mathrm{nm}$$. To quantify the contribution of the read-out noise, we also define the standard error of the mean $${\sigma }_{x}^{^{\prime}}$$ for the nominal $$x$$-position obtained by the capacitive sensor. For each frame, we average over $$K=5$$ sensor measurements acquired within the corresponding exposure time, such that $${\sigma }_{x}^{^{\prime}}=\sqrt{\sum_{k=1}^{K}\frac{{\left(\delta {\stackrel{\sim}{x}}_{k}-\langle \delta {\stackrel{\sim}{x}}\rangle \right)}^{2}}{K(K-1)} .}$$ (4) Here, $$\langle \delta{ \stackrel{\sim }{x}}\rangle =\left(\sum_{k=1}^{K}\delta {\stackrel{\sim }{x}}_{k}\right)/K$$ is the corresponding mean value. ## Results ### Experimental results We illustrate the optical-image cross-correlation approach using a set of different samples. Four electron micrographs are shown in Fig. 2. Sample #1 depicted in Fig. 2a is a sandblasted copper surface. The optical image of sample #1 exhibited in the first row of Fig. 3 is partly due to interference effects, which give rise to spatially narrow and high-contrast features. In Fig. 3, we will show a best-case example. However, typical examples are much worse. Sample #2 depicted in panel b consists of micrometer-sized gold grains that are randomly distributed on an optical-quality glass surface. 
The gold grains offer an easy way to provide high-contrast features to arbitrary low-contrast structures. However, the disordered arrangement of the grains makes the results very much dependent on the chosen sample position. In Fig. 3, we will again display a best-case example. Figure 2c shows a glass surface onto which we have added a periodic square array of polymer markers with a diameter of about $$d=2 \, \upmu\mathrm{m}$$ and a period of $$a=10\,\upmu\mathrm{m}$$. We have manufactured these markers by using standard 3D laser lithography27, using the commercial system Photonic Professional GT with photoresist IP-Dip (both Nanoscribe GmbH, Germany) and a 63x/1.4 NA objective. Thereafter, we have sputtered a $$54\,\mathrm{ nm}$$ thin film of gold onto this sample #4. Sample #3 is as sample #4, but without the sputtered gold film. Without a conductive layer, this sample cannot easily be imaged by electron microscopy. Sample #5, which is depicted in Fig. 2d, is as sample #4 but for a period of $$a=5\, \upmu\mathrm{ m}$$. Results for samples #1 to #5 are summarized in Fig. 3. The five samples correspond to the five rows of this $$5\times 3$$ matrix. The three columns a-c exhibit different measurements. The panels in a show typical raw camera images that are fed into the optical-image cross-correlation analysis. The $$M$$ ROI used for the analysis are indicated by the blue squares. They contain $$30\times 30$$ camera pixels each for all samples. Note that $$M$$ varies among the samples as indicated. All used ROI lie in an area in the image plane corresponding to a footprint of $${\left(40\,\upmu\mathrm{m}\right)}^{2}$$ in the sample plane (dashed white square). The panels in column b show the $$x$$- and the $$y$$-component of the displacement vector, and the nominal $$x$$-position of the 1D capacitive sensor, for $$800$$ points in time corresponding to a total time of $$10 s$$. For each of these $$800$$ points, the colored error bars correspond to $$\pm 1{\sigma }_{x},$$ $$\pm 1{\sigma }_{y}$$, and $$\pm 1{\sigma }_{x}^{^{\prime}}$$, respectively. To a large extent, the error bars are smaller than the symbol size. The mean values of $${\sigma }_{x}$$, $${\sigma }_{y}$$, and $${\sigma }_{x}^{^{\prime}}$$ for the $$800$$ points for the $$x$$- and the $$y$$-component, $$\langle {\sigma }_{x}\rangle$$, $$\langle {\sigma }_{y}\rangle$$, and $$\langle {\sigma }_{x}^{^{\prime}}\rangle$$, are indicated. Here, the sample has not been moved intentionally. Both the $$x$$- and $$y$$-components exhibit typical drifts which are due to a relative motion between sample and camera. The drifts tend to be yet larger if we remove the housing covering the setup (not depicted). Without the housing, unwanted displacements can be induced by airflow, increased temperature variations, and by external sound sources. The panels in column c exhibit the same quantities as in panels b, however, we now intentionally move the piezoelectric stage in a staircase manner with a step height of $$1\ \mathrm{nm}$$. In column c, for all samples, the steps in the $$x$$-direction can be seen clearly, in addition to the slower and subtle drift motions. This observation provides a first and intuitive confirmation that the localization error achieved by the optical-image cross-correlation approach is much less than one nanometer indeed. As we obtain a localization error for each image, corresponding to one data point in Fig. 3b, it is not meaningful to quote all localization errors individually. 
We rather quote for each sample the average value, $$\langle {\sigma }_{x}\rangle$$, over $$800$$ camera images. Inspecting rows 4 and 5 of Fig. 3, one can clearly see that the localization error decreases with increasing number $$M$$ of markers in the array. Furthermore, from row 3 to row 4, the localization error decreases when improving the image quality and image contrast by going from the bare polymer dots to the gold-coated polymer dots. The localizations errors shown in rows 1 and 2 are respectable, too. However, it must be noted that the depicted data are best-case examples taken on sample positions where we have fortuitously found a large number of well-localized and high-contrast bright spots. For many other sample positions (not depicted), we have found much worse results for the sand-blasted copper surface and for the surface covered with gold grains, respectively. Therefore, these approaches do not reliably provide sub-nanometer localization errors. In sharp contrast, the small localization errors on the samples including 3D printed marker arrays are immediately reproducible after a setup realignment and, hence, reliable. For example, the experiments in row 5 of Fig. 3 have been repeated 5 times. We find the same localization error within $$\pm 1.5\%$$ (not depicted). ### Simulation of localization errors References16 and18 give an overview of the various statistical and systematic errors in digital image cross-correlation analysis. In particular, the combination of finite pixel size and finite number of bits already has a significant contribution to the measured localization error28,29. To explore the limits for the localization error for our specific conditions, we have performed computer simulations in which we have generated $$8\times 8$$ arrays of Gaussian light spots with a width and arrangement comparable to those of sample #5 (see panel a in the fifth row of Fig. 3). Furthermore, we have considered the same pixel numbers for the ROI and for the fitting as well as a number of $$8$$ bits as in the experiments (see above). In our simulations, each pixel averages over the intensity within. The brightness of the light spots was chosen to cover the full $$8$$ bit dynamic range of the image. The processing of the simulated data was strictly identical to that of the experimental data. Accounting for read-out noise with an amplitude of, e.g., $$2.3$$ bits for each camera pixel has led to simulated statistical localization errors of $$\langle {\sigma }_{x}^{\mathrm{sim}}\rangle =0.08\,\mathrm{ nm}$$ and $$\langle {\sigma }_{y}^{\mathrm{sim}}\rangle =0.08\,\mathrm{ nm}$$ (not depicted). These values are comparable to $${\langle \sigma }_{x}\rangle =0.09\, \mathrm{nm}$$ and $${\langle \sigma }_{y}\rangle =0.10\, \mathrm{nm}$$ obtained for sample #5 (see panel b in the fifth row of Fig. 3). To investigate systematic errors28,29, we have located the spots at various different positions with respect to the simulated camera pixel array. Thereby, for zero read-out noise, we have obtained simulated localization errors of $$\langle {\sigma }_{x}^{\mathrm{sim}}\rangle =0.02\, \mathrm{ nm}$$ and $$\langle {\sigma }_{y}^{\mathrm{sim}}\rangle =0.03\,\mathrm{ nm}$$. These simulated systematic errors show that the localization errors achieved in our experiments already approach the limit of the underlying image cross-correlation algorithm under the given conditions. Finally, Figure S1 illustrates the dependence of the localization error on the width of the light spots and their brightness. 
As expected, the localization error increases with increasing spot width and decreasing brightness level. ## Discussion The image cross-correlation approach as presented here allows for determining the two in-plane components of the displacement-vector field with small localization errors simultaneously, for a total area for the used markers of $${\left(40\, \upmu \mathrm{m}\right)}^{2}$$, and for a large free working distance of the sample to the microscope lens of $$11.2\,\mathrm{ mm}$$. For example, for the $$8\times 8$$ gold-coated polymer-marker array presented in the last row of Fig. 3, we have achieved a mean localization error of $${\langle \sigma }_{x}\rangle =0.09\, \mathrm{nm}$$ at $$12.5\, \mathrm{ms}$$ time resolution. This value is significantly better than anything else that we have previously obtained by using the image cross-correlation approach on samples without dedicated marker arrays. This finding correlates positively with the fact that this sample has the largest density of non-overlapping ROI in the given footprint of $${\left(40\, \upmu \mathrm{m}\right)}^{2}$$. The image acquisition process itself is identical to not using marker arrays. The fabrication of the marker arrays onto 3D printed mechanical metamaterial architectures takes negligible time compared to that of the rest of such samples. For the same measurement time of $$12.5\, \mathrm{ms}$$, equivalent to a camera frame rate of $$80$$ $$\mathrm{frames}$$/s, the localization error obtained from the $$8\times 8$$ gold-coated marker-array sample in the last row of Fig. 3 is even significantly smaller than the localization error $$\langle {\sigma }_{x}^{^{\prime}}\rangle =0.17\, \mathrm{nm}$$ obtained from the capacitive sensor that is built into the high-quality piezoelectric actuator. However, to be fair, it should be noted that the signal obtained from the capacitive sensor is available in real time. The data for the optical-image cross-correlation approach are also acquired in real time, but the subsequent cross-correlation analysis described above takes considerable overhead time. On a state-of-the-art standard personal computer, it has taken us about $$15\ \mathrm{ms}$$ processing time for one ROI in one image, of which $$7\, \mathrm{ms}$$ are required to merely load the image into the software. With an increasing number of markers, the software overhead per marker decreases. For example, for $$8\times 8$$ ROI, this leads to a processing time of $$90\ \mathrm{ms}$$ for one image and to $$72\, \mathrm{s}$$ processing time total for the $$800$$ images for each sample shown in Fig. 3b. This timescale is essentially irrelevant when performing high-precision characterization experiments on mechanical metamaterials, which is the application we have in mind. However, if one aims at any sort of real-time active feedback of a displacement or position, this timescale would obviously be unacceptable. It has been shown for cross-correlation analysis that the processing time can be sped up substantially by using field-programmable gate arrays (FPGA)30,31 instead of the single standard personal computer in our experiments. Finally, we note that adding markers to a sample generally influences its properties. For example, the (metallized) polymer markers may influence the local dielectric properties and will increase the optical scattering. Likewise, the additional mass will affect the mechanical properties of the specimen under investigation. 
However, in previous experiments comparable markers have not had a major disturbing influence, neither in quasi-static regime32, nor in measurements at ultrasound frequencies25. ## Conclusion By introducing well-defined 3D printed marker arrays on surfaces, we have reliably pushed the optical-image cross-correlation approach to localization errors below one Angstrom at camera frames rates of $$80\, \mathrm{frames}/\mathrm{s}$$. Under our conditions, one Angstrom is several thousand times smaller than the wavelength of white light used for illumination and more than thousand times smaller than a single camera pixel. Most importantly, these values are achieved with a very simple and inexpensive optical setup that can immediately be used for applications, e.g., for the characterization of mechanical metamaterials.
2023-03-26 02:06:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6478382349014282, "perplexity": 870.6888568101595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00651.warc.gz"}
https://j-james.me/math/precalculus/prove-trig-identities.html
# j-james/math

All my math notes, now in Markdown.

# Prove Trig Identities

## Learning Targets

You should be able to
• Verify / Prove (informally) trigonometric identities

## Concepts / Definitions

$\begin{matrix} \textbf{Left-Hand Side}& &\textbf{Right-Hand Side}\\ \cos{x}+\sin{x}\tan{x}&=&\sec{x}\\ \cos{x}+\frac{\sin{x}}{1}(\frac{\sin{x}}{\cos{x}})&=&\sec{x}\\ \cos{x}+\frac{\sin^2{x}}{\cos{x}}&=&\sec{x}\\ (\frac{\cos{x}}{\cos{x}})(\frac{\cos{x}}{1})+\frac{\sin^2{x}}{\cos{x}}&=&\sec{x}\\ \frac{\cos^2{x}}{\cos{x}}+\frac{\sin^2{x}}{\cos{x}}&=&\sec{x}\\ \frac{\sin^2{x}+\cos^2{x}}{\cos{x}}&=&\sec{x}\\ \frac{1}{\cos{x}}&=&\sec{x}\\ \sec{x}&=&\sec{x}\\ \end{matrix}$

To disprove an identity, you only need to show one particular example (like plugging in a number to show both sides are not equal).
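A quick symbolic check of this identity (my addition, assuming SymPy is available):

```python
from sympy import symbols, sin, cos, tan, sec, simplify

x = symbols('x')
lhs = cos(x) + sin(x) * tan(x)

print(simplify(lhs - sec(x)))   # 0, so the identity holds wherever cos(x) != 0
```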
2020-10-21 22:11:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6567635536193848, "perplexity": 5141.114895560358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878633.8/warc/CC-MAIN-20201021205955-20201021235955-00668.warc.gz"}
https://math.stackexchange.com/questions/1535494/the-set-of-traces-of-orthogonal-matrices-is-compact
# The set of traces of orthogonal matrices is compact Is the following set compact: $$M = \{ \operatorname{Tr}(A) : A \in M(n,\mathbb R) \text{ is orthogonal}\}$$ where $\operatorname{Tr}(A)$ denotes the trace of $A$? In order to be compact $M$ has to be closed and bounded. $\|A\|=\sqrt {\sum_{i,j} {a_{ij}}^2}=\sqrt n$ and hence bounded. So $\operatorname{Tr}(A)<\sqrt n$. Hence $M$ is bounded. Now we have to prove that $M$ is closed. Let $\operatorname{Tr}(A_n)$ be a sequence of matrices converging to $\operatorname{Tr}(A)$ where $A_n$ is a sequence of orthogonal matrices. The only thing remaining to show is that $A$ is orthogonal. • Since you were given this question, I would imagine that you're allowed to use the result that the set of orthogonal matrices is compact. – Omnomnomnom Nov 18 '15 at 17:23 • $A$ is not a set, so $A$ is not compact. The set of orthogonal matrices is compact under the usual topology. – Thomas Andrews Nov 18 '15 at 17:34 • There's a flaw in your argument. You cannot assume that $A_n \to A$ such that $tr(A_n) \to tr(A)$. In general you have to show that for any sequence of matrices $(A_n)$ such that $\lim tr(A_n) = l$ for some number $l$, then $l = tr(A)$ for some orthogonal matrix $A$. There's a difference. – Najib Idrissi Nov 18 '15 at 19:51 • It's not true that $\text{Tr}(A) \le \sqrt{n}$. The correct bound is $n$. – Robert Israel Nov 18 '15 at 20:05 Besides using compactness of the set of orthogonal matrices, you can show directly that the set of possible traces is $[-n,n]$. Note that $I$ and $-I$ are orthogonal with $\text{Tr}(I) = n$ and $\text{Tr}(-I) = -n$. On the other hand, each matrix element of an orthogonal matrix has absolute value at most $1$, so $\text{Tr}(T) = \sum_{i=1}^n T_{ii}$ has absolute value at most $n$. To get orthogonal matrices with every trace value from $-n$ to $n$, consider those made from diagonal blocks of the form $$\pmatrix{\cos \theta & \sin \theta\cr -\sin \theta & \cos\theta}$$ with an additional diagonal entry of $+1$ or $-1$ in case $n$ is odd. • Sir ,I did not get your last paragraph "to get orthogonal..." for example how to get a $3\times 3$ orthogonal matrix with $\operatorname{Tr A}=0.6$ say – Learnmore Nov 19 '15 at 2:23 • it will be great if you could explain – Learnmore Nov 19 '15 at 2:23 • $\pmatrix{\cos \theta & \sin\theta & 0\cr -\sin\theta & \cos\theta & 0\cr 0 & 0 & 1\cr}$ where $\cos \theta = -0.2$. – Robert Israel Nov 19 '15 at 3:30 Hint: To show that $A$ is orthogonal, consider $\lim_{n \to \infty} \left\| A_n^TA_n - I \right\|$, noting that the function $$f(X) = \left\| X^TX - I \right\|$$ is continuous. The orthogonal matrices are compact, as I show below. The trace function is continuous, so the image of the orthogonals under this function must be compact as well. To see that the orthogonals are compact, first note that the condition $A^TA = I$ is a closed condition. It is the preimage of a single point (closed) under a continuous map, as long as you define the continuous map properly. Further, they are bounded, since their columns are all of norm one. • Nitpick: having bounded eigenvalues is not sufficient to prove that the matrices themselves are bounded (e.g. transvections). – D. Thomine Nov 18 '15 at 17:33 • is the trace function continuous because every linear map on a finite dimensional space is continuous – Learnmore Nov 18 '15 at 17:33 • Or because the maps $f_{ij}$ which send $A\mapsto A_{ij}$ are continuous, and the linear combination of them is continuous. 
@learnmore – Thomas Andrews Nov 18 '15 at 17:36 • In fact, learnmore shows that the orthogonal matrices are bounded by noting that they all satisfy $\|A\| = \sqrt n$ – Omnomnomnom Nov 18 '15 at 17:43 • @Eric Auld they are bounded because every row/column vector has norm 1 – Learnmore Nov 18 '15 at 17:43
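A quick numerical illustration of the answers above (added here, not part of the original thread; uses NumPy):

```python
import numpy as np

def rotation_block(theta):
    """2x2 rotation block of the form [[cos t, sin t], [-sin t, cos t]]."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

# Robert Israel's construction: one rotation block plus a diagonal +1 gives an
# orthogonal 3x3 matrix with trace 2*cos(theta) + 1, e.g. trace 0.6 for cos(theta) = -0.2.
theta = np.arccos(-0.2)
A = np.zeros((3, 3))
A[:2, :2] = rotation_block(theta)
A[2, 2] = 1.0
print(np.allclose(A.T @ A, np.eye(3)))  # True: A is orthogonal
print(np.trace(A))                      # 0.6

# The bound |Tr(Q)| <= n, checked on a random orthogonal matrix from a QR factorization.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
print(np.allclose(Q.T @ Q, np.eye(5)), abs(np.trace(Q)) <= 5)  # True True
```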
2020-02-22 01:08:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9351252317428589, "perplexity": 166.78109962128426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145621.28/warc/CC-MAIN-20200221233354-20200222023354-00230.warc.gz"}
http://math.stackexchange.com/questions/79067/how-to-count-the-additional-lines
# How to count the additional lines?

A plane has 6 lines of which no two lines are parallel and no three are concurrent. Their points of intersection are joined; how many additional lines are so formed? I know that the number of points of intersection for $n$ lines would be $\sum \limits_{i=1}^{n-1} i=\frac{n(n-1)}{2}$, but then how do I do the rest?

- Are you sure that it doesn’t ask for the maximum possible number of additional lines? –  Brian M. Scott Nov 4 '11 at 22:35 What if three intersection points lie on a line? Do we call the resulting number of lines 1, 2, or 3? –  mixedmath Nov 4 '11 at 22:38

## 1 Answer

Every choice of two intersection points determines a line. You already have some of these -- how many pairs of intersection points will generate each of the original 6 lines? There's a risk that three of the intersection points will lie on a common line that was not one of the originals, but you're probably supposed to ignore that possibility.

- The case where three of the intersection points lie on a new line is much more interesting than the original problem! But I'm sure you are right that one was meant to ignore that possibility. –  Gerry Myerson Nov 5 '11 at 3:36
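Following the hint (a worked count added here, not part of the original thread, and assuming no three of the intersection points are accidentally collinear): the $6$ lines meet in $\binom{6}{2}=15$ points, and each original line carries $5$ of them, so the $\binom{5}{2}=10$ pairs taken from a single line all reproduce that original line. The number of additional lines is therefore $\binom{15}{2}-6\binom{5}{2}=105-60=45$.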
2014-10-21 11:55:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6253571510314941, "perplexity": 310.1482257497699}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444385.33/warc/CC-MAIN-20141017005724-00002-ip-10-16-133-185.ec2.internal.warc.gz"}
http://j4tg-lasers.com/79mmdm/6502d9-cube-root-function-desmos
Note that there is no problem taking a cube root, or any odd-integer root, of a negative number, and the resulting output is negative (it is an odd function). When I have to derive some physics equations and I'm bound to make mistakes, desmos allows me to simply correct that little mistake instead of starting all over again and what's more important is that it looks way better than my horrible handwritten equations. Mathematical Functions Available In WeBWorK. This activity is designed to allow students to identify transformations to the the parent cube root function. A power function is a function with a single term that is the product of a real number, coefficient, and variable raised to a fixed real number power. ≤ is an inequality that represents "less than or equal to." sin() the sine function. In the early rounds of the game, students may notice graph features from the list above, even though they may not use those words to describe them. Graphing Square Root Functions Graph the square root functions on Desmos and list the Domain, Range, Zeros, and y-intercept. Next lesson. will surely come handy for many of us. ... Transformations of Square Root and Cube Root Functions Trig/Math Analysis Graphing the Sine Function using Amplitude, Period, and Vertical Shift This is the currently selected item. Typing <= would result in ≤. Domain: $(-\infty,\infty)$ 2. a b x − c + d. 3. a = 1. Key vocabulary that may appear in student questions includes: intercept and quadrant. H=1 K=4 A=2. 3. Algebra Desmos Library: Basic Laws, Properties, and Definitions. Cube Root Graph. 5. c = 0. parts of a coordinate plane. cos() the cosine function. 2. a b x − c + d. 3. a = 1. Chapter 4 includes quadratics, Chapter 5 is cubics, and now in chapter 6 it is square root and cube root functions. Now, less than two months later, we’re adding that many new installs per day. Show/hide sidepanel 5. c = 0. properties of radicals. Thanks to a beautiful redesign of the Chrome store, it seems as though more and more users are finding, installing, and loving the Desmos Calculator. Graphing Square Root and Cube Root Functions Desmos Exploration. If you want some equation to specifically NOT work for some value m=n, then multiply something by (m-n)/(m-n) which gives 0/0 and thereby creates a hole in the function. Recognize graphs of parent functions. Karen, the "sign(x)" function returns 1 for positive values, -1 for negative values, and 0 for 0. There are huge benefits that are hard to foresee for example imagine a teacher making a video tutorial while typing equations instead of scribbling with hand, or a lecture where professor uses projector and desmos instead of a blackboard, typing is easier than writing and reading typed words is easier, everybody would be happy! Log InorSign Up. Strand: Functions Standard of Learning (SOL) AII.6a For absolute value, square root, cube root, rational, polynomial, exponential, and logarithmic functions, the student will recognize the general shape of function families. I created a day 2 or review type of activity to give students a chance to practice with putting some of the concepts together. This algebra video tutorial explains how to graph cube root functions in addition to writing the domain and range of the function in interval notation. Lessons 1-3 Graphing Stories. Graph square root, cube root, and piecewise-defined functions, including step functions and absolute value functions. 
Today we will use the computer application Desmos Graphing Calculator to see how to graph a function with restrictions on the domain.. Students are given Writing Piecewise Activity to read over. Rational Expressions. Practice Problems. Students will graph functions that meet certain characteristics, describe the transformations of a particular function from the parent, and describe transformations of a given cube root function. Note: the cosine function uses radian measure. If you want positive and negative possibilities, simply square everything. My colleagues are well aware of my competitive nature. Graphs of exponential functions. YouTube. powered by $$x$$ y $$a 2$$ a b $$7$$ 8 ... Transformations: Scaling a Function. Do problems 13-18 from the IM3 – Unit 2 Homework <- PREVIOUS TOPIC. Desmos Activity (teacher use only): Transformations of Radical Functions. b? Instead it goes between equations. Polynomials intro. We began the lesson by looking back at the Desmos pre-made sheet for parabolas in vertex form for a reminder of how a, h, and k transform a graph. Thanks. There's no input received when I try typing on it for an equation; it only shows the desmos keyboard. Radical functions & their graphs. Khan Academy. Statistics: Linear Regression. It mostly looks like a straight line but it has this fascinating "hump" in the middle. I was at CMCS15, listening to Eli Luberoff (creator of DESMOS) discuss Technology and Intellectual Need. how do you underline an ARROW? it wont graph any more, and i'm afraid to refresh. I’m constantly setting goals and pushing myself to achieve them. AKA THIS <. Name: Graphing Cubic and Cube Root Functions Using Desmos Getting Started Go to desmos.com and click on Click on 4 Click on Click on and name it “Cubic Functions”and your Name Cubic Function Investigation w/ Slide Rulers The function y = x 3 is called a cubic function. DESMOS Graphing Art Project 1. Graphing Square Root and Cube Root Functions Desmos Exploration. By the time students get to graphing of square and cube root functions they have done a lot of graphing by hand and talked about transformations several times. 8. powered by. As an example, consider functions for area or volume. 1. To complete John's thought, there are three distinct cube roots of every non-zero number (positive real, negative real, complex), not just of the negative real numbers. f ( x ) = ∛ (x - 2) and find the range of f. Solution to Example 2. Solving Square Root / Cube Root Equations Pre-Algebra / Algebra 1. Identities Proving Identities Trig Equations Trig Inequalities Evaluate Functions Simplify Statistics Arithmetic Mean Geometric Mean Quadratic Mean Median Mode Order Minimum Maximum Probability Mid-Range Range Standard Deviation Variance Lower Quartile Upper Quartile Interquartile Range Midhinge Using table headers or lists are possibilities. Chapter 4 includes quadratics, Chapter 5 is cubics, and now in chapter 6 it is square root and cube root functions. In these cases, "a" is used to represent a list or table header previously defined by the user in the calculator. Example 1 Graphing Square Root Functions Graph the square root functions on Desmos and list the Domain, Range, Zeros, and y-intercept. But the problems I have is that lots of symbols are missing like integrals, Greek letters, matrices and so forth and there is too little space for writing equations so I hope in the future desmos will suit these purposes too and replace scratch papers. 
A power function is a function with a single term that is the product of a real number, coefficient, and variable raised to a fixed real number power. Could you add the atan2 function, pleazzzze :), Pearson Correlation Coefficient of Two Lists. For example, round(17.56789,2) would evaluate to 17.57 and would be nice for several practical tasks. We just updated the app to include integrals! You must create an electronic picture using the graphs of at least 30 different equations representing the functions named below. This activity is designed to allow students to identify transformations to the the parent cube root function. Is there a way to do plus or minus for quadratic equations? Cube roots is a specialized form of our common radicals calculator. example. The workaround is to type either + or - and then [-1, 1] * (whatever you wanted to type). Describe the Transformations using the correct terminology. Donate or volunteer today! Transformations: Inverse of a Function. NEXT TOPIC -> Proudly powered by WordPress. 8. powered by. Students will be able to construct and compare function models and solve contextual problems. In this investigation, you will use Desmos to graph cubic function s and … Function models: - absolute value - square root - cube root - piecewise Analyze multiple representations of functions using: - Key features - Translations - Parameters/limits of domain. types of rational expressions. (x+b/2a)^2=b^2-4ac, if you let d^2=b^2-4ac, then let x=(d-b)/2a, you should get the same result and in a way that can be understood easily, yes? Key vocabulary that may appear in student questions includes: intercept and quadrant. 6. d = 0. See Lesson 5.1 for a review of inverse functions, including how to algebraically find the inverse relation for a given function, and how the graphs of inverse functions are related.. The same would apply to the "greater than or equal to" symbol. Saturday, November 7, 2015, Palm Springs, CA. y = 3 x. example. This is similar to what I.e. d? When typing in a cube root (or odd root) function, be sure to include a coefficient, even if it is just 1. Ups, sorry some words are missing in my post XD, This helped a lot because I had to graph cube root and I was freaking out and i found this! Then, determine the values of a, h, and k in the general equation. The quadratic formula. Theme: Untitled by WordPress.com. Grade Level Skills: Would be great if there were a "Fraction" function (converts answer to reduced fraction if possible)... like the TI-84. :-). Also, because integrals can take a while sometimes, it would be nice to have a way to increase/decrease their accuracy somehow (perhaps just as a graph option) so that we can choose between having a more accurate or a more dynamic graph. ... Transformations of Square Root and Cube Root Functions Trig/Math Analysis Graphing the Sine Function using Amplitude, Period, and Vertical Shift Algebra 1 Eureka Math/Engage NY Module 1. How do I get degree symbol? By the time students get to graphing of square and cube root functions they have done a lot of graphing by hand and talked about transformations several times. Site Navigation. Videos. Keep in mind a number that multiplies a variable raised to an exponent is known as a coefficient. More Related Concepts. (I am on a tablet.) I love Desmos not only because its graphing capabilities but I'm also using as my notebook instead of the paper. b? solving cube root equations. 
The cube root function to determine the cube root of a number, here are some examples of special cubic roots given by the online calculator. Thanks to a beautiful redesign of the Chrome store, it seems as though more and more users are finding, installing, and loving the Desmos Calculator. Desmos card sort: function family (group the function family name, equation, and graph) Khan exercise: Graphs of square and cube root functions (the coefficient of x must be one; if not factor) Worksheet #1; 6.5 State the Domain of Root (Radical) Functions. Graph, Domain and Range of the Basic Cube Root Function: f (x) = ∛x The domain of function f defined by f (x) = ∛x is the set of all real numbers. After Eli’s session, as J.J, Tim and I walked to lunch, I announced my next goal: Create an activity bui… Name: Graphing Cubic and Cube Root Functions Using Desmos Getting Started ● Go to desmos.com and click on ● Click on ● 4 Click on ● Click on and name it “Cubic Functions”and your Name Cubic Function Investigation w/ Slide Rulers The function y = x 3 is called a cubic function. Currently, I'm using round(x*10^n)/10^n to round to "n" digits. order of operations. Cube root functions are, like square root functions, another type of radical function. Graphing Cube Root Functions. Franklin - is your app the most updated version? When I have to derive some physics equations and I'm bound to make mistakes, desmos allows me to simply correct that little mistake instead of starting all over again and what's more important is that it looks way better than my horrible handwritten equations. Graphing on Geogebra or Desmos as part of the day 1 intro will be helpful. The domain of the cube root function given above is the set of all real numbers. I love Desmos not only because its graphing capabilities but I'm also using as my notebook instead of the paper. To complete John's thought, there are three distinct cube roots of every non-zero number (positive real, negative real, complex), not just of the negative real numbers. ... a cube root or logarithm function, you must hit the “functions… Do problems 13-18 from the IM3 – Unit 2 Homework <- … example. Quadratic Function. As an example, consider functions for area or … This activity is designed to allow students to identify transformations to the the parent cube root function. Khan Academy is a 501(c)(3) nonprofit organization. What does changing a do? This algebra video tutorial explains how to graph cube root functions in addition to writing the domain and range of the function in interval notation. Describe the Transformations using the correct terminology. >Can you add the ± symbol to the graphing calculator? 2. On September 1 st of this year, we wrote on this blog about passing 2,000 Chrome App store installs of our graphing calculator. And as John points out, some of these roots are complex, so you need to know how the tools you are using behave in order to get the answer(s) you want. It could be given a shortcut like +-. They are the inverse of cubic functions (sometimes requiring a domain restriction). That is a great list of key shortcuts. You can input this by using a "less than" < symbol followed by an "equal" = symbol. Arnav - we don't have a degree symbol but you can toggle between radians and degrees in the graph settings menu! It easy to calculate ∛ (x - 2)if you select values of (x - 2) as -8, -1, 0, 1 and 8 to construct a table of values then find x in order to graph f. x - 2. 
You can find these keyboard shortcuts and more in the help menu in the calculator or by pressing ctrl + / on Windows, command + / on Mac. It would be great to have shortcuts for: 6. d = 0. Cube root For the cube root function $f\left(x\right)=\sqrt[3]{x}$, the domain and range include all real numbers. That’s where you can step in. Use this calculator to find the cube root of positive or negative numbers. Thanks! Looking for radicals with an index greater than 3? Key concepts include ... have the students graph the function using Desmos, then drag a point along the function’s graph to show the point is ... parts of a cube root function. Zoom in/out ... a cube root or logarithm function, you must hit the “functions” button. You must create an electronic picture using the graphs of at least 30 different equations representing the functions named below. My colleagues are well aware of my competitive nature. 2. Suggestion about "round(x)":  It would be nice if round took 2 arguments; the first for the value to be rounded, and the 2nd for the number of decimal places. Now, less than two months later, we’re adding that many new installs per day. Statistics: Anscombe's Quartet. We began the lesson by looking back at the Desmos pre-made sheet for parabolas in vertex form for a reminder of how a, h, and k transform a graph. The student will investigate and analyze linear, quadratic, absolute value, square root, cube root, rational, polynomial, exponential, and logarithmic function families algebraically and graphically. Is there a way to add bar notation for repeating decimals\. graph of a cube root function. Why is the iPad/iPhone app missing the integral and prime symbols which are available online? Add a new note: type " in an empty expression, Subscript: can be entered like y1 or v_ariable, You can also move around the expression list and within table cells using the arrow keys, √: type "sqrt" (you can also type "nthroot" for cubed roots, etc), A printable version of this page is available for download here: http://s3.amazonaws.com/desmos/Desmos_Calculator_User_Guide.pdf. You can get single points Practice Problems. c? Lesson: 1. The Desmos activities on this page are ones that I created, including a few that were edited by the awesome Desmos Teaching Faculty. Showing top 8 worksheets in the category cube root functions. 1. exponent laws. The cubic function y = x3 − 2 is shown on the coordinate grid below. His message was motivating. This illustrates why it's more useful than arctan: https://www.desmos.com/calculator/bgok2fnl9d. Basic square root and cube root functions. In the early rounds of the game, students may notice graph features from the list above, even though they may not use those words to describe them. Calculus: Integral with adjustable bounds. 2. 2. powered by. More Desmos activities are available at teacher.desmos.com and also at the Desmos Bank. sin 73 degree. Statistics: 4th … If not I'm gonna make it! Graphing Cube Root Functions. In algebra, a quadratic equation (from the Latin quadratus for "square") is any equation that can be rearranged in standard form as ax²+bx+c=0 where x represents an unknown, and a, b, and c represent known numbers, where a ≠ 0. Chapter 4 includes quadratics, Chapter 5 is cubics, and in chapter 6 it is square root and cube root functions. Writing the Equation of a Cube Root Graph. Key concepts include values of a function for elements in its domain. 
I started messing around in Desmos and I decided to see what it looks like when you put a cubic function inside of a cube root, and I got a truly fascinating curve. Khan Academy. The "exp(x)" function is the same as e^x, probably for compatibility with both calculators and programming languages. I decided to take a different approach this time around for the rest of the lesson. 1. Students will graph functions that meet certain characteristics, describe the transformations of a particular function from the parent, and describe transformations of a given cube root function. Can anyone tell me what does the sign function listed under other supported functions do? You can also type "sqrt" in the expression line, which will automatically convert into √ To enter the cubed root symbol from the Desmos keyboard, click on FUNCTIONS and then Misc. Can anyone tell me what the exp function under other supported functions do? 1. Could you implement the two argument arctangent (atan2) function as well? Keep in mind a number that multiplies a variable raised to an exponent is known as a coefficient. I can't write the quadratic formula in my graphs... Edit: I just learned that copy pasting it works. x ^(1/3) gives , the cube root of x. x ^(1/n) gives , the nth root of x. x ^(p/q) gives . And as John points out, some of these roots are complex, so you need to know how the tools you are using behave in order to get the answer(s) you want. MP1. 7. As for the suggestion above, it'd be best if the round function could work for both one argument and two arguments (when the second argument isn't specified, it should be 0 decimals). That’s where you can step in. Is there any way to add bar notation for repeating decimals? I decided to take a different approach this time around for the rest of the lesson. Graphing cube root functions worksheet date t2k01y4j kkiujtbak tsiomfqtowtaorcea dlplfchu nahlll mrligthetdsm orhesdegrcvbebdy. The range of f is the set of all real numbers. to calculate the cube root of 8 , enter cube_root(8) , the result is 2. I'm having trouble understanding why the function looks this way though. Students will graph functions that meet certain characteristics, describe the transformations of a particular function from the parent, and describe transformations of a given cube root function. Similar to the around() function in NumPy. Our mission is to provide a free, world-class education to anyone, anywhere. Transforming a Square Root Function. If you need a little more detailed explanation, this link goes to a another video, of a different example. Cube roots and nth Roots. It's a lot more useful than the standard arctangent function, and I'm getting tired of having to redefine it every project. Here are some examples of how to graph cube root functions. -8. His message was motivating. also, i don't know if it's my computer being really slow or the site breaking at a certain point..... but when doing about 20 exponential (x^n for 0
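The transformation sliders discussed above (a, h, k) amount to graphing y = a·cbrt(x − h) + k. A quick way to reproduce the same exploration outside Desmos (added here as an illustration, using NumPy and Matplotlib rather than anything from the original page, with the slider values h = 1, k = 4, a = 2 mentioned above) is:

```python
import numpy as np
import matplotlib.pyplot as plt

# Transformed cube-root function y = a*cbrt(x - h) + k with h = 1, k = 4, a = 2.
a, h, k = 2.0, 1.0, 4.0
x = np.linspace(-8, 10, 400)

plt.plot(x, np.cbrt(x), label="parent: y = cbrt(x)")
plt.plot(x, a * np.cbrt(x - h) + k, label="y = 2*cbrt(x - 1) + 4")
plt.axhline(0, color="gray", lw=0.5)
plt.axvline(0, color="gray", lw=0.5)
plt.legend()
plt.show()
```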
2022-05-18 17:21:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47725963592529297, "perplexity": 972.5753803851387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522284.20/warc/CC-MAIN-20220518151003-20220518181003-00057.warc.gz"}
https://pressbooks.pub/guide/chapter/edit-content-with-the-visual-text-editors/
# Edit Content with the Visual & Text Editors

When writing and editing your book in Pressbooks, you can choose between a default “Visual Editor”, which displays your content and shows you much of the styling and formatting you have applied, or a “Text Editor”, which displays the full HTML structure of your content without any CSS applied.

# Use the Visual Editor

The visual editor is the default editor. It is a WYSIWYG (What You See Is What You Get) interface that allows you to see styling and formatting as they are applied. This interface also includes a toolbar at the top of the editor.

The visual editor toolbar displays all formatting options by default. You can collapse the second and third rows of tools by clicking the Toolbar toggle button (Shift + Alt + Z) and can move into a ‘Distraction-free writing mode’ by pressing ‘Shift + Alt + W’. To shift focus to the inline toolbar when an image, link, or preview is selected, press ‘Alt + F8’ (fn + F8 on a Mac); to shift focus to the visual editor menu, press ‘Alt + F9’ (fn + F9 on a Mac); to shift focus to the visual editor toolbar, press ‘Alt + F10’ (fn + F10 on a Mac); and to shift focus to the elements path, press ‘Alt + F11’ (fn + F11 on a Mac). You can also view a set of Keyboard Shortcuts for various keys in the visual editor by pressing ‘Shift + Alt + H’.

## Visual toolbar options:

Top row:

1. Paragraph styles dropdown menu: choose from normal paragraph style (Shift + Alt + 7), six different heading styles (Shift + Alt + 1-6), or preformatted text
2. Bold (Ctrl + B)
3. Italics (Ctrl + I)
4. Unordered (bulleted) list (Shift + Alt + U)
5. Ordered (numbered) list (Shift + Alt + O)
6. Blockquote (Shift + Alt + Q)
7. Left-align (Shift + Alt + L)
8. Center-align (Shift + Alt + C)
9. Right-align (Shift + Alt + R)
11. Read more (Shift + Alt + T)
12. Toolbar toggle (Shift + Alt + Z)

Second row:

1. Formats dropdown menu: choose from several text indent and tracking options, as well as pullquote options
2. Textboxes dropdown menu: choose from a variety of plain textboxes or predesigned educational textboxes (read more here)
3. Underline (Ctrl + U)
4. Strikethrough (Shift + Alt + D)
5. Horizontal line
6. Justify (Shift + Alt + J)
7. Text color
8. Text background color
9. Paste as text
10. Clear formatting
11. Special character
12. Decrease indent
13. Increase indent
14. Undo (Ctrl + Z) and Redo (Ctrl + Y)
15. Keyboard shortcuts guide (Shift + Alt + H)

Bottom row:

2. Apply Class
3. Anchor
4. Superscript
5. Subscript
6. Code (Shift + Alt + X)
7. Footnote[1]
8. Convert Microsoft Word footnotes
9. LaTeX shortcode
10. Glossary

You can highlight a section of existing content and then click a tool on the toolbar to add formatting to that section. Alternatively, select the tool first, and then add new formatted content.

# Use the Text Editor

You can also choose to work in a text editor, or switch to it as needed (to clean up messy HTML, for example). The text editor allows you to directly view and edit your book’s HTML content as HTML.

## Text Editor Options

The text editor toolbar offers fewer options, tailored to working in HTML. None of the buttons in the HTML editor have keyboard shortcuts, but their functionality is detailed below.

1. Open and close <strong> tags to make text bold (click once to open, and again to close the tag)
2. Open and close <em> tags to make text italics
3. Insert Link text (a pop up will appear)
4. Insert <blockquote> tags
5. Strikethrough text (<del> tags)
6. Insert a date/time tag
7. Insert an image (from URL)
8. Insert an unordered (bulleted) list
9. Insert an ordered (numbered) list
11. Open and close <code> tags
12. Insert a ‘Read More’ tag (<!--more-->)
13. Close tags (automatically closes any open tags)
14. Insert footnote shortcode

# HTML Basics

Pressbooks is designed to make it easy for you to create attractive webbooks and export files without knowing much about book design or web development. Our ability to do this, however, is constrained in many ways by the quality of the underlying ‘markup’ in your book. Pressbooks uses HyperText Markup Language [HTML] to provide the content and structure of your book and Cascading Style Sheets (CSS) to apply the styles that control the appearance of your webbook and export files. You don’t need to know HTML or CSS to use Pressbooks, but understanding a little bit about how they work will help make sure your books look good when you export from Pressbooks.

NOTE: Pressbooks’ visual editor allows you to write your book and apply several style choices without ever seeing any of your book’s underlying HTML. If you find that you have formatting problems with your output, however, it’s almost always caused by problems with your book’s underlying markup.

Here is a brief passage of text with some formatting:

A long, long time ago, in a galaxy far, far away, there lived a fine young man unaware of various things about his past.

This is what you might write into the VISUAL editor of Pressbooks. But if you look at the TEXT editor, you’ll see that the way that italic and bold is achieved is through “markup”, or HTML. So the markup of that text looks like:

A long, <em>long</em> time ago, in a galaxy far, far away, there lived a fine young man <strong>unaware</strong> of various things about his past.

The <em> tag specifies that text should be italicized. The <strong> tag specifies that it should be bold.

In addition to the em and strong tags, there are a handful of other basic HTML tags you should know about:

| tag name | used for | tags |
| --- | --- | --- |
| strong | used to make text bold | <strong> or <b> |
| emphasis | used to make text italic | <em> or <i> |
| blockquote | used to quote a long text, can be used for instance for a letter, a poem etc | <blockquote> |
| unordered list | used to create a list with bullets (• item 1 • item 2) | <ul> with <li> |
| ordered list | used to create a numbered list (1. item 1 2. item 2) | <ol> with <li> |
| headings | used to make headings in your document | <h1>, <h2>, ... |

Here is an extended version of the text from above with more HTML tags:

### The Background

A long, long time ago, in a galaxy far, far away, there lived a fine young man unaware of various things about his past, including:

• the Force
• what his father was up to
• how to use a lightsaber.

All that, however, was about to change. Three things were about to happen:

1. he would discover the Force
2. he would learn how to use a lightsaber, and
3. he would meet his father.

### The Update

Long after this fellow lived, a famous movie was made about his life. The movie was shot in Tunisia.

Here is that text with markup:

<h3>The Background</h3>
A long, <em>long</em> time ago, in a galaxy far, far away, there lived a fine young man <strong>unaware</strong> of various things about his past, including:
<ul>
<li>the Force</li>
<li>what his father was up to</li>
<li>how to use a lightsaber.</li>
</ul>
All that, however, was about to change.
Three things were about to happen:
<ol>
<li>he would discover the Force</li>
<li>he would learn how to use a lightsaber, and</li>
<li>he would meet his father.</li>
</ol>
<h3>The Update</h3>
Long after this fellow lived, a famous movie was made about his life. The movie was shot in Tunisia.

# Write in Markdown

NOTE: We strongly recommend saving changes to existing content before enabling the Markdown editor, as unsupported elements will be removed when converting existing HTML content to Markdown.

Users who prefer to write using Markdown can do so by activating the Parsedown Party plugin in their book. In networks where this plugin is installed and book admins are able to activate plugins, you can enable a Markdown editor in your book by doing the following:

1. Click Plugins from the left sidebar menu of your book’s dashboard
2. Click Activate on the Parsedown Party plugin
3. Open the visual editor for a chapter in your book that you’d like to use Markdown in
4. Click ‘Enable‘ next to the ‘Markdown‘ option in the ‘Status & Visibility‘ menu.

The visual/text editor interface will now be replaced by a simple Markdown-based editor. You can revert to the default visual/text editor interface by clicking the Disable button next to the Markdown option in the Status & Visibility menu.

Note: The Parsedown Party plugin is not available on the Pressbooks.com network. Please contact your network manager if you are creating your book on another Pressbooks network and do not see the plugin menu or the Parsedown Party plugin in your book.

1. This is an example of a footnote.
2023-03-24 03:01:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1817248910665512, "perplexity": 6664.04464092325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945242.64/warc/CC-MAIN-20230324020038-20230324050038-00678.warc.gz"}
https://msdn.microsoft.com/en-us/library/bb787396(v=vs.85).aspx
TB_SETANCHORHIGHLIGHT message Sets the anchor highlight setting for a toolbar. Parameters wParam BOOL value that specifies if anchor highlighting is enabled or disabled. If this value is nonzero, anchor highlighting will be enabled. If this value is zero, anchor highlighting will be disabled. lParam Must be zero. Return value Returns the previous anchor highlight setting. If this value is nonzero, anchor highlighting was enabled. If this value is zero, anchor highlighting was disabled. Remarks Anchor highlighting in a toolbar means that the last highlighted item will remain highlighted until another item is highlighted. This occurs even if the cursor leaves the toolbar control. Requirements Minimum supported client Windows Vista [desktop apps only] Windows Server 2003 [desktop apps only] Commctrl.h
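A minimal usage sketch (not from the original documentation): on Windows, the message can be sent to an existing toolbar control with plain SendMessage. The handle hwnd_toolbar below is a placeholder for a real toolbar window handle, and the numeric message value is assumed from commctrl.h (WM_USER + 73); verify it against your SDK headers.

```python
import ctypes

WM_USER = 0x0400
TB_SETANCHORHIGHLIGHT = WM_USER + 73  # assumed value per commctrl.h; check your SDK headers

def set_anchor_highlight(hwnd_toolbar, enable=True):
    """Send TB_SETANCHORHIGHLIGHT to a toolbar window handle (Windows only).

    wParam is nonzero to enable anchor highlighting and zero to disable it;
    lParam must be zero. The return value is the previous setting.
    """
    return ctypes.windll.user32.SendMessageW(
        hwnd_toolbar, TB_SETANCHORHIGHLIGHT, int(bool(enable)), 0
    )
```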
2015-05-23 07:08:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8274247050285339, "perplexity": 13094.788562437298}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927245.60/warc/CC-MAIN-20150521113207-00013-ip-10-180-206-219.ec2.internal.warc.gz"}
https://jp.maplesoft.com/support/help/view.aspx?path=plots%2Fconformal
conformal - Maple Help

plots

conformal - conformal plot of a complex function
conformal3d - conformal plot of a complex function on the Riemann sphere

Calling Sequence

conformal(F, r1, options)
conformal(F, r1, r2, options)
conformal3d(F, r1, options)

Parameters

F - complex procedure or expression
r1, r2 - ranges of the form a..b, or name=a..b
options - (optional) plot options; see plot/options and plot3d/options

Description

• A conformal plot of a complex function F(z) from a+bi to c+di maps a two-dimensional grid $a\le x\le c$, $b\le y\le d$ from the plane into a second (curved) grid determined by the images of the original gridlines under F. The result is a set of curves in the plane, which have the property that they also intersect at right angles at the points where F is analytic.

• The conformal command produces a conformal plot of a complex function F, where F can be an expression or a procedure. The first range, r1, defines the gridlines in the plane that are to be conformally mapped via the complex function F. The second range, r2, is optional and defines the view of the plot. The default view includes the full range of the conformal lines.

• The conformal3d command works in the same way as the conformal command, except that it plots F on the Riemann sphere, and it accepts only the first range parameter.

• Remaining arguments are interpreted as options which are specified as equations of the form option = value.

• To change the number of gridlines displayed, use the grid=[m, n] option, with m and n integers. This option specifies the number of gridlines in both x and y directions that are to be mapped conformally. The default is 11 lines in either direction, making an 11 by 11 grid.

• To change the number of points sampled, use the numxy=[m, n] option, with m and n integers. This option specifies the number of points that are to be plotted in each gridline, with m points in the x direction and n points in the y direction. The default is 21 points in each direction.

• To specify the color of the gridlines, use the color=c option, where c is a valid color as described on the plot/color help page. The value c can also be a list of two colors; in this case, each color is used for gridlines in a single direction. With the conformal3d command, the spherecolor=c option may also be used to specify the color of the sphere.

• To map a grid defined in a different coordinate system, use the coords=t option, where t is one of the coordinate systems listed in the plot/coords and plot3d/coords help pages. With this option, the gridlines in the default Cartesian coordinate system are first mapped to the new coordinate system and then the conformal mapping is applied.

• There are a number of other standard 2-D and 3-D plot options that are available with the conformal and conformal3d commands. These include specifications for style, and the number of horizontal and vertical tickmarks. For more details, see plot/options and plot3d/options.

Examples

> $\mathrm{with}\left(\mathrm{plots}\right):$

> $\mathrm{conformal}\left({z}^{2},z=0..2+2I\right)$

When r2 is given, it specifies the view.
That is, the following produces the same plot as conformal(1/z, z=-1-I..1+I, view=-6-6*I..6+6*I, color=magenta, numxy=[80,80]): > $\mathrm{conformal}\left(\frac{1}{z},z=-1-I..1+I,-6-6I..6+6I,\mathrm{color}=\mathrm{magenta},\mathrm{numxy}=\left[80,80\right]\right)$ > $\mathrm{conformal}\left(\mathrm{cos}\left(z\right),z=0..2\mathrm{\pi }+\mathrm{\pi }I,\mathrm{grid}=\left[8,8\right],\mathrm{numxy}=\left[50,50\right]\right)$ > $\mathrm{conformal}\left({z}^{3},z=0..2+2I,\mathrm{tickmarks}=\left[3,6\right]\right)$ > $\mathrm{conformal}\left(\frac{2z-1}{2-z},z=-2-2I..2+2I,-2-2I..2+2I\right)$ > $\mathrm{conformal}\left(z+\frac{1}{z},z=-3-3I..3+3I,-3-3I..3+3I\right)$ > $\mathrm{conformal}\left(\frac{z-I}{z+I},z=-3-3I..3+3I,-4-4I..4+4I,\mathrm{grid}=\left[30,30\right],\mathrm{style}=\mathrm{line}\right)$ > $\mathrm{conformal}\left(\mathrm{sqrt}\left(z\right),z=-\frac{\mathrm{\pi }I}{2}..1+\frac{\mathrm{\pi }I}{2},\mathrm{coords}=\mathrm{polar}\right)$ > $\mathrm{conformal3d}\left(\mathrm{cos}\left(z\right),z=0..2\mathrm{\pi }+I\mathrm{\pi }\right)$ The commands to create the plots from the Plotting Guide are > $\mathrm{conformal}\left({z}^{3},z=-1-I..1+I\right)$ > $\mathrm{conformal3d}\left(\mathrm{cos}\left(z\right),z=0..2\mathrm{\pi }+I\mathrm{\pi },\mathrm{color}=\left["DeepPink","Yellow"\right],\mathrm{spherecolor}="black"\right)$
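The same kind of picture can be produced outside Maple. The sketch below (added for illustration, using NumPy and Matplotlib; it is not part of the Maple help page) draws the image of the default 11-by-11 grid under f(z) = z^2 over the range 0..2+2I, i.e. a rough analogue of conformal(z^2, z = 0 .. 2+2*I):

```python
import numpy as np
import matplotlib.pyplot as plt

f = lambda z: z ** 2                # the complex function to plot
lines = np.linspace(0, 2, 11)       # 11 gridlines in each direction (the default)
points = np.linspace(0, 2, 201)     # points sampled along each gridline

for c in lines:
    w = f(points + 1j * c)          # image of the horizontal gridline Im(z) = c
    plt.plot(w.real, w.imag, "b", lw=0.7)
    w = f(c + 1j * points)          # image of the vertical gridline Re(z) = c
    plt.plot(w.real, w.imag, "r", lw=0.7)

plt.gca().set_aspect("equal")
plt.show()
```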
2023-03-22 18:18:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 14, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9017273783683777, "perplexity": 1107.097465677742}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.74/warc/CC-MAIN-20230322180852-20230322210852-00679.warc.gz"}
https://math.stackexchange.com/questions/1228832/im-having-a-conceptual-issue-with-similarity-matrices
# I'm having a conceptual issue with similarity matrices

So I know that $A = T^{-1}AT \implies T \text{ is a similarity transformation matrix}$. Say $A = \begin{pmatrix}9 & 13 \\ -3 & -3\end{pmatrix}$, then how would I go about finding T without using eigenvalues? By sheer luck, I managed to find $T = \begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix}$ by picking it at random, plugging in and seeing it satisfy the conditions. So I thought to myself that ANY invertible 2x2 matrix would work because if you multiply $A$ by $T$ and then by $T$'s inverse, you would end up where you started (this is obviously one of my conceptual issues). So I picked $T$ to be $\begin{pmatrix}2 & 1 \\ 3 & 4\end{pmatrix}$, but I found that it didn't work for some reason. Why is this? I must be missing something fundamental here. I often find myself struggling with the abstractness of Linear Algebra and I'm not sure what it is.

• But $\;A=T^{-1}AT\iff TA=AT\iff T\;$ commutes with $\;A\;$, and there are lots of possible matrices like that, among others the trivial ones: scalar matrices of the form $\;k\cdot I_2\;,\;\;k\in\Bbb F=$ our definition field – Timbuc Apr 10 '15 at 16:58
• Look more closely at your definitions. The idea of similarity is to establish a relation between matrices $A$ and $B$ (rather than only $A$ and itself) that are connected by conjugation: $A = T^{-1}BT$. When matrices $A$ and $B$ are similar, they represent the same transformation with respect to different bases. The matrix $T$ transforms one basis into the other. – Sammy Black Apr 10 '15 at 17:01
• @SammyBlack So how would I algorithmically go about finding T in this case? I only found ((1,0)(0,1)) by accident. – aidandeno Apr 10 '15 at 17:06
• I'm actually not sure what question you're asking now. This? Given a matrix $A$ find a similar matrix $D$ that is diagonal; that is, find diagonal $D$ and invertible $T$ such that $D = T^{-1}AT$. – Sammy Black Apr 10 '15 at 17:16
• @SammyBlack: If $A = \begin{pmatrix}9&13\\-3&-3 \end{pmatrix}$, find a $T$ such that $A=T^{-1}AT$ – aidandeno Apr 10 '15 at 17:19

Finding solutions $T$ to $A = T^{-1}AT$ is, to my knowledge, not a very fun procedure. Left-multiplying by $T$, we get the equivalent equation that $TA = AT$, so that $A$ and $T$ commute. If, as in your example, $$A = \begin{pmatrix}9 & 13 \\ -3 & -3\end{pmatrix},$$ then we're looking for a matrix $T = \begin{pmatrix}a & b \\ c & d\end{pmatrix}$ so that $$TA = \begin{pmatrix}a & b \\ c & d\end{pmatrix}\begin{pmatrix}9 & 13 \\ -3 & -3\end{pmatrix} = \begin{pmatrix}9a - 3b&13a - 3b\\9c - 3d&13c-3d\end{pmatrix}$$ while $$AT = \begin{pmatrix}9 & 13 \\ -3 & -3\end{pmatrix}\begin{pmatrix}a & b \\ c & d\end{pmatrix} = \begin{pmatrix}9a + 13c&9b + 13d\\-3a-3c&-3b-3d\end{pmatrix}.$$ Now, as we want $TA = AT$, we can get a system of $4$ equations in $4$ unknowns: \begin{align*} 9a - 3b &= 9a + 13c \\ 13a - 3b &= 9b + 13d \\ 9c - 3d &= -3a-3c \\ 13c-3d &= -3b-3d, \end{align*} and consulting WolframAlpha, we get solutions parameterized by $c = \dfrac{-3b}{13}$ and $d = a - \dfrac{12b}{13}$. (Note that the matrix you found is obtained when $b = 0$ and $a = 1$). As you can see, it's not a very fun process! This is the most elementary way I can think of to answer the question. There are potentially more efficient methods known, especially if $A$ has some 'nice' properties. Generally, I think, it's still not a question whose concrete answer is nicely computable.

• You're right, that's not fun at all. But it made perfect sense. Thank you.
– aidandeno Apr 10 '15 at 17:50 • You're very welcome! I really should make it clear that this is one way to simply list the matrices $T$ for which $A = T^{-1}AT$; it certainly doesn't shed much light on why that relationship would hold, which is probably the right way to think about the situation. Unfortunately, I can't really comment on the right way to think about it. – pjs36 Apr 10 '15 at 18:45 If you multiply $T^{-1}AT$, then multiplication of matrices is not commutative so you won't necessarily get $A$.The set of matrices you CAN get from such a product form what's called the conjugacy class of $A$, if you consider all possible $T$.
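A quick symbolic check of this parameterization (added here, not part of the original answers; uses SymPy):

```python
import sympy as sp

a, b, c, d = sp.symbols("a b c d")
A = sp.Matrix([[9, 13], [-3, -3]])
T = sp.Matrix([[a, b], [c, d]])

# Solve the commutation condition T*A = A*T for c and d in terms of a and b.
sols = sp.solve(list(T * A - A * T), [c, d], dict=True)
print(sols)  # expect c = -3*b/13 and d = a - 12*b/13

# Spot-check one member of the family (a = 13, b = 13 gives c = -3, d = 1).
T0 = sp.Matrix([[13, 13], [-3, 1]])
print(T0 * A - A * T0)    # zero matrix, so T0 commutes with A
print(T0.inv() * A * T0)  # equals A again, i.e. A = T0^(-1) A T0
```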
2019-10-24 02:37:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9812681674957275, "perplexity": 160.56019503357143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987838289.72/warc/CC-MAIN-20191024012613-20191024040113-00227.warc.gz"}
https://guitarknights.com/guitar-goddess-guitar-player.html
The musical theory of chords is reviewed, to provide terminology for a discussion of guitar chords. Three kinds of chords, which are emphasized in introductions to guitar-playing,[10][11] are discussed. These basic chords arise in chord-triples that are conventional in Western music, triples that are called three-chord progressions. After each type of chord is introduced, its role in three-chord progressions is noted. School of Rock's highly-trained guitar instructors are experts when it comes to inspiring teens to learn to play the guitar like a pro. Our proven formula for learning to play the guitar effectively and quickly starts with private guitar lessons plus group rehearsals in a safe and friendly environment. All teens are enrolled in or audition for one of the following programs: Rock 101, Performance, House Band, and AllStars. A "guitar pick" or "plectrum" is a small piece of hard material generally held between the thumb and first finger of the picking hand and is used to "pick" the strings. Though most classical players pick with a combination of fingernails and fleshy fingertips, the pick is most often used for electric and steel-string acoustic guitars. Though today they are mainly plastic, variations do exist, such as bone, wood, steel or tortoise shell. Tortoise shell was the most commonly used material in the early days of pick-making, but as tortoises and turtles became endangered, the practice of using their shells for picks or anything else was banned. Tortoise-shell picks made before the ban are often coveted for a supposedly superior tone and ease of use, and their scarcity has made them valuable. There's an abundance of guitar information out there on the web, some good, some not. I stumbled across Justin Sandercoe's site a year ago and now tell everyone about it. The lessons are conveyed so clearly, concisely and in the most congenial way. The site is laid out logically as well so you can to go straight to your area of interest... beginner, blues, rock, folk, jazz, rhythm, fingerpicking... it's all there and more. Spend ten minutes with Justin and you'll not only play better but feel better too. From novice to know-it-all, everyone will learn something from Sandercoe. {"eVar4":"shop: accessories","eVar5":"shop: accessories: strings","pageName":"[gc] shop: accessories: strings","reportSuiteIds":"guitarcenterprod","eVar3":"shop","prop2":"[gc] shop: accessories: strings","prop1":"[gc] shop: accessories","evar51":"default: united states","prop10":"category","prop11":"strings","prop5":"[gc] shop: accessories: strings","prop6":"[gc] shop: accessories: strings","prop3":"[gc] shop: accessories: strings","prop4":"[gc] shop: accessories: strings","channel":"[gc] shop","linkInternalFilters":"javascript:,guitarcenter.com","prop7":"[gc] sub category"} Fretboards are most commonly made of rosewood, ebony, maple, and sometimes manufactured using composite materials such as HPL or resin. See the section "Neck" below for the importance of the length of the fretboard in connection to other dimensions of the guitar. The fingerboard plays an essential role in the treble tone for acoustic guitars. The quality of vibration of the fingerboard is the principal characteristic for generating the best treble tone. For that reason, ebony wood is better, but because of high use, ebony has become rare and extremely expensive. Most guitar manufacturers have adopted rosewood instead of ebony. 
# I strongly recommend beginner guitar players to use the Uberchord app (click for free download) for practicing chord progressions and chord changes, and use the real-time feedback to improve your playing skills. While, I’ll help you expedite the process of grabbing chords confidently on the neck and get you on your way to playing along expertly with your favourite band, or better yet, running a band of your own. Adjusting the truss rod affects the intonation of a guitar as well as the height of the strings from the fingerboard, called the action. Some truss rod systems, called double action truss systems, tighten both ways, pushing the neck both forward and backward (standard truss rods can only release to a point beyond which the neck is no longer compressed and pulled backward). The artist and luthier Irving Sloane pointed out, in his book Steel-String Guitar Construction, that truss rods are intended primarily to remedy concave bowing of the neck, but cannot correct a neck with "back bow" or one that has become twisted.[page needed] Classical guitars do not require truss rods, as their nylon strings exert a lower tensile force with lesser potential to cause structural problems. However, their necks are often reinforced with a strip of harder wood, such as an ebony strip that runs down the back of a cedar neck. There is no tension adjustment on this form of reinforcement. The previously discussed I-IV-V chord progressions of major triads is a subsequence of the circle progression, which ascends by perfect fourths and descends by perfect fifths: Perfect fifths and perfect fourths are inverse intervals, because one reaches the same pitch class by either ascending by a perfect fourth (five semitones) or descending by a perfect fifth (seven semitones). For example, the jazz standard Autumn Leaves contains the iv7-VII7-VIM7-iiø7-i circle-of-fifths chord-progression;[80] its sevenths occur in the tertian harmonization in sevenths of the minor scale.[81] Other subsequences of the fifths-circle chord-progression are used in music. In particular, the ii-V-I progression is the most important chord progression in jazz music. Learning guitar is a lot of fun, and with the right lessons anyone can become a great guitar player. However, to be successful it's important to pick the right learning method and stay focused. We designed our Core Learning System to be a step-by-step system that keeps beginners on-track and having fun. Give it a try today by becoming a Full Access member. Justin Sandercoe has thought long and hard about how to teach people to play the guitar, and how to do this over the internet. He has come up with a well-designed series of courses that will take you from nowhere to proficiency. I tried to learn how to play years ago, using books, and got nowhere. I've been using Justin's site for just over a year and I feel I've made real progress. What's more, Justin offers his lessons for free - a boon for any young player who has the urge to play, but whose pockets are empty. I've seen and used other sites for learners: none of them offer as clearly marked a road as Justin does. Ive been playing guitar for about 3 years, and this is the best song book I have ever learned from. Songs range from sweet home alabama by lynard skynard all the way to raining blood by slayer. All of the songs are accurate and complete with notes, tabs, lyrics, and copyright info. If you are like me, and you prefer to learn songs the way they were meant to be played than this book is for you. 
There's no other instrument with as much presence and cultural identity as the guitar. Virtually everyone is familiar with tons of different guitar sounds, from intense metal shredding to soft and jaunty acoustic folk music. And behind all those iconic guitar tones are great sets of strings. Just like a saxophonist changes reeds from time to time or a drummer replaces sticks, putting new strings on your guitar every so often is an important part of owning and playing one. You need to place one finger on whatever fret you want to bar and hold it there over all of the strings on that fret. The rest of your fingers will act as the next finger down the line (second finger barring, so third finger will be your main finger, and so on). You can also buy a capo, so that you don't have to deal with the pain of the guitar's strings going against your fingers. The capo bars the frets for you. This also works with a ukulele. ###### For the standard tuning, there is exactly one interval of a major third between the second and third strings, and all the other intervals are fourths. The irregularity has a price - chords cannot be shifted around the fretboard in the standard tuning E-A-D-G-B-E, which requires four chord-shapes for the major chords. There are separate chord-forms for chords having their root note on the third, fourth, fifth, and sixth strings.[19] In contrast, regular tunings have equal intervals between the strings,[20] and so they have symmetrical scales all along the fretboard. This makes it simpler to translate chords. For the regular tunings, chords may be moved diagonally around the fretboard. The diagonal movement of chords is especially simple for the regular tunings that are repetitive, in which case chords can be moved vertically: Chords can be moved three strings up (or down) in major-thirds tuning and chords can be moved two strings up (or down) in augmented-fourths tuning. Regular tunings thus appeal to new guitarists and also to jazz-guitarists, whose improvisation is simplified by regular intervals. In music, a guitar chord is a set of notes played on a guitar. A chord's notes are often played simultaneously, but they can be played sequentially in an arpeggio. The implementation of guitar chords depends on the guitar tuning. Most guitars used in popular music have six strings with the "standard" tuning of the Spanish classical-guitar, namely E-A-D-G-B-E' (from the lowest pitched string to the highest); in standard tuning, the intervals present among adjacent strings are perfect fourths except for the major third (G,B). Standard tuning requires four chord-shapes for the major triads. Ernie Ball is the world's leading manufacturer of premium electric, acoustic, and classical guitar strings, bass strings, mandolin, banjo, pedal steel strings and guitar accessories. Our strings have been played on many of the best-selling albums of all time and are used by some of history’s greatest musicians including Paul McCartney, Eric Clapton, Jimmy Page, Slash, The Rolling Stones, Angus Young, Eagles, Jeff Beck, Pete Townshend, Aerosmith, Metallica, and more. I would especially like to stress the gentle approach Justin takes with two key aspects that contributed to my development as a musician - music theory and ear training. 
Justin has succeeded in conveying the importance and profoundness of understanding music both theoretically and through your ears while maintaining a simple and accessible approach to them, all while sticking to what is ultimately the most important motto: 'If it sounds good, it is good'. Further simplifications occur for the regular tunings that are repetitive, that is, which repeat their strings. For example, the E-G♯-c-e-g♯-c' M3 tuning repeats its octave after every two strings. Such repetition further simplifies the learning of chords and improvisation;[71] This repetition results in two copies of the three open-strings' notes, each in a different octave. Similarly, the B-F-B-F-B-F augmented-fourths tuning repeats itself after one string.[73] If you are starting or just stuck in rut, this is the site for you. I had been playing three years and was not advancing as I wanted to, then I saw an ad for Paul Gilbert lessons at Artistworks. I decided to check it out and I have to say, if I had this from the beginning, I would be a much better guitarist. The main reason why, is that Paul is known for his fast playing, but he focuses you on rhythm to start off. That is were I had lacked. I thought I was a decent rhythm player, but I was not. So much thanks to Artistworks and Paul for the great site. What ultimately sets these rock guitar lessons apart from other offerings is the ability to submit a video for review using the ArtistWorks Video Exchange Learning® platform. Paul reviews each submission and records a video response, offering specific guidance to take your guitar playing to the next level. All students can access the Video Exchange library and watch each other’s interactions with Paul. This library is constantly expanding and may contain the key to unlock your playing. What's the best way to learn guitar? No matter which method you choose, or what style of music you want to play, these three rules from guitar teacher Sean L. are sure to put you on the road to success... Learning guitar can be a daunting task when first approached. For many it is seen as only for the musically adept, but in reality anyone can learn guitar. By following these three simple rules, anyone can become a great guitarist. 1. Set Goals There is no one path to take for learning First off, there are two more techniques I want to talk about. These are fret placement and finger posture. Place your first finger on the first fret of the B string. For fret placement, you’ll want to have your finger right behind the fret. In the video, you can see that the further away from the fret I place my finger, the more buzz the note has. Depending on the program, School of Rock's guitar lessons can cost from around $150 to$350 per month. Exact prices vary between locations. What's included? Unlike most hourly guitar lessons, our programs include weekly private guitar lessons and group rehearsals that inspire confidence and teamwork. Guitar students are also welcome to use our facilities whenever we're open, even if they just want to hangout and learn from or collaborate with other musicians. The California Conservatory of Music offers guitar lessons with the most qualified teachers in the Bay Area at both our Santa Clara and Redwood City schools. Whether you're looking to start your young child with Suzuki guitar lessons, preparing for a college audition, or getting reading for an upcoming concert, we can assist you. 
We offer the Bay Area’s most comprehensive guitar lessons which include technique, sight reading, music theory, and in addition to the private lessons, we offer ensemble, repertoire, and theory classes on the weekends. For students under the age of 8, we ask the parents to be involved in their guitar lessons and practice at home. To better help parents develop into this role, the first three lessons are dedicated to the parent education class. The child can then begin their guitar lessons. This helps ensure the student’s success and motivation.

The top, back and ribs of an acoustic guitar body are very thin (1–2 mm), so a flexible piece of wood called lining is glued into the corners where the rib meets the top and back. This interior reinforcement provides 5 to 20 mm of solid gluing area for these corner joints. Solid linings are often used in classical guitars, while kerfed lining is most often found in steel string acoustics. Kerfed lining is also called kerfing because it is scored, or "kerfed" (incompletely sawn through), to allow it to bend with the shape of the rib. During final construction, a small section of the outside corners is carved or routed out and filled with binding material on the outside corners and decorative strips of material next to the binding, which are called purfling. This binding serves to seal off the end grain of the top and back. Purfling can also appear on the back of an acoustic guitar, marking the edge joints of the two or three sections of the back. Binding and purfling materials are generally made of either wood or plastic.

As their categorical name suggests, extended chords indeed extend seventh chords by stacking one or more additional third-intervals, successively constructing ninth, eleventh, and finally thirteenth chords; thirteenth chords contain all seven notes of the diatonic scale. In closed position, extended chords contain dissonant intervals or may sound supersaturated, particularly thirteenth chords with their seven notes. Consequently, extended chords are often played with the omission of one or more tones, especially the fifth and often the third,[92][93] as already noted for seventh chords; similarly, eleventh chords often omit the ninth, and thirteenth chords the ninth or eleventh. Often, the third is raised an octave, mimicking its position in the root's sequence of harmonics.[92]

Kyser®'s nickel-plated electric guitar strings give you a warm, rich, full sound. They are precision wound around a carefully drawn hex-shaped carbon steel core. The outer nickel-plated wrap maintains constant contact with the hex core, resulting in a string that vibrates evenly for maximum sustain, smooth sound, and easy bending.

Hello! My name is Jacob and I am a musician in the Boston area. I began playing guitar when I was seven and piano when I was nine. My father was a Berklee College of Music student and my mother sang in the Lexington pops, and so ever since I was young I knew that music was something I wanted to make a career out of. I would practice my instruments for hours each day, and started writing my own songs.

The ratio of the spacing of two consecutive frets is $\sqrt[12]{2}$ (the twelfth root of two). In practice, luthiers determine fret positions using the constant 17.817, an approximation to $1/(1-1/\sqrt[12]{2})$.
If the nth fret is a distance x from the bridge, then the distance from the (n+1)th fret to the bridge is x-(x/17.817).[15] Frets are available in several different gauges and can be fitted according to player preference. Among these are "jumbo" frets, which have much thicker gauge, allowing for use of a slight vibrato technique from pushing the string down harder and softer. "Scalloped" fretboards, where the wood of the fretboard itself is "scooped out" between the frets, allow a dramatic vibrato effect. Fine frets, much flatter, allow a very low string-action, but require that other conditions, such as curvature of the neck, be well-maintained to prevent buzz. Getting to grips with how chords are formed gives you a basic introduction to music theory and helps you understand the ways you can alter them to create more interesting sounds. All chords are built from certain notes in scales. The C major scale is the easiest, because it just runs C, D, E, F, G, A and B. These notes are numbered (usually using Roman numerals) in that order, from one (I) to seven (VII). Archtop guitars are steel-string instruments in which the top (and often the back) of the instrument are carved, from a solid billet, into a curved, rather than a flat, shape. This violin-like construction is usually credited to the American Orville Gibson. Lloyd Loar of the Gibson Mandolin-Guitar Mfg. Co introduced the violin-inspired "F"-shaped hole design now usually associated with archtop guitars, after designing a style of mandolin of the same type. The typical archtop guitar has a large, deep, hollow body whose form is much like that of a mandolin or a violin-family instrument. Nowadays, most archtops are equipped with magnetic pickups, and they are therefore both acoustic and electric. F-hole archtop guitars were immediately adopted, upon their release, by both jazz and country musicians, and have remained particularly popular in jazz music, usually with flatwound strings. Spiral bound guitar book arrived on time as promised. As reference book for guitar chords, it's quite convenient to use for all levels of guitar expertise. It also provides alternatives to play a certain chord. It's easy to follow and to use. Using the tabs near the edge of the page, chords are arranged from A to G & "other chords". Obviously, the guitar greenhorn needs to learn a few basic chords first, and this book builds on those skills. Although the first edition was published in 2006, guitar chords don't really change, unlike other fields of study, so it's relevant today as it was years ago. I deducted 1 star because the back cover arrived crumpled, and I like to keep my books pristine. This book is supposed to be brand new. The person who packed the box was not careful. I still recommend this guitar book as a quick reference. It's faster to use this than look up chords individually on the web. Fretboards are most commonly made of rosewood, ebony, maple, and sometimes manufactured using composite materials such as HPL or resin. See the section "Neck" below for the importance of the length of the fretboard in connection to other dimensions of the guitar. The fingerboard plays an essential role in the treble tone for acoustic guitars. The quality of vibration of the fingerboard is the principal characteristic for generating the best treble tone. For that reason, ebony wood is better, but because of high use, ebony has become rare and extremely expensive. Most guitar manufacturers have adopted rosewood instead of ebony. 
# Adding a minor seventh to a major triad creates a dominant seventh (denoted V7). In music theory, the "dominant seventh" described here is called a major-minor seventh, emphasizing the chord's construction rather than its usual function.[27] Dominant sevenths are often the dominant chords in three-chord progressions,[18] in which they increase the tension with the tonic "already inherent in the dominant triad".[28] I was lucky enough to meet Justin at the Guitar Institute during a summer school in 2004, and to have some private lessons with him afterwards.  He was the teacher who kickstarted my guitar career and persuaded me that I was ready to join a band.  That was 14 years ago and many dozens of gigs later.  I’m now just finishing a degree in Popular Music Performance.  Justin's online lessons are easy to follow and he has a manner about him which makes you believe that you can achieve.  Where he demonstrates songs, I have found his versions to be consistently more accurate and easy to follow than those of any other online teacher.  On this website you really will find all the skills and information you need to become an excellent musician.  Many thanks. Ian. Inlays are visual elements set into the exterior surface of a guitar, both for decoration and artistic purposes and, in the case of the markings on the 3rd, 5th, 7th and 12th fret (and in higher octaves), to provide guidance to the performer about the location of frets on the instrument. The typical locations for inlay are on the fretboard, headstock, and on acoustic guitars around the soundhole, known as the rosette. Inlays range from simple plastic dots on the fretboard to intricate works of art covering the entire exterior surface of a guitar (front and back). Some guitar players have used LEDs in the fretboard to produce unique lighting effects onstage. Fretboard inlays are most commonly shaped like dots, diamond shapes, parallelograms, or large blocks in between the frets. Modern pickups are tailored to the sound desired. A commonly applied approximation used in selection of a pickup is that less wire (lower electrical impedance) gives brighter sound, more wire gives a "fat" tone. Other options include specialized switching that produces coil-splitting, in/out of phase and other effects. Guitar circuits are either active, needing a battery to power their circuit, or, as in most cases, equipped with a passive circuit. Whether you just started guitar lessons or you've been playing for a while, you may be itching to learn some new songs and take on some new challenges. You might be wondering: where can I go from here? That's where alternate guitar tunings come in! With this guide from Michael L., you'll learn how alternate guitar tunings can take your playing to the next level... One of the amazing things about the guitar is its versatility. Not only can you play rhythm and/or melody in different genres, I was lucky enough to meet Justin at the Guitar Institute during a summer school in 2004, and to have some private lessons with him afterwards.  He was the teacher who kickstarted my guitar career and persuaded me that I was ready to join a band.  That was 14 years ago and many dozens of gigs later.  I’m now just finishing a degree in Popular Music Performance.  Justin's online lessons are easy to follow and he has a manner about him which makes you believe that you can achieve.  Where he demonstrates songs, I have found his versions to be consistently more accurate and easy to follow than those of any other online teacher.  
On this website you really will find all the skills and information you need to become an excellent musician.  Many thanks. Ian. With the massive range of options available, you'd have to spend the whole day here to go through every one. There are six and twelve-strings, models specifically made for beginners, limited edition double necks; you name it, you'll find it! For a real classic, strap on a Rickenbacker 330 electric guitar. A staple in 60's mod culture, the unique hollowbody construction, slim neck and contoured body make the Rickenbacker 330 so easy to play that it has held the status as one of the all-time greatest guitars for decades. All courses are available as instant downloads, on disc, or as streaming video on our website and mobile apps for iOS and Android. Study anywhere, anytime in the format of your choice. Interactive features and functions include standard notation, Power Tab, Guitar Pro, jam tracks, playback controls, video looping, slow-mo, tuner, metronome and other learning tools.
2019-07-17 07:24:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18289686739444733, "perplexity": 4027.9489144972263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525094.53/warc/CC-MAIN-20190717061451-20190717083451-00440.warc.gz"}
https://iacr.org/cryptodb/data/author.php?authorkey=3499
## CryptoDB

### Chunxiang Gu

#### Publications

2008, EPRINT: A ring signature allows a user from a set of possible signers to convince the verifier that the author of the signature belongs to the set, but the identity of the author is not disclosed. It protects the anonymity of a signer since the verifier knows only that the signature comes from a member of a ring, but doesn't know exactly who the signer is. This paper proposes a new ID-based ring signature scheme based on the bilinear pairings. The new scheme provides signatures with constant size, without counting the list of identities to be included in the ring. When using elliptic curve groups of order a 160-bit prime, our ring signature size is only about 61 bytes. There is no pairing operation involved in the ring-sign procedure, and there are only three pairing operations involved in the verification procedure. So our scheme is more efficient compared to schemes previously proposed. The new scheme can be proved secure under the hardness assumption of the k-Bilinear Diffie-Hellman Inverse problem, in the random oracle model.

2006, EPRINT: In this paper, we propose an efficient ID-based signature scheme based on pairing. The number of pairing operations involved in the verification procedure is one. Our scheme is proved secure against existential forgery on adaptively chosen message and ID attack under the hardness assumption of the computational Diffie-Hellman problem, in the random oracle model.

2006, EPRINT: Public key encryption with keyword search (PEKS) enables user Alice to send a secret key $T_W$ to a server that will enable the server to locate all encrypted messages containing the keyword $W$, but learn nothing else. In this paper, we propose a new PEKS scheme based on pairings. There is no pairing operation involved in the encryption procedure. Then, we provide further discussion on removing the secure channel from PEKS, and present an efficient secure-channel-free PEKS scheme. Our two new schemes can be proved secure in the random oracle model, under the appropriate computational assumptions.

2006, EPRINT: This paper proposes a new ID-based proxy signature scheme based on the bilinear pairings. The number of pairing operations involved in the verification procedure of our scheme is only one, so our scheme is comparatively efficient. The new scheme can be proved secure under the hardness assumption of the k-Bilinear Diffie-Hellman Inverse problem, in the random oracle model.

2006, EPRINT: Signature schemes with message recovery were widely investigated a decade ago in the literature, but the first ID-based signature with message recovery did not appear until 2005. In this paper, we first point out and revise one small but important problem which occurs in the previous ID-based signature with message recovery scheme. Then, in a completely different setting, we propose a new ID-based signature scheme with message recovery. Our scheme is much more efficient than the previous scheme. In our scheme (as well as other signature schemes with message recovery), the message itself is not required to be transmitted together with the signature; it turns out to have the least data size of communication cost compared with generic (not short) signature schemes. Although the communication overhead is still larger than Boneh et al.'s short signature (which is not ID-based), the computational cost of our scheme is lower than that of Boneh et al.'s scheme in the verification phase.
We will also prove that the proposed scheme is provably secure in the random oracle model under the CDH assumption.

#### Coauthors

- Takeshi Okamoto (1)
- Eiji Okamoto (1)
- Xiaoyu Pan (1)
- Raylin Tso (1)
- Yajuan Zhang (1)
- Yuefei Zhu (4)
2021-09-17 21:29:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5264613032341003, "perplexity": 993.8387861718064}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055808.78/warc/CC-MAIN-20210917212307-20210918002307-00157.warc.gz"}
https://codereview.meta.stackexchange.com/questions/2603/when-are-edits-too-minor-to-approve
# When are edits too minor to approve? Are there edits that are too small to make or to approve? Or is any improvement of a question or post good? • minor spelling mistakes, typos, or grammatical mistakes • minor formatting mistakes (for example not using code tags for variable names) • etc (removing smilies, removing thanks, ...) I'm sure that I at some point read a question about this topic, but now I'm not able to find it anymore, so maybe this has not been discussed before? • edits post... did I fall for the trap? D: – Jamal Mod Oct 7 '14 at 20:04 • @TopinFrassi yes, I noticed that as well (which is why I'm asking :) ). I approved some of them, because there's no rule against those edits. On the one hand, I think it's not all that bad: Even slight improvements make a post better, not worse. On the other hand, it pollutes the active question tab. – tim Oct 7 '14 at 20:11 • Related Meta Stack Exchange post: Why are trivial edits discouraged? Oct 7 '14 at 21:47 • Related Meta Stack Exchange posts that suggest the 'trivial edit' rules are changing: Blog Post with new workflow Oct 8 '14 at 10:50 • Especially concernig the etc. Section: removing fluff has never actually been deemed too minor, see this answer of mine on MSO Oct 14 '14 at 23:23 The whole issue of editing posts is currently being discussed at a Stack Exchange 'site wide' level. Just hours after this question as asked, Stack Exchange introduced the second of potentially many new features and processes in the Suggested Edit workflow (I consider the first to be the auto-convert-to-wiki process of many-times-edited posts being removed). Part of the revised attitude toward editing is that editing is actively encouraged, "... while ensuring that truly helpful edits – even small ones – are more consistently approved." # No more "Too Trivial" Reject reason! The new workflow, introduced hours ago, no longer has a 'too trivial' reject reason. The threshold is now: "no improvement whatsoever", which is a lower threshold. It is my feeling that the current thresholds for what is considered to be 'too minor' needs to be revisited. Fixing typos and spelling mistakes in posts (no matter how old), will lead to improved quality over the site, improved visibility for questions that matter (why were you looking at the post if it did not matter?), leading to more eyes-on-code. The abuse-side of editing - editing just to bump, or vandalize, or spam - are still managed through the review queues, and still managed by the increased visibility (bumping) of the post. The downside is easily controlled. I think a new attitude toward editing, suggesting edits, and approving edits is in order. When you perceive a suggested edit as too minor, consider looking for the rest of the post to see if there were obvious things that could've been improved. Of course, you can accept/reject and improve yourself, but it's not necessary. As this is a suggested edit, users should be taking more time to make sure the post is really improved. If there's nothing else you see that could've been improved, then it may be worth approving. If not, you could consider rejecting edit, but then you'd have to come up with a custom reason. Then you can ask yourself: does this edit still improve something, and was it worth the time to review? Another thing to keep in mind: there is a limited queue for suggested edits, so it's not good if a user is suggested a lot of minor edits, while there are more substantial suggested edits being added. 
However, given the low number of suggested edits seen regularly, that's probably not an issue on this site. But it could still be an indication that someone is not taking too much care in their suggested edits. • what is the limit for the queue? Normally, we get a really small amount of suggested edits, but right now, someone is quite obviously looking for common spelling mistakes (such as wierd), and correcting them in all posts. Could this lead to a full queue? – tim Oct 7 '14 at 20:28 • @tim: Probably not, unless no one else was around and the edits kept coming in. I don't know the exact number, either. – Jamal Mod Oct 7 '14 at 20:30 Personally, I think an edit should add value to the question/answer. If the edit consists of correcting a spelling mistake, I don't see how it makes the question any better. I think that fixing tags, formatting code and removing the casual thanks/hello/etc.. or fixing a plain wrong post (full of spelling mistakes for example) is okay. • "If the edit consists of correcting a spelling mistake, I don't see how it makes the question any better." - I hate these kinds of things and not being able to fix them due to arbitrary barriers enrages me to the point where I leave. Oct 11 '14 at 0:01 • I understand your point, though now I think the question isn't important because of the changes SE brought to the edit system. If the system says no edits are too minor, then be it and it's fine with me :) Oct 11 '14 at 2:30 For users who don't yet have editing privileges, the editing screen will say: Your edit will be placed in a queue until it is peer reviewed. We welcome all constructive edits, but please make them substantial. Avoid trivial edits unless absolutely necessary. Users should avoid making trivial edit suggestions, and reviewers should reject such trivial suggestions. This is especially true for old posts that are no longer featured on the front page. Stack Exchange policy for suggested edits has changed! The new rules now encourage any suggestion that has a positive impact. Examples of good edit suggestions include: • Tagging improvements • Correcting a mis-worded post when it is obvious from the context that the author meant something else (e.g. "override" vs. "overload") • Changing a generic title to be descriptive • Reorganizing thoughts to flow more logically • Reducing noise (discarding irrelevant introductory sentences, sorry, thanks, etc.) • Typesetting expressions in MathJax — only if it enhances comprehension or eliminates use of an image
2022-01-27 21:14:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4267703592777252, "perplexity": 2002.2909987063676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305288.57/warc/CC-MAIN-20220127193303-20220127223303-00487.warc.gz"}
http://www.edugeek.net/forums/educational-software/78884-scratch-1-4-visible-drives-image-file.html
1. ## Scratch 1.4 Visible drives and Image file

Hi, I have put Scratch 1.4 on a shared drive and edited the scratch.ini to show only drives U and S, but the C drive is still visible. Everything is locked down, but we don't want pupils browsing the C drive. Also, when opening Scratch on the user machine it always asks for the image file, even though it is in the install dir (standard install apart from location) on the shared drive. I have looked on the Scratch forums and searched here, but the only reference I can find is adding VisibleDrives to the ini file. TIA

2. That's how we do it. Things sound like you've got it right, but here's a copy of our ini file; we set the drive letter to H: (which is the pupils' home folder). No other drives are visible in Scratch.

[Global]
DeferUpdate=1
ShowConsole=0
DynamicConsole=0
ReduceCPUUsage=0
ReduceCPUInBackground=0
3ButtonMouse=0
1ButtonMouse=0
UseDirectSound=1
PriorityBoost=1
B3DXUsesOpenGL=1
CaseSensitiveFileMode=0
EnableAltF4Quit=0
VisibleDrives=H:
Home=H:

3. Thanks for the quick reply. Here's our ini file:

[Global]
DeferUpdate=1
ShowConsole=0
DynamicConsole=0
ReduceCPUUsage=0
ReduceCPUInBackground=0
3ButtonMouse=0
1ButtonMouse=0
UseDirectSound=1
PriorityBoost=1
B3DXUsesOpenGL=1
CaseSensitiveFileMode=0
EnableAltF4Quit=0
Home=U:\*
VisibleDrives=U:,S:

Pupils' home folder is U:, shared resources in S:. C and D drives are 'hidden - restricted' by GPO and they can't be seen in other apps or by going to My Computer; they are locked down, but like I say we don't want browsing. It's the local C drive they can see, not the server C drive, so that is something I suppose.

4. We don't have Scratch installed on a shared drive. I made an MSI, and we deploy it to machines via group policy. Sorry I can't help further.

5. Thanks anyway :-)

6. Ours looks like the following:

Code:
[Global]
DeferUpdate=1
ShowConsole=0
DynamicConsole=0
ReduceCPUUsage=0
ReduceCPUInBackground=0
3ButtonMouse=0
1ButtonMouse=0
UseDirectSound=1
PriorityBoost=1
B3DXUsesOpenGL=1
CaseSensitiveFileMode=0
EnableAltF4Quit=0
VisibleDrives=N:,P:
Home=P:\Scratch\*

I also use an MSI to install it locally on each machine, with this .ini being distributed to the right place on the machine using a startup script. Works fine for us. Mike.

7. Sorry to refresh such an old topic, but I've got a similar problem. Scratch is installed from the MSI, and the home drive is N. Each time pupils or teachers try to open a Scratch file from Documents (double click), they get a window asking for the image file. If I place the image file (copied from the installation folder) in the same folder as the Scratch file, it works fine. For some strange reason Scratch does not use the file from its own folder. Any ideas? Below is our ini file:

Code:
[Global]
DeferUpdate=1
ShowConsole=0
DynamicConsole=0
ReduceCPUUsage=0
ReduceCPUInBackground=0
3ButtonMouse=0
1ButtonMouse=0
UseDirectSound=1
PriorityBoost=1
B3DXUsesOpenGL=1
CaseSensitiveFileMode=0
EnableAltF4Quit=0
Home=N:

8. Don't use the MSI. I had the exact same problem, and after some research the general consensus was don't use the MSI as it's a pile of turd; use the exe installer instead. There is now an online version, which I believe is now the way forward for Scratch, as the offline installer has not been updated since 1.4 and the online version is now 2.0 something or other.

9. Originally Posted by awan247
What drivers???
1.4 is still available from here - http://scratch.mit.edu/scratch_1.4/

10. Originally Posted by nwblue
Also when opening Scratch on the user machine it always asks for the image file even though it is in the install dir (standard install apart from location) on the shared drive. TIA

On the off chance that anyone comes across this in the future or is still suffering from the image file thing (because I also didn't find anything): you can't 'just' point Windows to Scratch.exe. The open path from the (Default) value in HKCR\Scratch Project\shell\open\command needs to be my:\path\to\Scratch.exe "my:\path\to\Scratch.image" "%1". I'm just pushing that out to everything using Group Policy Preferences (a sketch of this as an importable .reg file is below).

11. I still find it unbelievable that an educational establishment like that would actively encourage the use of Adobe Air on a network. For something as simple as Scratch they *love* to overcomplicate.
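As an illustration of the registry change described in post 10 (the drive letter and folder below are made-up examples; point them at wherever Scratch.exe and Scratch.image really live on your network), a .reg file along these lines could be imported with regedit or distributed however you already push registry changes:

Code:
Windows Registry Editor Version 5.00

; Hypothetical install location - replace N:\Apps\Scratch with your real shared path.
; Sets the (Default) open command so double-clicked Scratch projects launch with the image file.
[HKEY_CLASSES_ROOT\Scratch Project\shell\open\command]
@="N:\\Apps\\Scratch\\Scratch.exe \"N:\\Apps\\Scratch\\Scratch.image\" \"%1\""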
2014-12-26 22:44:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5613545179367065, "perplexity": 3635.0107508935034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447549808.107/warc/CC-MAIN-20141224185909-00007-ip-10-231-17-201.ec2.internal.warc.gz"}
https://search.r-project.org/CRAN/refmans/condir/html/csReport.html
csReport {condir}    R Documentation

Report results of conditioning data

Description

Report results of data analyses run with the csCompare function.

Usage

csReport(
  csCompareObj = NULL,
  csSensitivityObj = NULL,
  save = FALSE,
  fileName = "report",
  alphaLevel = 0.05,
  interpretation = FALSE
)

Arguments

csCompareObj: a list or data frame returned from the csCompare function. The object should be of class csCompare.

csSensitivityObj: Sensitivity analysis results returned from the csSensitivity function. The object should be of class csSensitivity.

save: If this argument is set to FALSE (default), the results are printed on the screen. Otherwise, a '.txt' file with the report is generated.

fileName: The file name of the produced report. The argument is ignored if save is set to FALSE.

alphaLevel: The alpha level to be used for determining significant or non-significant results.

interpretation: Should an interpretation of the results be included? (Default: FALSE.) In case of the Bayesian results, the results are interpreted according to Lee and Wagenmakers (2013).

Examples

set.seed(1000)
tmp <- csCompare(cs1 = rnorm(n = 100, mean = 10),
                 cs2 = rnorm(n = 100, mean = 9))
csReport(tmp)

[Package condir version 0.1.3 Index]
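A small usage note beyond the official example above (my own sketch, based only on the argument descriptions): the same csCompare object can be written out as a text report, with the interpretation of the Bayesian results included, by switching on save and interpretation.

## assuming 'tmp' from the Examples section above
csReport(csCompareObj = tmp, save = TRUE, fileName = "cs_report",
         interpretation = TRUE)
## per the 'save' and 'fileName' descriptions, this should write a .txt
## report file instead of printing the results to the screen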
2022-11-28 22:44:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20863589644432068, "perplexity": 5313.488656531223}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710662.60/warc/CC-MAIN-20221128203656-20221128233656-00768.warc.gz"}
https://cstheory.stackexchange.com/questions/31223/theoretical-background-of-classes-and-objects
# Theoretical background of Classes and Objects

I would like to learn about the possible ways of formalizing Classes and Objects (in programming languages like Java) using formal languages. Where should I start? This might be related to my previous question I asked here.

• Why are you committing to using formal languages? There is an entire subfield of computer science called semantics devoted to studying the mathematics underlying programming language concepts. – Vijay D Apr 22 '15 at 4:59
• This question might be more appropriate for cs.stackexchange.com. Anyway, there are various calculi, of which Abadi and Cardelli's object calculus, a variant of the $\lambda$-calculus, might be the most well-known. It models classes as a special kind of object. Calculi such as Featherweight Java and Middleweight Java model classes directly. – Martin Berger Apr 22 '15 at 14:18
• @MartinBerger, is there any way that I can move this seamlessly to cs.stackexchange.com, or do I need to re-post the question there and provide a link to this one!? – qartal Apr 24 '15 at 6:59
• @qartal Yes, see e.g. here. – Martin Berger Apr 24 '15 at 7:03
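To get a rough feel for what the object calculus mentioned above formalizes, here is a tiny illustrative sketch (my own, not from the thread, and only loosely inspired by Abadi and Cardelli): an object is a record of methods, each of which receives the whole object as its "self" argument, and method invocation and method update are the primitive operations; classes can then be modeled as objects whose job is to build other objects.

```python
# Toy encoding: an object is a dict of methods; every method takes the whole
# object ("self") as its argument. Invocation and functional update are the
# two primitives, loosely mirroring an object calculus.

def invoke(obj, label):
    return obj[label](obj)          # o.l : run method l with o bound to self

def update(obj, label, method):
    new = dict(obj)                 # o.l := m : copy the object, swap one method
    new[label] = method
    return new

def const(v):
    return lambda _self: v          # a method that ignores self and returns v

# A one-field "counter": 'get' returns the value, 'inc' builds a new object
# whose 'get' yields one more than the current value.
counter = {
    "get": const(0),
    "inc": lambda self: update(self, "get", const(invoke(self, "get") + 1)),
}

c1 = invoke(counter, "inc")
c2 = invoke(c1, "inc")
print(invoke(counter, "get"), invoke(c1, "get"), invoke(c2, "get"))  # 0 1 2
```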
2020-08-11 07:26:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3704449236392975, "perplexity": 1051.809887619385}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738735.44/warc/CC-MAIN-20200811055449-20200811085449-00557.warc.gz"}
http://mathematica.stackexchange.com/questions/10220/obtaining-joint-distributions-and-conditional-distributions-using-mathematica?answertab=oldest
# Obtaining joint distributions and conditional distributions using Mathematica

I have two multivariate Gaussian distributions $p(x)$ and $p(z)$ with mean vectors $m_x$ and $m_z$, and covariance matrices $\Sigma_x$ and $\Sigma_z$. My model is a simple linear model $x = W z+n$ where $n$ is a noise vector with mean $0$ and diagonal covariance matrix of the form $\sigma^2 I$, where $I$ is the identity matrix. I observe the variable $x$. Now how can I calculate the joint distribution $p(x,z)$ and the conditional distributions $p(x|z)$ and $p(z|x)$?

- It is unclear in this question what is known and what is not. For instance, does $W$ need to be estimated from the data or is it known? What about $\sigma$? What about the other parameters $m_x$ etc.? Do you observe the corresponding value of $z$? When you write you "observe" $x$, does that mean you make a single observation of a multivariate normal random variable, or do you have a dataset of multiple independent identically distributed observations? –  whuber Sep 4 '12 at 18:46

To get the joint density of two distributions, you need to use ProductDistribution. For example, consider the two distributions $p(x)$ and $p(y)$:

px = NormalDistribution[2, 2];
py = NormalDistribution[-2, 3];
(* visualize *)
{Plot[PDF[px, x], {x, -3, 7}, PlotRange -> All],
 Plot[PDF[py, y], {y, -10, 8}, PlotRange -> All]} // GraphicsRow

Now obtain the joint distribution $p(x,y)$ and visualize:

pxy = ProductDistribution[px, py];
Quiet@Plot3D[PDF[pxy, {x, y}], {x, -3, 7}, {y, -10, 8}, PlotRange -> All, AxesLabel -> {"x", "y"}]

The conditional distribution $p(x|y)$ is then simply the ratio of the joint distribution to the marginal of $y$:

pcxy = PDF[pxy, {x, y}]/PDF[MarginalDistribution[pxy, 2], y]
Quiet@Plot3D[pcxy, {x, -3, 12}, {y, -10, 8}, PlotRange -> All, AxesLabel -> {"x", "y"}]

- @SjoerdC.deVries Which of the above are you referring to? I've only used the definition of conditional probability and not assumed independence explicitly. My probability theory is a bit rusty and I can't say I haven't overlooked something, but it seems correct to me. It easily extends to multivariate cases with covariance matrices. I didn't use them because it's harder to visualize. –  rm -rf Sep 4 '12 at 18:31
- I retracted the comment as the question isn't totally clear on the relationship between x and y. The OP mentions a model that relates the two, which implies (I think) that x and y are related. In that case a CopulaDistribution comes to mind (but I ain't no expert here either). –  Sjoerd C. de Vries Sep 4 '12 at 18:43
- @SjoerdC.deVries, speaking of copulas... ("The formula that killed Wall Street" 2009, about Li's copula model); also amusing. –  alancalvitti Sep 4 '12 at 19:36
- @alancalvitti Very readable article, I like it. –  Sjoerd C. de Vries Sep 4 '12 at 20:44

As I use this a lot in my own research, let me answer your question by generalizing it to possibly larger dimensions and with a possibly correlated joint probability.
Let me define ConditionalMultinormalDistribution::usage ="ConditionalMultinormalDistribution[pdf,val,idx] returns the conditional MultiNormal PDF from the joint PDF pdf while setting the variables of index idx to values vals" so that for example: m = Table[i, {i, 3}]; S = Table[i + j, {i, 3}, {j, 3}]/20 + DiagonalMatrix[Table[1, {3}]]; pdf = MultinormalDistribution[m, S]; cpdf = ConditionalMultinormalDistribution[pdf, {1, 5}, {1, 3}] (* NormalDistribution[327/139, 317/278] *) or slightly less trivially, m = Table[i, {i, 5}]; S = Table[i + j, {i, 5}, {j, 5}]/20 + DiagonalMatrix[Table[1, {5}]]; pdf = MultinormalDistribution[m, S]; cpdf = ConditionalMultinormalDistribution[pdf, {1, 1, 1}, {1, 3, 5}] (* MultinormalDistribution[{1, 63/23}, {{35/32, 5/32}, {5/32, 885/736}}] *) ContourPlot[PDF[cpdf, {x, y}], {x, -2, 4}, {y, 0, 6}, PlotRange -> All, PlotPoints -> 50, Contours -> 15] The actual code: ConditionalMultinormalDistribution[pdf_, val_, idx_] := Module[ {S = pdf[[2]], m = pdf[[1]], odx, Σa, Σb, Σc, μ2, S2, idx2, val2}, odx = Flatten[{Complement[Range[Length[S]], Flatten[{idx}]]}]; Σa = (S[[odx]] // Transpose)[[odx]]; idx2 = Flatten[{idx}]; val2 = Flatten[{val}]; Σc = (S[[odx]] // Transpose)[[idx2]] // Transpose; Σb = (S[[idx2]] // Transpose)[[idx2]]; μ2 = m[[odx]] + Σc.Inverse[Σb].(val2 - m[[idx]]); S2 = Σa - Σc.Inverse[Σb]. Transpose[ Σc]; S2 = 1/2 (S2 + Transpose[S2]); If[ Length[μ2] == 1, NormalDistribution[μ2 // First, Sqrt[S2 // First // First]], MultinormalDistribution[μ2, S2] ] Since Σb only enters as its inverse, you can do some precalculation with LinearSolve[], e.g. Σbsol = LinearSolve[(S[[idx2]] // Transpose)[[idx2]]], and then compute, say, S2, as S2 = ((# + Transpose[#])/2) &[Σa - Σc.Σbsol[Transpose[Σc]]]. –  J. M. Sep 4 '12 at 22:29
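As a further illustration (not part of the original answer): taking the question's linear-Gaussian model $x = Wz+n$ at face value, $(x,z)$ is jointly Gaussian with $\operatorname{Cov}(x)=W\Sigma_z W^\top+\sigma^2 I$ and $\operatorname{Cov}(x,z)=W\Sigma_z$, so the joint can be written down directly and then conditioned with the function above. The numbers below are made-up placeholders, not values from the question:

W = {{1, 0}, {2, 1}, {0, 1}};   (* made-up 3x2 W: x is 3-dimensional, z is 2-dimensional *)
mz = {1, -1}; Sz = {{2, 1/2}, {1/2, 1}};
sigma = 1/2;
mx = W.mz;
Sxx = W.Sz.Transpose[W] + sigma^2 IdentityMatrix[3];
Sxz = W.Sz;
(* joint p(x, z) as one 5-dimensional multinormal *)
pxz = MultinormalDistribution[Join[mx, mz],
   ArrayFlatten[{{Sxx, Sxz}, {Transpose[Sxz], Sz}}]];
(* p(z | x = xobs): condition on the first three coordinates *)
xobs = {1, 2, 3};
pzGivenX = ConditionalMultinormalDistribution[pxz, xobs, {1, 2, 3}]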
2014-04-20 03:18:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17559698224067688, "perplexity": 2283.0454511321386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/421151/resources-where-i-can-find-open-problems-in-number-theory-along-with-their-level
# Resources where I can find open problems in number theory along with their level of difficulty NOTE: I will not accept an answer because a lot of answers are really good and if anyone want to post under this question later then they are most welcome to post as comment or answer because it will certainly help many people. Thanks to all that contributed. I have completed my master's in mathematics a couple of years ago and due to very strong personal and professional reasons I couldn't get admitted to grad school despite having a good academic background. I have a really good background in number theory. I don't have guidance of any professor right now and I want to try working on an open problem. Can you please let me know of resources( websites/ blogs/books) where I can find open problems in number theory to work on along with their estimated level of difficulty ( if possible)? Thanks! • For my various open conjectures in number theory, you may visit my homepage maths.nju.edu.cn/~zwsun . Apr 26 at 15:49 • might it be possible that open conjectures exist which are not "difficult" to prove? a more productive strategy I would imagine is to identify a research direction where the open conjectures have not yet been formulated, so you can hope to find a conjecture that is both interesting and not too difficult; it is likely that you will need the guidance of an experienced researcher to find such a promising research direction, it's not something you will get from the web. Apr 26 at 16:21 • If a problem is open, then how is anyone to know its level of difficulty? Apr 26 at 23:45 • Any problem that is clearly formulated, clearly publicised, and still open, is almost inevitably going to be difficult. (Proof: Any problem that’s clear, publicised, and not difficult will swiftly get solved.) Apr 27 at 8:26 • You could search this very website (and also math.stackexchange) for unanswered questions tagged number-theory. You can also have a look at the problem sets from the annual West Coast Number Theory meetings, posted at westcoastnumbertheory.org/problem-sets Apr 27 at 13:46 There is the famous Unsolved Problems in Number Theory by Richard Guy, 3rd Edition, 2004. PDF available at the Springer link. • Springer should perhaps try to convince several number theorists to update the book to a 4th edition. Apr 27 at 16:27 • @TimothyChow For example, the discussion around the Goldbach conjecture has been affected by Helfgott's work on ternary Goldbach. Looking through a couple chapters, I would concur that most likely a majority have not seen progress. Apr 27 at 17:42 • Noga Alon tells an interesting story about Guy's book (see page 7). The first time he picked up the book, he opened it randomly and read a problem that he had never seen before. He solved it fairly rapidly. Encouraged by this success, he combed through the book, but was unable to make any progress on any other problem. Apr 27 at 20:39 • @Timothy, problem E19 asks, "Are the integer parts of the powers of a fraction infinitely often prime?" Guy notes that Forman and Shapiro proved that $[(3/2)^n]$ and $[(4/3)^n]$ are composite infinitely often. Since then, Dubickas and others have found more examples on the composite side of the question. See my answer at mathoverflow.net/questions/153426/… Apr 28 at 1:02 • Problem A8 asks about gaps between primes, in particular, about twin primes. 
In 2013, Yitang Zhang produced a tremendous improvement on previous results, showing there were infinitely many pairs of primes separated by at most $70,000,000$. That number has been reduced to $246$. The 1st edition of Guy's book asked, "are there six points in the plane, no three on a line, no four on a circle, all of whose mutual distances are rational?" Such a set having been found by Leech, the 3rd edition asked "are there any sets of more than six such points?" (continued) Apr 28 at 1:22

To some extent, research questions are really research themes, and these can definitely be found in journals. Just open a journal (e.g. Algebra and Number Theory) and read an article or review. Even an article picked at random can be interesting to read to the end, and you're left with many ideas for new developments, and often direct open questions stated at the end of the paper. Attacking the "open problems" (as they are called) in number theory (e.g. Riemann etc.) straight away may not be fruitful, as they can be very difficult if attacked directly from first principles, though this is tenacious and can teach you a lot about the problem. To some extent almost no one is just attacking them. Even Wiles was working on something else when he realized he could therein work to prove Fermat's Last Theorem as a consequence of what he'd seen. He was working on the arithmetic of elliptic curves with complex multiplication by the methods of Iwasawa theory, which had surrounding workers publishing about it (from what I understand). But there are plenty of research themes in number theory (and otherwise) which are currently being worked on (e.g. probabilistic number theory, etc.), and I suppose one would look at research articles in journals of number theory to see what those are. What are other students in number theory working on? What did they publish? Even take a recent PhD thesis of interest and ask "what can be done to develop these results", or "why are these results not interesting"? Knowing all the research themes is quite difficult, but one learns about them at conferences etc., sometimes presented by experts, but more often presented by PhD students just learning the ropes of the area; each is useful to listen to. Each is a window into a research area one could pick up and get into, like an investment or stock, which hopefully pays off in the long run.

Go to a good math library, pick up a few books (perhaps even randomly) in number theory and start reading them until you find one book which you really like. I am (almost) sure that you will ask yourself plenty of questions when studying deeply some interesting mathematics. At the beginning, many of these questions are already solved (and perhaps not hard), but questions help you advance. At some point, some of your questions will have no known answer. With some luck, one of these questions is interesting and solvable. (Books can of course be replaced by attending lectures, discussions with colleagues, etc.) The point is that one often needs some external input, and good books can provide this. A last piece of advice: do not choose the most crowded corners (e.g. around the Riemann hypothesis), at least at the beginning, if you want to make some contributions.

Do check out the Handbook of Number Theory (Volumes I and II), if you need to refer to a compendium of the latest results on number theory. (I forget which Volume has open problems.)
2022-08-19 02:17:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5402313470840454, "perplexity": 489.19259489143855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573540.20/warc/CC-MAIN-20220819005802-20220819035802-00024.warc.gz"}
https://math.stackexchange.com/questions/2984763/lipschitz-constant-of-difference-of-convex-functions
# Lipschitz constant of difference of convex functions Let $$\mathcal{H}$$ be a real Hilbert space and $$f:= g-h$$ where $$g, h \colon \mathcal{H} \to \mathbb{R}$$ are continuously differentiable convex functions with $$\lambda$$- and $$\mu$$-Lipschitz continuous gradients, respectively. It's not hard to show that $$f$$ has a $$\left( \lambda + \mu \right)$$-Lipschitz continuous gradient. I do not think that this is optimal, but I'm not sure how to derive a better constant either. My guess is that it could be $$\max \left\lbrace \lambda , \mu \right\rbrace$$. • It is optimal. Take $h=-g$. – Severin Schraven Nov 4 '18 at 20:29 • @SeverinSchraven thanks for pointing this out. In fact, I forgot the very important property that $g$ and $h$ are convex (as written in the title); that is, we cannot take $h = -g$ – mortal Nov 4 '18 at 20:31 • $g(x) = \lambda x$, $h(x) = -\mu x$? – LinAlg Nov 4 '18 at 22:52 I think you can obtain it from the Hessian matrix (assuming the functions are twice differentiable). I change your notation a bit to include strong convexity as well. The following holds: $$\mu_f I \preceq \nabla^2 f(x) \preceq \lambda_f I \quad \text{and} \quad \mu_g I \preceq \nabla^2 g(x) \preceq \lambda_g I,$$ where $$\lambda$$ denotes the Lipschitz gradient constant and $$\mu$$ denotes the strong convexity constant. Thus, the Hessian of $$h=f-g$$ satisfies $$(\mu_f - \lambda_g)I \preceq \nabla^2 h(x) \preceq (\lambda_f - \mu_g) I \preceq \lambda_f I \preceq (\lambda_f+\lambda_g) I.$$ So your bound is too conservative, as you said. • sorry, I still could not understand your answer. So taking $h:= f-g$ in your proof means $f:=g-h$ in my case? – mortal Nov 12 '18 at 12:05
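A quick numerical sanity check of the Hessian argument (not part of the original thread; the quadratic test functions and the constants 3 and 2 below are illustrative assumptions): for $g(x) = \tfrac{\lambda}{2}x^2$ and $h(x) = \tfrac{\mu}{2}x^2$, the difference $f = g - h$ has $f''(x) = \lambda - \mu$, so the sampled Lipschitz ratio of $f'$ should sit near $|\lambda - \mu|$, well below $\lambda + \mu$.

```r
# Sketch only: sample the Lipschitz ratio of f' for f = g - h, where
# g(x) = (lambda/2) x^2 and h(x) = (mu/2) x^2 (illustrative choices).
lambda <- 3
mu     <- 2
grad_f <- function(x) lambda * x - mu * x   # f'(x) = g'(x) - h'(x)

set.seed(1)
x <- runif(1000, -10, 10)
y <- runif(1000, -10, 10)
max(abs(grad_f(x) - grad_f(y)) / abs(x - y))
# close to |lambda - mu| = 1, far below lambda + mu = 5,
# consistent with the Hessian bound in the answer above
```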
2019-06-24 13:19:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9852763414382935, "perplexity": 155.16253705422147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999539.60/warc/CC-MAIN-20190624130856-20190624152856-00146.warc.gz"}
https://im.openupresources.org/8/students/7/12.html
# Lesson 12: Applications of Arithmetic with Powers of 10 Let's use powers of 10 to help us make calculations with large and small numbers.
## 12.1: What Information Do You Need? What information would you need to answer these questions? 1. How many meter sticks does it take to equal the mass of the Moon? 2. If all of these meter sticks were lined up end to end, would they reach the Moon?
## 12.2: Meter Sticks to the Moon 1. How many meter sticks does it take to equal the mass of the Moon? Explain or show your reasoning. 2. Label the number line and plot your answer for the number of meter sticks. 3. If you took all the meter sticks from the last question and lined them up end to end, will they reach the Moon? Will they reach beyond the Moon? If yes, how many times farther will they reach? Explain your reasoning. 4. One light year is approximately $10^{16}$ meters. How many light years away would the meter sticks reach? Label the number line and plot your answer.
## 12.3: That’s a Tall Stack of Cash In 2016, the Burj Khalifa was the tallest building in the world. It was very expensive to build. Consider the question: Which is taller, the Burj Khalifa or a stack of the money it cost to build the Burj Khalifa? 1. What information would you need to be able to solve the problem? 2. Record the information your teacher shares with the class. 3. Answer the question “Which is taller, the Burj Khalifa or a stack of the money it cost to build the Burj Khalifa?” and explain or show your reasoning. 4. Decide what power of 10 to use to label the rightmost tick mark of the number line, and plot the height of the stack of money and the height of the Burj Khalifa. 5. Which has more mass, the Burj Khalifa or the mass of the pennies it cost to build the Burj Khalifa? What information do you need to answer this? 6. Decide what power of 10 to use to label the rightmost tick mark of the number line, and plot the mass of the Burj Khalifa and the mass of the pennies it cost to build the Burj Khalifa.
## Summary Powers of 10 can be helpful for making calculations with large or small numbers. For example, in 2014, the United States had 318,586,495 people who used the equivalent of 2,203,799,778,107 kilograms of oil in energy. The amount of energy per person is the total energy divided by the total number of people. We can use powers of 10 to estimate the total energy as $$2 \cdot 10^{12}$$ and the population as $$3 \cdot 10^8$$. So the amount of energy per person in the U.S. is roughly $$2 \cdot 10^{12} \div 3 \cdot 10^8$$. That is the equivalent of $$\frac{2}{3} \cdot 10^4$$ kilograms of oil in energy. That’s a lot of energy—the equivalent of almost 7,000 kilograms of oil per person! In general, when we want to perform arithmetic with very large or small quantities, estimating with powers of 10 and using exponent rules can help simplify the process. If we wanted to find the exact quotient of 2,203,799,778,107 by 318,586,495, then using powers of 10 would not simplify the calculation.
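A one-line check of the estimate in the summary (not part of the lesson; it simply redoes the arithmetic above with the rounded and with the exact figures):

```r
2e12 / 3e8                  # rounded estimate: about 6667 kg of oil per person, i.e. (2/3) * 10^4
2203799778107 / 318586495   # exact figures: about 6917 kg per person ("almost 7,000")
```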
2019-02-16 11:58:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6298640966415405, "perplexity": 840.595328339298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480272.15/warc/CC-MAIN-20190216105514-20190216131514-00083.warc.gz"}
https://math.paperswithcode.com/paper/local-well-posedness-for-the-derivative
Local Well-Posedness for the Derivative Nonlinear Schrödinger Equations with $L^2$ Subcritical Data 10 Aug 2016 Guo Shaoming, Ren Xianfeng, Wang Baoxiang We will show its local well-posedness in modulation spaces $M^{1/2}_{2,q}(\mathbb{R})$ $(2\leq q<\infty)$. It is well known that $H^{1/2}$ is a critical Sobolev space of DNLS, so that it is locally well-posed in $H^s$ for $s\geq 1/2$ and ill-posed in $H^{s'}$ with $s'<1/2.$ Noticing that $M^{1/2}_{2,q} \subset B^{1/q}_{2,q}$ is a sharp embedding and $L^2 \subset B^0_{2,\infty}$, our result covers all of the subcritical data in $M^{1/2}_{2,q}$, which contains a class of functions in $L^2\setminus H^{1/2}$...
2021-05-09 07:57:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6604442000389099, "perplexity": 530.9058413744575}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988961.17/warc/CC-MAIN-20210509062621-20210509092621-00248.warc.gz"}
http://mathhelpforum.com/differential-geometry/188706-topology-x-print.html
# Topology on X • September 24th 2011, 10:14 AM dwsmith Topology on X Consider the set $X=\{a,b,c\}$. The collection of subsets is $\O,X,\{a,b\},\{b,c\},\{b\}$. The book says the one-point set $\{b\}$ is not closed because its complement is not open. What is the complement of $\{b\}\mbox{?}$ • September 24th 2011, 10:25 AM Plato Re: Topology on X Quote: Originally Posted by dwsmith Consider the set $X=\{a,b,c\}$. The collection of subsets is $\O,X,\{a,b\},\{b,c\},\{b\}$. The book says the one-point set $\{b\}$ is not closed because its complement is not open. What is the complement of $\{b\}\mbox{?}$ $\color{blue}X\setminus\{b\}=\{a,c\}$ • September 24th 2011, 11:19 AM dwsmith Re: Topology on X Quote: Originally Posted by Plato $\color{blue}X\setminus\{b\}=\{a,c\}$ But that isn't in the topology. • September 24th 2011, 11:37 AM Plato Re: Topology on X Quote: Originally Posted by dwsmith But that isn't in the topology. That is exactly the point. The complement of $\{b\}$ is not an open set; it is not in the topology. Because the complement of $\{b\}$ is not open, $\{b\}$ is not closed.
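A small script (not part of the original thread) that spells out the same check: list the open sets of this topology and test, for each singleton, whether its complement is open.

```r
X        <- c("a", "b", "c")
topology <- list(character(0), X, c("a", "b"), c("b", "c"), "b")  # the given open sets

is_open <- function(S) any(sapply(topology, function(U) setequal(U, S)))

for (p in X) {
  comp   <- setdiff(X, p)
  closed <- is_open(comp)   # a set is closed iff its complement is open
  cat("{", p, "}: complement {", paste(comp, collapse = ","), "} open? ",
      closed, " -> ", ifelse(closed, "closed", "not closed"), "\n", sep = "")
}
# Only {b} fails: its complement {a,c} is not among the open sets, so {b} is not closed.
```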
2016-02-11 22:41:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7645920515060425, "perplexity": 824.912355212599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701162808.51/warc/CC-MAIN-20160205193922-00327-ip-10-236-182-209.ec2.internal.warc.gz"}
https://etnaetnaetna.be/things+we+get+from+calcium+in+metals+producers+6673.html
# things we get from calcium in metals producers

#### Complete List of Essential Trace Minerals: Food Sources … 8/6/2018
This trace mineral is also very important for retaining calcium in bones and for converting blood sugar into glycogen, which is the form in which we store and use it. Biggest Health Impact: Mild potassium deficiencies are common and are associated with elevated blood pressure, as well as unbalanced kidney and heart function.

#### Reactions of acids with metals - Metals - KS3 Chemistry … 26/7/2020
Acids react with most metals and, when they do, a salt is produced. But unlike the reaction between acids and bases, we do not get water. Instead we get hydrogen: metal + acid → salt + hydrogen. For example, magnesium reacts with hydrochloric acid to produce magnesium chloride: magnesium + hydrochloric …

#### Electrical Conductivity of Metals - ThoughtCo 2/3/2020
Metal        Resistivity (Ω·m)   Conductivity (S/m)
Calcium      3.36×10⁻⁸           2.82×10⁷
Beryllium    4.00×10⁻⁸           2.500×10⁷
Rhodium      4.49×10⁻⁸           2.23×10⁷
Magnesium    4.66×10⁻⁸           2.15×10⁷
Molybdenum   5.225×10⁻⁸          1.914×10⁷
Iridium      5.289×10⁻⁸          1.891×10⁷
Tungsten     5.49×10⁻⁸           1.82×10⁷
Zinc         5.945×10⁻⁸          1.682×10⁷
Cobalt       (values truncated in source)

#### Calcium Element | History, Uses, Facts, Physical Chemical …
In many deoxidizing, reducing, and degasifying applications, however, calcium is preferred because of its lower volatility and is used to prepare chromium, thorium, uranium, zirconium, and other metals …

#### Chem4Kids: Calcium: General Info and Everyday Items
Clams, oysters, and other animals in the oceans have shells. When you do a little looking, you will find out that their shells are made with calcium. It's not pure calcium, but it adds strength the way it does in your bones and teeth.

#### 5 Calcium-Rich Foods (Many Are Non-Dairy) 27/7/2018
However, many non-dairy sources are also high in this mineral. These include seafood, leafy greens, legumes, dried fruit, tofu and various foods that are fortified with calcium. Here are 15 foods …

#### Calcium carbonate - Essential Chemical Industry
Limestone and chalk are both forms of calcium carbonate, and dolomite is a mixture of calcium and magnesium carbonates. All have impurities such as clay, but some rocks are over 97% pure. Limestone and other products derived from it are used extensively in the construction industry and to neutralise acidic compounds in a variety of contexts.

#### Facts About Calcium | Live Science 26/10/2016
Limestone, or calcium carbonate, is used directly as construction material and indirectly for cement. When limestone is heated it releases carbon dioxide, leaving behind quicklime (calcium oxide).

#### Calcium - Element information, properties and uses
Calcium is a silvery-white, soft metal that tarnishes rapidly in air and reacts with water. Uses: Calcium metal is used as a reducing agent in preparing other metals such as thorium and uranium. It is also used as an alloying agent for aluminium, beryllium, …

#### Minerals (for Kids) - Nemours KidsHealth
The macromineral group is made up of calcium, phosphorus, magnesium, sodium, potassium, chloride, and sulfur. A trace of something means that there is only a little of it. So even though your body needs trace minerals, it needs just a tiny bit of each one.

#### Calcium Content of Foods | Patient Education | UCSF Health
Food                           Serving          Calcium (mg)
Cereals (calcium fortified)    0.5 to 1 cup     250 to 1000
Amaranth, cooked               0.5 cup          135
Bread, calcium fortified       1 slice          150 to 200
Brown rice, long grain, raw    1 cup            50
Oatmeal, instant               1 package        100 to 150
Tortillas, corn                2                85

#### Keeping heavy metals out of beer and wine 20/2/2019
Researchers report that a material often used as a filter in the production of alcoholic beverages could be transferring heavy metals such as arsenic to beer and wine.

#### Facts About Calcium | Live Science 17/8/2020
When metals react with water, metal hydroxides and hydrogen gas are formed. This can be represented in different ways, as shown by the word equation: calcium + water → calcium hydroxide + hydrogen.

#### Rare Earth Metal Production: … Countries | INN 23/3/2021
The US is a major importer of rare earth materials, with demand for compounds and metals worth US$110 million in 2020; that's down from US$160 million in 2019.

#### It's Elemental - The Element Calcium
About 4.2% of the earth's crust is composed of calcium. Due to its high reactivity with common materials, there is very little demand for metallic calcium. It is used in some chemical processes to refine thorium, uranium and zirconium. Calcium is also used to remove oxygen, sulfur and carbon from certain alloys.

#### Precious metals and other important minerals for health 15/2/2021
Instead, many essential metals are needed to activate enzymes — molecules with important jobs in the body. And metals have many other essential roles as well. For example: Calcium builds bones and teeth; activates enzymes throughout the body; helps …

#### Cobalt’s 3 Month Price Hike a Sign of Things to Come? 8/4/2021
There are also a number of factors that amount to a considerable level of vulnerability on the supply side. The metal is predominantly the co-product of copper production in the Democratic Republic of Congo.

#### What Are the Properties of the Alkaline Earth Metals? 7/9/2019
Relatively low melting points and boiling points, as far as metals are concerned. Typically malleable and ductile. Relatively soft and strong. The elements readily form divalent ions (such as Mg2+ and Ca2+). The alkaline earth metals are very reactive, although …

#### Metals Safety Information - Ganoksin Jewelry Making …
Exposure to multiple metals can result in interactions between them, which causes greater damage than exposure to a single metal alone. An example is the interaction of cadmium and zinc, or the ability of lead to displace calcium in the body and thus affect the nervous system (Waldron 13).

#### Cheney Lime & Cement Company
When sugar is added, an intermediate product is formed, calcium sucrate (calcium hydroxide saccharate), which is significantly more soluble than calcium hydroxide. For example, the addition of 35 grams of sugar will increase the solubility of the calcium hydroxide from 0.159 to 13.332 grams per 100 grams of saturated solution at 25 °C, which is a solubility factor increase of 84.

#### Calcium - Wikipedia
Face-centred cubic (fcc). Calcium is a chemical element with the symbol Ca and atomic number 20. As an alkaline earth metal, calcium is a reactive metal that forms a dark oxide-nitride layer when exposed to air. Its physical and chemical properties are most similar to its heavier homologues strontium and barium.
2021-11-29 17:11:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46741563081741333, "perplexity": 8763.250449878038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00538.warc.gz"}
http://st551.cwick.co.nz/homework-3/
# Homework 3 Due 2017/10/19

## 1. p-values Download the ASA's Statement on p-values (see the Wasserstein and Lazar 2016 reference below). Skip the “Context, Process and Purpose” section and read the section “ASA Statement on Statistical Significance and P-values” starting on page 3. 1. We usually think about a small p-value providing evidence against the null hypothesis. What else does the article imply a small p-value may cast doubt on? 2. What is the primary argument for not basing scientific conclusions or policy decisions solely on whether the p-value is below some threshold? 3. What is p-hacking? 4. Can a p-value measure the size of an effect? What can measure the size of an effect? 5. Skim through the references in the “A brief p-Values and Statistical Significance Reference List”, and shortlist three article titles that interest you. (You may be required to read one of these in a future homework.)

## 2. Data analysis The Behavioral Risk Factor Surveillance System (BRFSS) is a nationwide health-related survey of U.S. residents. For this question you can get a sample of responses from the 2003 survey by downloading an R data file from the class website:

library(tidyverse)
download.file("<class-website URL for brfss.rds>", "brfss.rds", mode = "wb")  # note: the download URL was lost in extraction

Then load it into the variable brfss with:

brfss <- read_rds("brfss.rds")
brfss

The variables weight_kg and wtdesire_kg correspond to the responses to the questions: • About how much do you weigh without shoes? • How much would you like to weigh? respectively converted to kilograms. You can create a variable to represent the amount of weight a respondent would like to lose with:

brfss <- mutate(brfss, desired_loss = weight_kg - wtdesire_kg)

1. Find summary statistics (mean, standard deviation and number of observations) for desired_loss for both males and females in the sample. 2. Produce histograms of desired_loss for both males and females. 3. Do US resident females, on average, want to lose weight (i.e. is the mean desired loss greater than zero)? Conduct the appropriate analyses and write a statistical summary of your findings. 4. Do US resident males, on average, want to lose weight (i.e. is the mean desired loss greater than zero)? Conduct the appropriate analyses and write a statistical summary of your findings.

## 3. Performance of the t-test Explore the Type I error rate of the t-test for a two-sided level $$\alpha = 0.05$$ test, for samples of size $$n = 5, 10, 25, 50$$, for one of the following population distributions: • Uniform(0, 1) • Chi-squared(1) • Beta(.5, .5) • Exponential(1) Use at least 10,000 simulations for each scenario. 1. Provide a table of the estimated Type I error rate by sample size. 2. Write a short (3-5 sentence) summary of how the t-test performs: is it close enough to exact that you would be comfortable using it even when the underlying distribution is as far from normal as these distributions?

## References Wasserstein, Ronald L., and Nicole A. Lazar. 2016. “The ASA’s Statement on P-Values: Context, Process, and Purpose.” The American Statistician 70 (2): 129–33. doi:10.1080/00031305.2016.1154108.
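For question 3, one possible simulation setup (a sketch, not the official solution): with Uniform(0, 1) data the true mean is 0.5, so testing H0: mu = 0.5 makes the null hypothesis true by construction, and the rejection rate estimates the Type I error. For the other distributions the null mean changes (1 for Chi-squared(1), 0.5 for Beta(.5, .5), 1 for Exponential(1)).

```r
# Sketch of the Type I error simulation for Uniform(0, 1) data (null mean 0.5).
set.seed(551)
type1_rate <- function(n, n_sim = 10000) {
  mean(replicate(n_sim, t.test(runif(n), mu = 0.5)$p.value < 0.05))
}
sapply(c(5, 10, 25, 50), type1_rate)
# Values close to 0.05 indicate the t-test is nearly exact for this population.
```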
2021-09-27 21:44:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6504586935043335, "perplexity": 3054.670232968803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058552.54/warc/CC-MAIN-20210927211955-20210928001955-00280.warc.gz"}
https://www.transtutors.com/questions/using-the-information-in-e18-7-and-assuming-that-the-implicit-rate-for-the-lessor-is-4377322.htm
# Using the information in E18-7 and assuming that the implicit rate for the lessor is 7%, prepare the journal entries for 2016 and 2017 for Pollet Products

Using the information in E18-7 and assuming that the implicit rate for the lessor is 7%, prepare the journal entries for 2016 and 2017 for Pollet Products. Pollet Products' year end is December 31. In E18-7, Mr. Kay Food Mart Incorporated, as lessee, enters into a lease agreement on July 1, 2016, to lease mobile refrigeration equipment from Pollet Products. The cost of the equipment to Pollet is $180,000. The following information is relevant to the lease agreement. • The term of the non-cancellable lease is five years with no renewal options and there is no transfer of title. Payments of $44,880 are due beginning on July 1, 2016. • The fair value of the equipment at July 1, 2016, is $196,898. The equipment has an economic life of five years with no residual value. • Mr. Kay Food Mart depreciates similar equipment it owns on the straight-line basis over the economic life of the property. • Mr. Kay Food Mart's incremental borrowing rate is 8% and the lessor's implicit rate in the lease is not known to Mr. Kay Food Mart. • There are no executory costs related to this lease. • There are no material uncertainties as to future costs and collectability is reasonably assured.
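A quick arithmetic check (not part of the textbook problem): discounting the five annual payments of $44,880 at the lessor's 7% implicit rate as an annuity due (the first payment falls at the start of the lease) reproduces the stated fair value of $196,898.

```r
payment <- 44880
rate    <- 0.07
n       <- 5

# Annuity-due present value: one payment at the start of each of the n years.
pv <- payment * sum((1 + rate)^-(0:(n - 1)))
round(pv)   # 196898, matching the fair value given in the problem
```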
2020-09-19 21:02:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3008471727371216, "perplexity": 6422.802447056594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192887.19/warc/CC-MAIN-20200919204805-20200919234805-00483.warc.gz"}
https://wikieducator.org/User:Wsiaosi/My_Sandbox
# User:Wsiaosi/My Sandbox This is bold and this is italics This is bold and this is italics This is bold and this is italics This is bold and this is italics # Level 1 heading • One • Two • This is a sub of two • Third ### Level 2 heading 1. One • This is a sub of one 2. Two 1. This is a sub of two 3. Three #### Level 3 heading 1. One • This is a sub of one 2. Two 1. This is a sub of two 3. Three 1. This is a sub of three 4. Four
2022-10-07 19:18:33
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8622215986251831, "perplexity": 13357.455994917951}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338244.64/warc/CC-MAIN-20221007175237-20221007205237-00235.warc.gz"}
https://economics.stackexchange.com/questions/9632/midas-mixed-frequency-regression-sampling-switching-the-typical-use-case
# MIDAS (mixed frequency regression/sampling) - switching the typical use case? Mixed-frequency methods typically involve using the higher-frequency data (stock ticks, etc.) to forecast the lower-frequency series (GDP, in an example exaggerated to stress the frequency disparity). This may be a very simple/obvious answer, but because I haven't been able to find the opposite goal addressed in any of the literature (a great guide to this literature is: http://www.uclouvain.be/cps/ucl/doc/ssh-ilsm/images/MIDAS_Course_Syllabus_NBB.pdf), I'm just not sure what I can get away with when it comes to combining two datasets at different frequencies before any time series work. It's possible that this is addressed in the literature listed in the above link, and I just haven't seen it. But the goal always seems to be in one direction: forecast lower-frequency data with higher-frequency. What if I want to allow for the possibility of some endogenous two-way effects, and try to forecast the higher-frequency based on the lower-frequency? "At a general level, the interest in MIDAS regressions addresses a situation often encountered in practice where the relevant information is high frequency data, whereas the variable of interest is sampled at a lower frequency. One example pertains to models of stock market volatility. The low frequency variable is for instance the quadratic variation or other volatility process over some long future horizon corresponding to the time to maturity of an option, whereas the high frequency data set is past market information potentially at the tick-by-tick level." See also from the same paper: "Take for instance the relationship between inflation and growth. Instead of aggregating the inflation series to a quarterly sampling frequency to match GDP data, one can run a MIDAS regression combining monthly and quarterly data." What about the opposite? I.e., if I have a dataset of inflation:

YearQ    Inflation
2006Q1   3
2006Q2   3.5
2006Q3   3
2006Q4   3.5
2007Q1   3.5
2007Q2   3.4
2007Q3   3.4
2007Q4   3.4

And a dataset of GDP:

Year   GDP
2006   3
2007   3.1

Can I simply combine the 2 like so? (really easy as it's just a simple join, so appealing to me for that reason when it comes to more complicated examples of the same)

YearQ    Inflation   GDP
2006Q1   3           3
2006Q2   3.5         3
2006Q3   3           3
2006Q4   3.5         3
2007Q1   3.5         3.1
2007Q2   3.4         3.1
2007Q3   3.4         3.1
2007Q4   3.4         3.1

Can I now try to model inflation using RU-MIDAS/VAR/ETS/ARIMA/GARCH/VECM/whatever? That still feels clumsy to me. It would even be helpful if someone could just verify for me, "yes, it's a) covered in the literature, and b) it's not a conceptually worthless pursuit to try to predict high-frequency with lower-frequency data." So, from the same basic intro paper I quoted before, after it introduces the basic MIDAS equation: "The annual/quarterly example would imply that the above equation is a projection of yearly $Y_t$ onto quarterly data $X^{(m)}_t$ using up to $j_{\max}$ quarterly lags." That sounds like what I want - but the formulation is to have the independent variable (ignoring time for a moment) still be the higher-frequency one. Do I simply invert the equation at this point for what I want ("solve for x," literally)? Edit: Found a very recent working paper from Norges Bank (I can't post more than 2 links; google "Using low frequency information for predicting high frequency variables") that is pretty good. The lit review on benchmarked approaches like MF-VAR should be useful too.
It looks from this paper that my approach is theoretically acceptable in the abstract, although I should use more intelligent interpolation of the low-frequency data to the higher-frequency dataset, such as cubic splines via SAS's PROC EXPAND procedure. There are some people, like Marc Wildi, who use this same exaggerated example of GDP as the low-frequency variable to argue that there is less usefulness in the use case I'm proposing than in the traditional opposite. He does have some cool MDFA signal extraction methods that I don't understand, so that's something else to read up on.
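A small sketch of the two alignments discussed above (not from the original post; it uses the toy numbers from the question). The merge reproduces the simple join, i.e. each annual GDP value repeated across its four quarters; the approx() call shows a linear interpolation at quarter midpoints as a lightweight stand-in for a fancier interpolation (e.g. cubic splines via PROC EXPAND), which would need more than two annual observations.

```r
inflation <- data.frame(
  year      = rep(2006:2007, each = 4),
  quarter   = rep(1:4, times = 2),
  inflation = c(3, 3.5, 3, 3.5, 3.5, 3.4, 3.4, 3.4)
)
gdp <- data.frame(year = 2006:2007, gdp = c(3, 3.1))

# Simple join: annual GDP repeated for every quarter of that year ("step" series).
combined <- merge(inflation, gdp, by = "year")
combined <- combined[order(combined$year, combined$quarter), ]

# Smoother alternative: interpolate GDP at quarter midpoints (linear here; rule = 2
# holds the endpoints flat outside the observed annual range).
q_mid <- combined$year + (combined$quarter - 0.5) / 4
combined$gdp_interp <- approx(x = gdp$year + 0.5, y = gdp$gdp,
                              xout = q_mid, rule = 2)$y
combined
```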
2020-11-24 04:00:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5736666321754456, "perplexity": 1509.3943999700798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141171077.4/warc/CC-MAIN-20201124025131-20201124055131-00654.warc.gz"}
http://axpw.rocknaechtchen.de/18-problem-set-5th.html
Published: Saturday, November 30, 2019.

Question A (i): With ideal gases, we don't have to worry whether the gas is made up of helium, oxygen, chlorine, or whatever.

(Q17-18) Problem set 3 (use the Excel file): The following tabulations are actual sales of units for six months and a starting forecast in January.

Problem 1: Brillouin zones and band diagrams. In class, we derived the irreducible Brillouin zone for a lattice of cylindrical dielectric rods in air with lattice constant a: either a square lattice, where the lattice vectors differ by 90 degrees, or a triangular lattice, where the lattice vectors differ by …

If 50 different students try out for a team of 30 players, in how many different ways can the coach choose the team?

Determine the mean of the following set of numbers: 40, 61, 95, 79, 9, 50, 80, 63, 109, 42. In this problem, students may also choose to make a table or draw a picture to organize and represent their thinking.

A charge of …18x10-8 C experiences a northward force of 4.50x10-5 N when placed a distance of 25.0 cm from a source charge. Determine the magnitude and direction of the electric field at this location.

Smooth vector fields $w_1, w_2, w_3$ are defined by $w_1 = x_2\,\partial/\partial x_3 - x_3\,\partial/\partial x_2$, $w_2 = x_3\,\partial/\partial x_1 - x_1\,\partial/\partial x_3$, and $w_3 = x_1\,\partial/\partial x_2 - x_2\,\partial/\partial x_1$.

Each number in the sequence is called a term (or sometimes "element" or "member"); read Sequences and Series for a more in-depth discussion.

Find the probability that (a) the sum of the dice is 5, and (b) the faces of the dice are the same. A card is drawn from a well-shuffled deck of 52 cards.

The radius of the circular base (bottom) is 3 units, and the formula for the area of a circle is A = πr².

Graph points on the coordinate plane to solve real-world and mathematical problems. (Module 3, Lesson 18: Distance on the Coordinate Plane.)

Math 20, Problem Set 3 (due July 18): This problem set is due at the beginning of class. Note: some problems are optional; problems with one or more stars are more difficult and can be treated as optional as well, although an ambitious student should attempt all problems.

101 Problem Set 4 (due November 5th, 1pm): You can collaborate with other students when working on problems. However, you should write the solutions using your own words and thought. Hand in your homework to me by email. Solutions should be typeset.

Problem Set 1 collaboration policy: I encourage collaboration on homework in this course. However, if you do your homework in a group, be sure it works to your advantage rather than against you. Good grades for homework you have not thought through translate to poor grades on exams. You must turn in your own write-ups of all problems, and if you do collaborate, you must write … on the front of …

You are welcome to brainstorm with other students in the class; however, you have to write your own solutions in your own words without looking at the write-ups of other students in the class.

Your solutions are to be written up in LaTeX (you can use the LaTeX source for the problem set as a template) and submitted as a PDF file named SurnamePset2.pdf (but replace Surname with your …).

Spin correlations (8 points): Consider a one-dimensional lattice with N lattice sites and assume that the ith lattice site has spin $s_i = \pm 1$.

Regelation (8 points): A light rigid metallic bar of rectangular cross section lies on a block of ice, extending slightly …

Fifth Problem Set for Physics 846 (Statistical Physics I), fall quarter 2003. Important dates: Oct 30, 10:30am-12:18pm, midterm exam; Nov 11, no class; Nov 27, no class; Dec 11, 9:30am-11:18am, final exam. Due date: Tuesday, Nov 4.

Problem 1 [20 points]: (a) Consider a spherical shell of radius R, with uniform surface charge density σ_o, centered on the origin.

Find the energy stored in the magnetic field of the toroid.

Some theory of orthogonal matrices: (a) Show that, if two matrices Q1 and Q2 are orthogonal, then their product Q1Q2 is orthogonal.

(b) Compute a set of eigenvectors and generalized eigenvectors (as defined in the handout) of A to give a complete basis.

Plot the error in this eigenvalue as a function of how many Ax matrix-vector multiplies you perform (use a semilog or log-log scale as appropriate).

(a) The graphs of x → F_2(x, t) = cos²(x − 2t) for t = −1, 0, 1 all have the same sinusoidal shape f(u) = cos²(u) shifted along the x-axis, i.e. this wave form travels down the string over time with the "wave speed."

Solution: (a) If m = n then the row space of A equals the column space. These are not the same (and they have different rank), so the two matrices are not row-equivalent.

(9) An arithmetic sequence has 1st term 6 and common difference 624. A geometric sequence has 1st term 2 and common ratio 3.

"Age" word problems: In January of the year 2000, I was one more than eleven times as old as my son William. By setting up a system and following it, you can be successful with word problems.

In the Harberger two-sector model, with overall supplies of labor and capital fixed and …

… individuals i, with preferences over private goods c and local public goods g governed by the utility function $\log(c - a_i) + \log(g)$ and endowment $y_i$ of the private good. Individuals are identical within each community.

If the length of the tire rod connecting the three tires of the larger train as shown below is 36 inches, write an equation to find the length of the tire rod of the smaller train.

Understand that a variable can represent an unknown number. Understand solving an equation or inequality as a process of answering a question: which values from a specified set, if any, make the equation or inequality true?

Use the Definition of Percents: In the following exercises, write each percent as a ratio.

The store has put a robotics kit on sale for 15% off the original price of $40. If she was registering for 5 different sports, how much did it cost?

Lesson 15 Problem Set: If he harvested 18 tons of corn, how many tons did he take to market?

Solve using the standard algorithm. Show your thinking about the units of your product.

Numbers, such as 495,784, have six digits. This essentially means that you can only use ten unique digits (0 to 9) in each place of a base ten number.

In computer science and mathematics, the Josephus Problem (or Josephus permutation) is a theoretical problem. The counting out begins at some point in the circle and proceeds around the circle in a …

Fall 2019, Problem Set 4 (October 18, 2019): This fourth problem set explores set cardinality and graph theory. Submit this problem set using github.

CS91 Problem Set 2: This problem set is due at 11:59pm on Sunday, October 28th.

Create a new RStudio project somewhere on your computer. Submit this as a PDF on Learning Suite.

MATH 216 Problem Set 18: This set is due by noon on Friday, May 25.

Math 114, Problem Set 9 (due Monday, November 18): (1) Let V be a Banach space, and let $f : V \to \mathbb{R}^n$ be a linear map. Show that $f$ is bounded if and only if the kernel $\ker(f)$ is a closed subset of $V$. (2) Let $V$ be a Banach space with norm $\|\cdot\|_V$, let $V_0 \subset V$ be a subspace, and let $W$ denote the quotient $V/V_0$. Define a map $\|\cdot\|_W : W \to \mathbb{R}$ by …

If $G$ is a nonabelian group, show that $G/Z(G)$ is not cyclic.

Equivalently, the exponent is the smallest positive integer $k$ such that …

Prove that elements $x$ and $y$ are conjugate in a group $G$ if and only if $\chi(x) = \chi(y)$ for all irreducible characters $\chi$ of $G$.

Derive it in the case of affine sl(2) from the Jacobi triple product identity.

If $f_n \to f$ in measure and $|f_n| \le g$ for some …

Problem Set 18 (due Tuesday, March 25, 2008, Lecture 21). PLQ 20: What are Kappa, T_C and C_Po? Why are there two pathways (branches) in Figure 8-13, page 525?

BIO 184, PAL Problem Set, Lecture 6 (Brooker Chapter 18), Mutations, Section A: You are a member of a team of scientists that recently discovered a previously unknown animal species, which is not a mammal.

Thermodynamics: Equilibria. 6) For each of the following, determine whether or not the equilibrium will favor products over reactants.

…20 ppm, which is a doublet and could only be connected to the CH of the isopropyl.

Variation of an action that is a pure divergence; the field equation for an action that depends on the second derivatives of a scalar field φ.

An authenticator for a message broadcast by s_1 is a vector of MACs, one for each of the message's recipients.

Aristotle would say that a ball rolling to a stop is an example of the ball's natural …

Staple this sheet to the front of your essay responses. Each multiple choice question must be answered (1 point each).

Consider a simple product mix problem where a furniture company tries to find an optimal product mix of four …

Suppose $f : (-1, 1) \to \mathbb{R}$ satisfies the following property: for any $A$ with $\tfrac{1}{2} > A > 0$, …

Show that there must be a maximal one and that any maximal one is a total order.
2020-01-21 14:37:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2576234042644501, "perplexity": 2803.9278203818844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250604397.40/warc/CC-MAIN-20200121132900-20200121161900-00080.warc.gz"}
http://codeforces.com/problemset/problem/1283/D
D. Christmas Trees

time limit per test: 2 seconds
memory limit per test: 256 megabytes
input: standard input
output: standard output

There are $n$ Christmas trees on an infinite number line. The $i$-th tree grows at the position $x_i$. All $x_i$ are guaranteed to be distinct.

Each integer point can be either occupied by the Christmas tree, by the human or not occupied at all. Non-integer points cannot be occupied by anything.

There are $m$ people who want to celebrate Christmas. Let $y_1, y_2, \dots, y_m$ be the positions of people (note that all values $x_1, x_2, \dots, x_n, y_1, y_2, \dots, y_m$ should be distinct and all $y_j$ should be integer). You want to find such an arrangement of people that the value $\sum\limits_{j=1}^{m}\min\limits_{i=1}^{n}|x_i - y_j|$ is the minimum possible (in other words, the sum of distances to the nearest Christmas tree for all people should be minimized).

In other words, let $d_j$ be the distance from the $j$-th human to the nearest Christmas tree ($d_j = \min\limits_{i=1}^{n} |y_j - x_i|$). Then you need to choose such positions $y_1, y_2, \dots, y_m$ that $\sum\limits_{j=1}^{m} d_j$ is the minimum possible.

Input

The first line of the input contains two integers $n$ and $m$ ($1 \le n, m \le 2 \cdot 10^5$) — the number of Christmas trees and the number of people. The second line of the input contains $n$ integers $x_1, x_2, \dots, x_n$ ($-10^9 \le x_i \le 10^9$), where $x_i$ is the position of the $i$-th Christmas tree. It is guaranteed that all $x_i$ are distinct.

Output

In the first line print one integer $res$ — the minimum possible value of $\sum\limits_{j=1}^{m}\min\limits_{i=1}^{n}|x_i - y_j|$ (in other words, the sum of distances to the nearest Christmas tree for all people). In the second line print $m$ integers $y_1, y_2, \dots, y_m$ ($-2 \cdot 10^9 \le y_j \le 2 \cdot 10^9$), where $y_j$ is the position of the $j$-th human. All $y_j$ should be distinct and all values $x_1, x_2, \dots, x_n, y_1, y_2, \dots, y_m$ should be distinct. If there are multiple answers, print any of them.

Examples

Input
2 6
1 5
Output
8
-1 2 6 4 0 3

Input
3 5
0 3 1
Output
7
5 -2 4 -1 2
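One standard way to attack this problem (not given in the statement; a sketch in Python rather than a contest submission): run a multi-source BFS outward from all tree positions and occupy free integer points in order of increasing distance to the nearest tree. Each free point has a fixed cost, so greedily taking the $m$ cheapest points is optimal, and the whole thing runs in O(n + m).

from collections import deque

def place_people(xs, m):
    occupied = set(xs)              # trees themselves can never be used
    ys, total = [], 0
    q = deque((x, 0) for x in xs)   # BFS frontier: (position, distance to nearest tree)
    while len(ys) < m:
        p, d = q.popleft()
        for nxt in (p - 1, p + 1):
            if nxt not in occupied and len(ys) < m:
                occupied.add(nxt)
                ys.append(nxt)
                total += d + 1
                q.append((nxt, d + 1))
    return total, ys

print(place_people([1, 5], 6))   # (8, [0, 2, 4, 6, -1, 3]): cost matches the first sample

The positions printed may differ from the sample output; the statement explicitly allows any arrangement achieving the minimum sum.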
2020-07-08 14:44:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4805241525173187, "perplexity": 471.0646831809628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897027.14/warc/CC-MAIN-20200708124912-20200708154912-00087.warc.gz"}
http://www.math.princeton.edu/events/seminars/algebraic-topology-seminar-topology-seminar/cohomology-g-spaces-g-compact-group
# The Cohomology of G-spaces (G compact group)

Thursday, April 30, 2015 - 3:00pm to 4:00pm. Please note special time.

P. A. Smith proved in the late 1930's:

Theorem: If $G = (Z/p)^r$ acts freely on the sphere $S^n$, then $r < 2$.

This leads to:

Rank Conjecture: If $G$ acts freely on a product of spheres $X = (S^{n_1}) \times \ldots \times (S^{n_k})$, as the identity on $H^{*}$, then $r < k+1$.

If a group $G$ acts freely on a space $X$ then there is a map $f\colon X/G \longrightarrow B_G$ (the classifying space of $G$), making $H^{*}(X/G)$ into an $H^{*}(B_G)$-module. For $G$ an elementary abelian $p$-group (as above), $H^2(B_G) =$ the vector space over $F_p$ of rank $r$, and let $y_1,\ldots, y_r$ be a basis. Define the (linear) map $\phi \colon H^2(B_G) \longrightarrow L =$ direct limit of $H^{2p^{q}}(X/G)$ using the $p$-th power map. Define the nil rank of $X$ to be the rank of the kernel of $\phi$. If $F \colon Y \longrightarrow X$ is a $G$-map, then the nil rank of $Y$ is at least as large as the nil rank of $X$. $H^{*}(X/G)$ is $H^{*}(B_G)$ nilpotent means nil rank $X = r$. If $H^{*}(X)$ is finitely generated then $H^{*}(X/G)$ is also if and only if nil rank of $X = r$. This of course is the case for a finite dimensional $G$-space $X$. But infinite dimensional spaces may be studied and one might try to climb the Postnikov tower of $X$ to prove theorems, where the spaces are rarely finite dimensional. For example we show:

Theorem: If $G$ acts $H^{*}(B/G)$ nilpotently (as above) on $X$ homotopy equivalent to $Y \times (S^1)^q$, then a subgroup $K$ of $G$ of rank $\geq r - q$ acts $H^{*}(B/K)$ nilpotently on $Y$.

Corollary: The rank conjecture holds for $X$ homotopy equivalent to a product of circles and $S^n$'s (single $n$).

Theorem: Let $p = 2$ and let $G$ act $H^{*}(B/G)$ nilpotently on $X$ of the homotopy type of $(S^3)^s \times (S^n)^t$. Then $r \leq s + t$. A similar more complicated theorem holds for $p > 2$.

Speaker: William Browder, Princeton University
Event Location: Fine Hall 314
2017-11-24 14:38:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9886425137519836, "perplexity": 1282.408142630194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808254.76/warc/CC-MAIN-20171124142303-20171124162303-00249.warc.gz"}
https://math.stackexchange.com/questions/2892459/how-to-minimize-this-quartic-function
# How to minimize this quartic function? I need to find the vector that minimizes this matrix equation; $$\bar{v}.M_1.v-(\bar{v}.M_2.v)^2$$ $v$ is a normalized complex vector and $\bar{v}$ is the complex conjugate but I can arrange parameters in such a way that it can be all real. $M_1$ and $M_2$ are Hermitian matrices and therefore the function I am asking is eventually a real number. My idea was to rewrite the vector in terms of coefficients and then calculate the product and I get a quartic polynomial but solving it in higher dimensions is a mess. So I thought maybe it is possible to use some matrix techniques which always simplifies things. Approximations are also accepted. Also is there calculus techniques such that I can take derivative of this function and find the optimum vector? • Just a usage point: the object you're trying to minimize is a function, not an equation. – Matthew Leingang Aug 23 '18 at 19:04 • Edited... Thanks – user532063 Aug 23 '18 at 19:06 • I missed the normalized vector constraint. I’ll delete my comment. – mathcounterexamples.net Aug 23 '18 at 19:53 Yes, there is a calculus for functions defined in terms of vector and matrix operations. It takes advantage of the algebraic rules for those operations (which is good), but care must be taken because of the noncommutativity of matrix multiplication. Here's an example. If $x$ is a real variable, $a$ is a constant, and $f(x) = ax^2$, then $f'(x) = 2ax$. if $\mathbf{x}$ is a vector variable, and $A$ is a constant square matrix, then the analogous quadratic function is $f(\mathbf{x}) = \mathbf{x}\cdot A \mathbf{x} = \mathbf{x}^T A \mathbf{x}$. We can express the derivative of $f$ in terms of $\mathbf{x}$. If the $k$-th component of $\mathbf{x}$ is $x_k$, and the $(i,j)$-entry of $A$ is $a_{ij}$, then $$f(\mathbf{x}) = \sum_{i,j=1}^n x_i a_{ij} x_j$$ So for each $k$, \begin{align*} \frac{\partial f}{\partial x_k} &= \sum_{i,j=1}^n \frac{\partial}{\partial x_k}(x_i a_{ij} x_j) = \sum_{i,j=1}^n \left(\delta_{ik} a_{ij} x_j + x_i a_{ij} \delta_{jk}\right) \\ &= \sum_{j=1}^n a_{kj}x_j + \sum_{i=1}^n a_{ik}x_i \end{align*} The first summation is equal to the $k$-th component of $A\mathbf{x}$. The second summation is equal to the $k$-th component of $A^T\mathbf{x}$. So we can say: $$\frac{\partial}{\partial \mathbf{x}}(\mathbf{x}^T A \mathbf{x}) = A \mathbf{x} + A^T \mathbf{x}$$ If $A$ is symmetric, then $\frac{\partial}{\partial \mathbf{x}}(\mathbf{x}^T A \mathbf{x}) = 2A\mathbf{x}$, which is pretty close to $\frac{d}{dx}(ax^2) = 2ax$. You can read more about this kind of calculus on Wikipedia. The thing is, you almost need to work it out in coordinates first, in order to convince yourself that the matrix algebra shortcuts are valid. • So, after inroducing a Lagrange multiplier, you get the equation $2 M_1 v - 4 (\bar{v}^T M_2 v) M_2 v - 2 \lambda v = 0$ as well as $v^T v = 1$. Not so easy to solve... – Robert Israel Aug 23 '18 at 19:33 • @RobertIsrael: True – Matthew Leingang Aug 23 '18 at 19:35 If you didn't have normalization, this would be exactly solvable in the n=3 case by Sums of Squares methods. In general this is hard, but there exist approximation algorithms with $1 - Cn^{-2}$ approximation ratios for quartic optimization with quadratic constraints (exactly what you have here). As far as an explicit form for this problem, I'll write $A=M_1$, $B=M_2$, and $x=v$. 
The Lagrangian becomes $$L(x,\lambda)=x^* A x - (x^* B x)^2 + \lambda(1 - x^* x)$$ Taking the derivative says that at optimality $$Ax - 2B x x^* B x - \lambda x=0\text{, or } (A - 2Bxx^*B - \lambda I)x = 0$$ and, necessarily, $x^*x=1$. As far as I can tell this isn't easily solved. • Many many thanks, I hope this article really helps... What about matrix differentiation techniques or similar approaches? – user532063 Aug 23 '18 at 19:13 • If you didn't have normalization, the minimimum wouldn't exist: as long as $\bar{v}.M_2.v \ne 0$, you can make the objective an arbitrarily large negative number by appropriate scaling. – Robert Israel Aug 23 '18 at 19:20 Hint Note $f(v)=\bar{v}.M_1.v-(\bar{v}.M_2.v)^2$ the function. Your target is to find $v$ such that $f^\prime(v)=0$, where $f^\prime$ is the Fréchet derivative. To do that you need to use heavily the chain rule writing $f$ as a composition of functions. For example $\varphi(v)= \bar{v}.M_1v=\varphi_1(\varphi_2(v),v)$ where $\varphi_2(v)=\bar{v}$ and $\varphi_1(u,v)=u.M_1.v$. Now using Fréchet derivatives, $$\begin{cases}\varphi_2^\prime(v).h= \bar{h}\\ \varphi_1^\prime(u,v).(h,k)= h.M_1.v+ \bar{u}.M_1k \end{cases}$$ and chain rule $$\varphi^\prime(v).h=\bar{h}.M_1v +\bar{v}.M_1.h=2\bar{h}.M_1.v$$ using the fact that $M_1$ is hermitian. You can then follow a similar path for the function $\psi(v)=(\bar{v}.M_2.v)^2$. And finally use Lagrange multipliers to take into consideration the normalization. • For the normalization I should add Lagrange multiplier to the equation right? – user532063 Aug 23 '18 at 19:43 • Yes indeed. In that case you’ll have to use Lagrange multiplier. – mathcounterexamples.net Aug 23 '18 at 19:51 • It is not sufficient that the derivative is zero but also the second derivative must be negative. – user532063 Aug 25 '18 at 18:52 Given the matrix $X\in{\mathbb C}^{n\times n},\,\,$ define a function $$F(X) = \begin{bmatrix}{\rm Re}(X) & -{\rm Im}(X)\\{\rm Im}(X) & {\rm Re}(X) \end{bmatrix}\in{\mathbb R}^{2n\times 2n}$$ Similarly, given the vector $x\in{\mathbb C}^{n},\,\,$ define a function $$f(x) = \begin{bmatrix}{\rm Re}(x)\\{\rm Im}(x)\end{bmatrix}\in{\mathbb R}^{2n}$$ Using these functions, your complex problem can be transformed into a real problem, which makes it easier to calculate gradients. Define some new, real variables \eqalign{ A &= F(M_1),\,\,\,B = F(M_2),\,\,\,x = f(v) \cr \alpha &= v^*M_1v = \tfrac{x^TAx}{x^Tx} \cr \beta &= v^*M_2v = \tfrac{x^TBx}{x^Tx} \cr } Since the matrices $(M_1,M_2)$ are hermitian, the matrices $(A,B)$ are symmetric. Also note that $v$ was constrained to be a unit vector, whereas $x$ is unconstrained. Start by finding the differential and gradient of $\alpha$ \eqalign{ d\alpha &= \frac{(2Ax)^Tdx}{x^Tx} - \frac{(x^TAx)(2x^T)dx}{(x^Tx)^2} \cr &= \frac{2(Ax-\alpha x)^Tdx}{x^Tx} \cr \frac{\partial\alpha}{\partial x} &= \tfrac{2(A-\alpha I)x}{x^Tx} \cr } Similarly, the gradient of $\beta$ is $\frac{\partial\beta}{\partial x} = \frac{2(B-\beta I)x}{x^Tx}$ The gradient of your target function can be calculated as \eqalign{ \phi &= \alpha-\beta^2 \cr \frac{\partial\phi}{\partial x} &= \frac{\partial\alpha}{\partial x} - 2\beta\frac{\partial\beta}{\partial x} = \frac{2\,\big(Ax-2\beta Bx + 2\beta^2x-\alpha x\big)}{x^Tx} \cr } Now use your favorite gradient-based unconstrained minimization method (e.g. Barzilai-Borwein) to solve for $x$ -- and remember that $(\alpha,\beta)$ are not constants, but are functions of $x$. 
Once you have $x$, you can recover the $v$ vector which solves the original complex problem. Here's some Julia code:

function q(A,x); return sum(x'*A*x); end         # quadratic form
function normF(x); return sqrt(sum(x'*x)); end   # frobenius norm
function F(A,B,x); return q(A,x) - q(B,x)^2; end

function gradF(A,B,x)   # gradient of F; this header is inferred from the calls in bbSolve below
    a,b,c = q(A,x), q(B,x), q(one(A),x)
    return (2/c) * ((A*x-a*x) - 2*b*(B*x-b*x))
end

# Barzilai-Borwein
function bbSolve(tol,nMax,A,B,x)
    # initialize
    g=x.+0; dx=x*0; dg=x*0; xBest=x.+0;
    g=gradF(A,B,x); vBest=v=normF(g)
    n=0; b=1e-5; dx=b*(x.+b); dg=g.+0;
    g=gradF(A,B,x-dx); dg -= g
    # iterate
    while true
        n += 1
        b = sum(dg'*dx) / sum(dg'*dg)   # barzilai steplength
        dx = -b*g; x += dx
        dg = -g; g=gradF(A,B,x); dg += g
        v = normF(g)
        if v < vBest
            vBest = v; xBest = x
        end
        if isnan(v) || isinf(v)
            break
        elseif v < tol || n > nMax
            break
        end
    end
    return n, xBest
end

# test with randomly generated matrices
n=5; x=ones(2*n,1);                       # initial guess x = 1
M1=rand(n,n)+im*rand(n,n); M1+=M1';       # hermitian
M2=rand(n,n)+im*rand(n,n); M2+=M2';
A=[real(M1) -imag(M1); imag(M1) real(M1)];
B=[real(M2) -imag(M2); imag(M2) real(M2)];
@time k,x = bbSolve(1e-14, 200, A,B,x);
@printf( "k, F(x), |x|, x = %s\n", (k, F(A,B,x), normF(x), x) );
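As a quick sanity check on the gradient formula derived above, here is a small numerical comparison against central finite differences. It is an illustrative sketch (not from the original thread) in Python/NumPy, with random symmetric matrices standing in for $F(M_1)$ and $F(M_2)$:

import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)); A = A + A.T   # symmetric, plays the role of F(M1)
B = rng.standard_normal((n, n)); B = B + B.T   # symmetric, plays the role of F(M2)
x = rng.standard_normal(n)

def phi(x):
    c = x @ x
    return (x @ A @ x) / c - ((x @ B @ x) / c) ** 2

def grad_phi(x):
    # the formula derived above: (2/x'x) * (A x - 2*beta*B x + 2*beta^2 x - alpha x)
    c = x @ x
    alpha = (x @ A @ x) / c
    beta = (x @ B @ x) / c
    return (2.0 / c) * (A @ x - 2.0 * beta * (B @ x) + 2.0 * beta ** 2 * x - alpha * x)

eps = 1e-6
fd = np.array([(phi(x + eps * e) - phi(x - eps * e)) / (2 * eps) for e in np.eye(n)])
print(np.max(np.abs(fd - grad_phi(x))))   # should be tiny (around 1e-8 or smaller)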
2019-08-26 04:53:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9972357153892517, "perplexity": 660.82594246891}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330968.54/warc/CC-MAIN-20190826042816-20190826064816-00098.warc.gz"}
https://gamedev.stackexchange.com/questions/32141/z-axis-in-isometric-tilemap
# Z axis in isometric tilemap

I'm experimenting with isometric tile maps in JavaScript and HTML5 Canvas. I'm storing the tile map data in a JavaScript 2D array.

// 0 - Grass
// 1 - Dirt
// ...
var mapData = [
  [0, 0, 0, 0, 0],
  [0, 0, 1, 0, 0, ...
]

and draw

for(var i = 0; i < mapData.length; i++) {
  for(var j = 0; j < mapData[i].length; j++) {
    var p = iso2screen(i, j, 0); // X, Y, Z
    context.drawImage(tileArray[mapData[i][j]], p.x, p.y);
  }
}

but this call means every tile's Z axis is equal to zero: var p = iso2screen(i, j, 0); Does anyone have an idea how to do something like mapData[0][0] with Z axis equal to 3 and mapData[5][5] with Z axis equal to 5? I have an idea: write a function (constructor) for grass, dirt, and so on, store these in the 2D array, draw them, and later call mapData[0][0].setZ(3); But is it a good idea to write functions for each tile?

• Your question is kind of unclear. Instead of saying "do something like..." Can you tell us what you want to do? I don't know how you're going to call .setZ(3) on an integer stored in an array. – MichaelHouse Jul 12 '12 at 4:56
• .setZ(3) would work if I create functions for each tile with a "setZ" method, but maybe there is another method instead of creating functions for all tiles? – gyhgowvi Jul 12 '12 at 5:29

I'm assuming by Z axis we're kind of talking about the 'height' of the tile here. If you're sure that there would only ever be one tile on each of the X and Y axes (like a heightmap) then you can just modify your map array to store two values rather than a number:

// mapData[i][j][0] is the tile Id, mapData[i][j][1] is the Z axis
var mapData = [
  [[0,0], [0,1], [0,0], [0,2], [0,3]],
  [[0,1], [0,0], [1,0], [0,1], [0,2], ...
]

If you're not sure of this (i.e., you might have caves or overhangs or similar features) then the way to do it is slices. Slices are all the same size (the whole of your map) and you would have one for each and every Z level possible. You would also require a new tile type of 'Nothing', which means you just don't draw anything. This way is more expensive, but if you need the flexibility it's the way to do it.
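To make the 'slices' idea from the answer concrete, here is a tiny illustrative sketch. It is written in Python rather than the JavaScript used above purely for brevity, and the tile ids (including the NOTHING sentinel) are made up for the example; iso2screen stands for the same projection helper the question already uses.

NOTHING = -1          # hypothetical "draw nothing" tile id
GRASS, DIRT = 0, 1

# One full-size layer per Z level; most cells in upper layers are NOTHING.
slices = [
    [[GRASS, GRASS], [DIRT, GRASS]],         # z = 0: the ground layer
    [[NOTHING, DIRT], [NOTHING, NOTHING]],   # z = 1: a single raised tile
]

def draw(slices):
    # Draw lower slices first so raised tiles are painted over the ground.
    for z, layer in enumerate(slices):
        for i, row in enumerate(layer):
            for j, tile in enumerate(row):
                if tile != NOTHING:
                    # here you would call iso2screen(i, j, z) and drawImage(...)
                    print("tile", tile, "at", (i, j, z))

draw(slices)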
2020-09-29 00:19:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20592617988586426, "perplexity": 1782.3101061373552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401617641.86/warc/CC-MAIN-20200928234043-20200929024043-00601.warc.gz"}
https://www.physicsforums.com/threads/determining-the-distribution-function.116894/
# Determining the distribution function 1. Apr 7, 2006 ### shan I've gotten a weird answer after doing the problem but I'm stuck as to where I messed up. The density function is this: $$f_{X} (x) = \frac{1}{6}x$$ for $$0<x\leq2$$ $$= \frac{1}{3}(2x-3)$$ for 2<x<3 and 0 otherwise And the question is to find the distribution function. So integrating for the first part from 0 to x: $$\int \frac{1}{6}u du = \frac {1}{6} \frac{x^2}{2} = \frac{x^2}{16}$$ for 0<x<=2 I have a big problem with the second, part. This is what I did (integrating from 2 to x): $$\int \frac{1}{3} (2u-3) du = \frac{1}{3} [u^2-3u] = \frac{1}{3} (x^2-3x - (2^2-6)) = \frac{x^2}{3} - x + \frac{2}{3}$$ for 2<x<3 I know my answer for the second part is wrong as when x=2, the distribution function = 0 and when x=3, the distribution function = 2/3. But the distribution shouldn't be broken up like that at x=2 and supposedly at x=3, it should = 1. So did I forget to do something to the end points or did I not integrate properly? Last edited: Apr 7, 2006 2. Apr 7, 2006 ### Hurkyl Staff Emeritus The distribution function is given by: $$F_X(x) := \int_{-\infty}^x f_X(x) \, dx$$ So when you computed $\int_2^x f_X(x) \, dx$, you computed the wrong thing, and there's no reason you should have gotten the right answer. P.S. 6*2 is not 16. 3. Apr 7, 2006 ### shan whoops sorry, typo that first part is $$\int_0^x \frac{1}{6}u du = \frac {1}{6} \frac{x^2}{2} = \frac{x^2}{12}$$ which shows I don't really understand what I'm doing but now that you mentioned it... is the second part then given by $$\int_0^2 \frac{1}{6}u du + \int_2^x \frac{1}{3} (2u-3) du$$? ie I forgot to add the first part of f(x)? 4. Apr 7, 2006 ### Hurkyl Staff Emeritus Right. Of course, there should also be a $\int_{-\infty}^0 f_X(u) \, du$ component as well. (But you know it's zero, so I suppose that's why you left it out) 5. Apr 7, 2006 ### shan Thank you very much for your help :)
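Putting the thread's corrections together, a worked completion (not posted in the original thread) of the distribution function is
$$F_X(x) = \begin{cases} 0, & x < 0,\\[4pt] \dfrac{x^2}{12}, & 0 \le x \le 2,\\[4pt] \dfrac{1}{3} + \dfrac{1}{3}\left(x^2 - 3x + 2\right) = \dfrac{x^2 - 3x + 3}{3}, & 2 < x < 3,\\[4pt] 1, & x \ge 3. \end{cases}$$
As a check, the pieces agree at the joins: $F_X(2) = \tfrac{4}{12} = \tfrac13$ from the first branch and $\tfrac{4-6+3}{3} = \tfrac13$ from the second, while $F_X(3^-) = \tfrac{9-9+3}{3} = 1$, so $F_X$ is continuous, nondecreasing, and reaches $1$, as a distribution function must.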
2017-08-22 13:12:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9019365906715393, "perplexity": 821.5871307301347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110774.86/warc/CC-MAIN-20170822123737-20170822143737-00348.warc.gz"}
https://engineering.stackexchange.com/questions/26010/what-type-of-bearing-used-in-swivel-arm
# What type of bearing used in swivel arm? To model a swivel arm on a column that rotates by hand, what type of bearing should I use? Should it support both radial and axial loads, or is supporting the axial load enough? • Use a bearing capable of supporting all the loads applied - first define those loads. Feb 16, 2019 at 10:26 • It probably needs to support some moments, as well as forces. Feb 16, 2019 at 11:11 • OK, can I use a pair of deep groove ball bearings and a thrust ball bearing? Feb 16, 2019 at 11:55 • What direction is the thrust? Feb 16, 2019 at 12:50 • The thrust direction is down, because I want to hang a 9 kg mass from the end of the tube at the right. Feb 16, 2019 at 13:52 You have both a moment and an axial load: $$M = 9\,\mathrm{kg}\cdot L + \text{lever weight}\cdot\frac{L}{2}$$ $$P = 9\,\mathrm{kg} + \text{lever weight}$$
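To put rough numbers on this (an illustrative calculation with assumed values, not from the thread): take the arm length as $L = 1\ \mathrm{m}$, suppose the arm itself weighs about $3\ \mathrm{kg}$, and use $g \approx 9.81\ \mathrm{m/s^2}$. Then
$$P \approx (9 + 3)\,\mathrm{kg}\times 9.81\,\mathrm{m/s^2} \approx 120\ \mathrm{N}, \qquad M \approx \left(9\cdot 1 + 3\cdot\tfrac{1}{2}\right)\mathrm{kg\,m}\times 9.81\,\mathrm{m/s^2} \approx 100\ \mathrm{N\,m}.$$
So the pivot has to react a downward (axial) load of roughly 120 N together with an overturning moment of roughly 100 N·m; a thrust bearing alone does not take the moment, which is usually carried by two radial bearings spaced apart along the column (or by a single bearing arrangement rated for moment loads).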
2022-11-29 11:10:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2286100685596466, "perplexity": 1877.211173380461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710691.77/warc/CC-MAIN-20221129100233-20221129130233-00376.warc.gz"}
http://mathhelpforum.com/algebra/200963-parametric-equation-problem.html
# Math Help - Parametric Equation Problem

1. ## Parametric Equation Problem

x^2-6xy+10y^2=4 Find all the integer pairs of x and y which satisfy the equation. Thanks!

2. ## Re: Parametric Equation Problem

You can solve for x and then choose y values that give integer values of x.

3. ## Re: Parametric Equation Problem

Originally Posted by Telo x^2-6xy+10y^2=4 find all the integer pairs of x and y which satisfy the equation. Thanks! $x^2 - 6xy + 10y^2 - 4 = 0$ $a = 1$, $b = -6y$, $c = 10y^2 - 4$ the quadratic formula yields ... $x = 3y \pm \sqrt{4 - y^2}$ since x is an integer value, the value under the radical must be a nonnegative perfect square ... finish it.
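Finishing the hint (a worked completion, not part of the original thread): $4 - y^2$ must be a nonnegative perfect square, so $y \in \{-2, 0, 2\}$; for $y = \pm 1$ the radicand is $3$, which is not a perfect square, and $|y| \ge 3$ makes it negative. Hence
$$y = 0 \Rightarrow x = \pm 2, \qquad y = 2 \Rightarrow x = 3\cdot 2 \pm 0 = 6, \qquad y = -2 \Rightarrow x = -6,$$
so the integer pairs are $(2,0)$, $(-2,0)$, $(6,2)$ and $(-6,-2)$; each checks directly in $x^2 - 6xy + 10y^2 = 4$, e.g. $36 - 72 + 40 = 4$.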
2015-10-14 03:27:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7947283983230591, "perplexity": 1245.4833328096872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738095178.99/warc/CC-MAIN-20151001222135-00193-ip-10-137-6-227.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/27588/repetition-of-a-word-on-two-lines?answertab=oldest
# Repetition of a word on two lines This question led to a new feature in a package: impnattypo There have been questions about rivers asked in the past. I'm wondering about words that are repeated in the same place on two consecutive lines, like "le monde" in the following example: Does this have a name? Should this be avoided? If so, how do you avoid such things (I guess in a similar way as you would avoid rivers)? Edit: A LuaTeX implementation allowing to automate this would be welcome. Addendum by Mico: I've added a bounty of 100 points to second Raphink's plea for a LuaTeX implementation of his idea. Edit 2: I understand that fixing it automatically greatly increases time complexity since you need to do the analysis after the paragraph rendering, but the changes require to trigger a new paragraph rendering so it's a recursive process. What might be doable without increasing time complexity is to detect homeoarchies and highlight them (or underline them) using PDF annotations. I'd be quite happy with that already, as it would help to spot them and fix them manually afterwards. - I don't know if there's a name other than "repetition"; of course it should be avoided whenever possible, also at end-of-line: it may confuse the reader. –  egreg Sep 5 '11 at 20:19 Use a tie, for example juger~le; in case the text is unmodifiable, there's little you can do if this doesn't work other than trying in other places. If a paragraph is long, almost any space far enough from the start is usually a feasible line break point. If the text is modifiable, go on and change it; I often say that it's rather uncommon that one's prose is perfect at first writing and that correcting bad line breaks can help polishing it. –  egreg Sep 5 '11 at 20:29 The error this can lead to does have a name: homeoarchy is when a copyist (or, by extension, a reader) misses out a line because the start of two lines being similar. –  mas Sep 6 '11 at 7:53 I don't think this can be solved without changing the line breaking algorithm or getting suboptimal results. Imagine having two bad lines 6 and 7, line 6 could be changed by a different breaking in line 5 and line 7 could stay the same. This (I am still tired now) I believe cannot be solved using the dynamic programming approach Knuth & Plass are using thus resulting in a higher time complexity. –  topskip Sep 14 '11 at 7:48 @Raphink correct - but even then you possibly could get better results if you chose another breakpoint but this would require a different line breaking algorithm (that's what the linebreak_filter in LuaTeX is for :)) –  topskip Sep 14 '11 at 8:32 One of Don Knuth's recommendations for fixing various typographical issues is to rewrite the passage in question – assuming that doing so is possible and/or permissible, of course. (The passage you cite is one case where you mustn't change a single word, obviously.) If you can't/mustn't rewrite the passage, you can still try to change some parameters such as the line width, font size, interword spacing, and occasionally impose a tie (unbreakable space), all in order to try to mitigate the problem. Addendum: I've succeeded in reproducing the OP's text fragment in the following MWE: \documentclass{article} \usepackage[french]{babel} \usepackage{kpfonts} \begin{document} \begin{minipage}{1.7in} Je suis venu non pour juger le monde, mais pour sauver le monde. Celui qui me rejette et qui ne re\c coit pas mes \end{minipage} \bigskip \begin{minipage}{1.7in} Je suis venu non pour juger~le monde, mais pour sauver le monde. 
Celui qui me rejette et qui ne re\c coit pas mes \end{minipage} \bigskip \begin{minipage}{1.6in} Je suis venu non pour juger le monde, mais pour sauver le monde. Celui qui me rejette et qui ne re\c coit pas mes \end{minipage} \bigskip \begin{minipage}{1.8in} Je suis venu non pour juger le monde, mais pour sauver le monde. Celui qui me rejette et qui ne re\c coit pas mes \end{minipage} \end{document} The first minipage reproduces the initial problem. In example two, I've inserted a tie between "juger" and "le": this forces a hyphenation of the word "juger" and succeeds in breaking up the repetition, at the cost of loose word spacing (given the narrow measure!). The second example does not impose a tie but shortens the measure, also breaking up the vertical word repetition a bit but also suffering from loose word spacing (esp in line 3). The fourth example widens the measure a bit; now lines 2 and 3 both start with "monde" (as opposed to "le monde" in the first example), and the interword spacing looks OK overall. A slight improvement, maybe, but really only very slight. I guess the problem to solve is particularly vexing because the repeated-word group contains two, rather than just one, word! - Where does Knuth discuss this? –  N.N. Sep 6 '11 at 6:55 @N.N. There's the index entry "bad breaks, avoiding" in the TeXbook; there's also a quotation from GB Shaw on page 107. –  egreg Sep 6 '11 at 8:37 @Mico: in your example you inserted a tie between "juger" and "le monde". Another solution is to insert a tie between "sauver" and "le monde". The result is similar, but it looks nicer imo. –  ℝaphink Sep 10 '11 at 22:30 I've edited the question to ask for a LuaTeX automated implementation if possible. I think storing the first word of every line and comparing it with the first word of every new line, adding a tie when it's identical, could work, and I'm pretty sure @Patrick would be happy to have a go at that :-) –  ℝaphink Sep 13 '11 at 22:06 @Mico: I added an answer wich provides an automated way to detect homeoarchies. I've already fixed about 30 of them on a book of mine using the code. –  ℝaphink Sep 19 '11 at 15:03 I've added an homeoarchy option to the development version of the impnattypo package which adds detection for homeoarchies. It doesn't fix them since as Patrick mentioned, it would increase the time complexity too much. The result is the following: I've ran it on a 150+ pages book and it found quite a few of them. Edit: The latest version on github is now able to detect homoioteleutons (the same thing, at the end of lines) as well, so the following: \documentclass{article} \usepackage[french]{babel} \usepackage{kpfonts} \usepackage[homeoarchy, draft, homeoarchywordcolor=green, homeoarchycharcolor=blue, homeoarchymaxwords=2, homeoarchymaxchars=1]{impnattypo} \begin{document} \section{Testing homeoarchy detection} \begin{minipage}{1.7in} Je suis venu non pour juger le monde, mais pour sauver le monde. Celui qui me rejette et qui ne re\c coit pas mes \end{minipage} \section{Testing homoioteleuton detection} \begin{minipage}{1.7in} \parindent=2cm \indent Je suis venu non pour juger le monde, mais pour sauver le monde. Celui qui me rejette et qui ne re\c coit pas mes \end{minipage} \end{document} produces: and again this had helped me find quite a few of them in my books. - Great job! 
To keep the time complexity down while updating the spacing automatically, you could have an auxiliary file (I'd say, not the .aux file) with the info on what to tweak from previous runs, and only compute for changed paragraphs (perhaps with some md5 sum business?) or new paragraphs. –  Bruno Le Floch Sep 20 '11 at 0:29 @Bruno: I see what you mean. However, from fixing quite a few of these mistakes in books lately, I can tell that fixing it automatically would not really be a good idea. It is sometimes as simple as inserting a tie before the second matching word, but it can often be more complex, especially since fixing an homoioteleuton can lead to generating an homeoarchy, and fixing both introduces overfull lines since you add a lot of ties... So it's probably a better idea to keep it to the detection level. –  ℝaphink Sep 20 '11 at 6:34
2014-10-01 05:58:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7318257093429565, "perplexity": 3214.551164909175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663365.9/warc/CC-MAIN-20140930004103-00145-ip-10-234-18-248.ec2.internal.warc.gz"}
http://www.tug.org/pipermail/texhax/2004-March/001890.html
# [texhax] removing any macros from a string argument

Dieter Meinert DMeinert at RosenInspection.net
Tue Mar 23 17:33:34 CET 2004

-----BEGIN PGP SIGNED MESSAGE-----

Hi all, I have a special problem I can't get managed: I have a unique identifier (user-defined) in my documents, which may or may not include other macros (preferred \_, but anything may be possible). Now I need to remove all macros from this string in order to use it as a safe label-generator. I just don't find a way to remove all the macros from the string.

% Example:
\newcommand{\identifier}[1]{
  \newcommand{\myID}{#1}
  \renewcommand{\identifier}{\myID}
}
\begin{document}
\identifier{abc\_def\macro{}ghi}
This is Document \myID, uniquely known as \identifier.
\begin{figure}
\caption{\label{\identifier:F1} my first image}
\includegraphics{firstImage.jpg}
\end{figure}
\end{document}
%end Example

Obviously this does not work, latex produces lots of error messages concerning the macros in the label. So I need to redefine the label so that it looks like "abcdefghi", i.e. remove all macro names from the string. How do I have to redefine \identifier in the first place to get this result? I already tried splitting the argument like

\process{\myID}
\long\def\process#1\@proc #1@@@
\long\def\@proc#1#2#3@@@{\def\myNewID{#1#3}
\renewcommand{\identifier}{\myNewID}

and similar constructs with defined #2, but each test stopped at the definition of the macros which is expanded somewhere in the process. Did anyone solve this problem? (if it's in the FAQ it completely escaped my search, sorry)

tia Dieter Meinert

++++++ Do not support SPAM ++++++ use only plaintext for email ++++++

-----BEGIN PGP SIGNATURE-----
Version: PGPfreeware 6.5.3 for non-commercial use <http://www.pgp.com>
2017-10-20 01:38:32
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9224317073822021, "perplexity": 10877.431627044962}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823605.33/warc/CC-MAIN-20171020010834-20171020030834-00368.warc.gz"}
https://math.stackexchange.com/questions/1808976/find-the-surface-area-of-s-1-bounded-by-s-2-and-s-3
# Find the surface area of $S_1$ bounded by $S_2$ and $S_3$ Given the surfaces $S_1$, $S_2$, and $S_3$: $$S_1:x^2+(y-1)^2=1$$ $$S_2:z=4-x^2-y^2$$ $$S_3:z=4$$ Find the surface area of $S_1$ between $S_2$ and $S_3$. My attempt: The area of the surface would be $$\iint_{S^*} 1 dS$$ where $S^*$ is the portion of the surface $S_1$ bounded by $S_2$ and $S_3$. But I can't figure out how to parametrize the surface. Looking from the top view, it's going to be like having 2 circles (one from the cross section of the cylinder $S_1$, and one from the cross section of the paraboloid $S_2$) intersecting each other, where the points of intersection depend on the height $z$. The intersection point (depending on $z$) can be found by solving these two equations $$S_1:x^2+(y-1)^2=1 \quad \textrm{and} \quad S_2:z=4-x^2-y^2$$ which it turns out to be $$C : (x,y,z) = \left( \sqrt{(4-z)-\frac{(4-z)^2}{4}}, \frac{4-z}{2}, z \right) \quad z=[0,4]$$ Now I know where the paraboloid and the cylinder intersect each other, and I think I would know how to parametrize this surface too (with some algebra work). However, I anticipate that it's going to be too tedious to compute the surface area using this surface parameterization. So, is there any better way? I was also thinking of using Stokes' theorem. But I think the calculation is not going to be very nice as I need to parametrize the curve of the intersection $C^*$ by these two surfaces, and also need to find a nice function $F$ such that $$\iint_{S^*} 1 dS = \iint_{S^*}(\nabla\times \vec{F})\cdot \vec{n} dS = \oint_{C^*} \vec{F}\cdot \vec{n} ds$$

It has to be noted that the surface cannot be projected onto the $xy$-plane. We see that it is entirely in the positive $y$ portion, and symmetric. So we project it onto the $yz$-plane, then multiply by $2$. The following is the projection (figure not reproduced here): we want to find the part $ABC$. The surface is represented by $x=\sqrt{2y-y^2}$. So $$\iint_D \sqrt{1+\left(\frac{dx}{dy}\right)^2+\left(\frac{dx}{dz}\right)^2}\, dydz,$$ where $\frac{dx}{dz}=0, \frac{dx}{dy}=\frac{1-y}{\sqrt{2y-y^2}}$. On the cylinder $x^2+y^2=2y$, so the paraboloid gives $z=4-2y$ there; hence the limits for $z$ are $4-2y$ and $4$, and the limits for $y$ are $0$ and $2$ (the fold of the projection, where $x=0$). It is not hard to get the following integral: $$\int^2_0 \frac{2y}{\sqrt{2y-y^2}}\,dy.$$
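A worked finish (not in the original thread), using the limits above: complete the square, $2y - y^2 = 1-(y-1)^2$, and substitute $y - 1 = \sin\theta$, so that $dy = \cos\theta\, d\theta$ and $\sqrt{2y-y^2} = \cos\theta$:
$$\int^2_0 \frac{2y}{\sqrt{2y-y^2}}\,dy = \int_{-\pi/2}^{\pi/2} 2(1+\sin\theta)\, d\theta = 2\pi,$$
and doubling for the $x<0$ half of the cylinder gives a total area of $4\pi$. As a check, parametrize the cylinder directly by $x=\sin t$, $y=1+\cos t$: the height between the two surfaces is $4-(4-x^2-y^2)=x^2+y^2=2y=2(1+\cos t)$, and $\int_0^{2\pi} 2(1+\cos t)\, dt = 4\pi$ as well.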
2022-12-08 05:50:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9295725226402283, "perplexity": 45.47038639034854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711278.74/warc/CC-MAIN-20221208050236-20221208080236-00325.warc.gz"}
http://www.mathnet.ru/php/archive.phtml?jrnid=aa&wshow=issue&year=2017&volume=29&volume_alt=&issue=1&issue_alt=&option_lang=eng
Algebra i Analiz

My dad: an engineer in math and in the real world (D. Burago), 5

Expository Surveys
A panoramic glimpse of manifolds with sectional curvature bounded from below (K. Grove), 7

Research Papers
A survival guide for feeble fish (D. Burago, S. Ivanov, A. Novikov), 49
A fixed point theorem for periodic maps on locally symmetric manifolds (S. Weinberger), 60
On the stabilizers of finite sets of numbers in the R. Thompson group $F$ (G. Golan, M. Sapir), 70
Endomorphism rings of reductions of elliptic curves and Abelian varieties (Yu. G. Zarhin), 110
Affine hemispheres of elliptic type (B. Klartag), 145
On the total curvature of minimizing geodesics on convex surfaces (N. Lebedeva, A. Petrunin), 189
Elliptic equations in convex domains (V. Maz'ya), 209
"Irrational" constructions in convex geometry (V. Milman, L. Rotem), 222
Sharp correspondence principle and quantum measurements (L. Charles, L. Polterovich), 237
Combinatorial identities for polyhedral cones (R. Schneider), 279

Easy Reading for a Professional
In search of a five-point Alexandrov type condition (A. Petrunin), 296
2019-12-10 12:07:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31252530217170715, "perplexity": 5606.75684893087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540527205.81/warc/CC-MAIN-20191210095118-20191210123118-00143.warc.gz"}
http://mizar.uwb.edu.pl/version/current/html/scmfsa_2.html
:: The { \bf SCM_FSA } computer :: by Andrzej Trybulec , Yatsuka Nakamura and Piotr Rudnicki :: :: Copyright (c) 1996-2017 Association of Mizar Users definition coherence ; end; :: deftheorem defines SCM+FSA SCMFSA_2:def 1 : registration coherence ( not SCM+FSA is empty & SCM+FSA is with_non-empty_values ) ; end; registration coherence proof end; end; theorem Th1: :: SCMFSA_2:1 notation end; definition :: original: Int-Locations redefine func Int-Locations -> Subset of SCM+FSA; coherence proof end; coherence ; end; :: deftheorem SCMFSA_2:def 2 : canceled; :: deftheorem defines FinSeq-Locations SCMFSA_2:def 3 : registration cluster Int-like for Element of the carrier of SCM+FSA; existence ex b1 being Object of SCM+FSA st b1 is Int-like proof end; end; definition mode Int-Location is Int-like Object of SCM+FSA; mode FinSeq-Location -> Object of SCM+FSA means :Def3: :: SCMFSA_2:def 5 it in SCM+FSA-Data*-Loc ; existence ex b1 being Object of SCM+FSA st b1 in SCM+FSA-Data*-Loc proof end; end; :: deftheorem SCMFSA_2:def 4 : canceled; :: deftheorem Def3 defines FinSeq-Location SCMFSA_2:def 5 : for b1 being Object of SCM+FSA holds ( b1 is FinSeq-Location iff b1 in SCM+FSA-Data*-Loc ); theorem :: SCMFSA_2:2 canceled; theorem :: SCMFSA_2:3 canceled; theorem :: SCMFSA_2:4 canceled; theorem :: SCMFSA_2:5 canceled; theorem :: SCMFSA_2:6 canceled; ::$CT 5 definition let k be Nat; func intloc k -> Int-Location equals :: SCMFSA_2:def 6 dl. k; coherence proof end; func fsloc k -> FinSeq-Location equals :: SCMFSA_2:def 7 - (k + 1); coherence - (k + 1) is FinSeq-Location proof end; end; :: deftheorem defines intloc SCMFSA_2:def 6 : for k being Nat holds intloc k = dl. k; :: deftheorem defines fsloc SCMFSA_2:def 7 : for k being Nat holds fsloc k = - (k + 1); theorem :: SCMFSA_2:7 for k1, k2 being Nat st k1 <> k2 holds fsloc k1 <> fsloc k2 ; theorem :: SCMFSA_2:8 for dl being Int-Location ex i being Nat st dl = intloc i proof end; theorem Th4: :: SCMFSA_2:9 for fl being FinSeq-Location ex i being Nat st fl = fsloc i proof end; registration coherence proof end; end; theorem Th5: :: SCMFSA_2:10 for I being Int-Location holds I is Data-Location proof end; theorem Th6: :: SCMFSA_2:11 for l being Int-Location holds Values l = INT proof end; theorem Th7: :: SCMFSA_2:12 for l being FinSeq-Location holds Values l = INT * proof end; theorem :: SCMFSA_2:13 canceled; theorem :: SCMFSA_2:14 canceled; ::$CT 2 theorem Th8: :: SCMFSA_2:15 for I being Instruction of SCM+FSA st InsCode I <= 8 holds I is Instruction of SCM proof end; theorem Th9: :: SCMFSA_2:16 for I being Instruction of SCM+FSA holds InsCode I <= 12 proof end; theorem Th10: :: SCMFSA_2:17 for I being Instruction of SCM holds I is Instruction of SCM+FSA proof end; definition let a, b be Int-Location; func a := b -> Instruction of SCM+FSA means :Def6: :: SCMFSA_2:def 8 ex A, B being Data-Location st ( a = A & b = B & it = A := B ); existence ex b1 being Instruction of SCM+FSA ex A, B being Data-Location st ( a = A & b = B & b1 = A := B ) proof end; correctness uniqueness for b1, b2 being Instruction of SCM+FSA st ex A, B being Data-Location st ( a = A & b = B & b1 = A := B ) & ex A, B being Data-Location st ( a = A & b = B & b2 = A := B ) holds b1 = b2 ; ; func AddTo (a,b) -> Instruction of SCM+FSA means :Def7: :: SCMFSA_2:def 9 ex A, B being Data-Location st ( a = A & b = B & it = AddTo (A,B) ); existence ex b1 being Instruction of SCM+FSA ex A, B being Data-Location st ( a = A & b = B & b1 = AddTo (A,B) ) proof end; correctness uniqueness for b1, b2 being 
Instruction of SCM+FSA st ex A, B being Data-Location st ( a = A & b = B & b1 = AddTo (A,B) ) & ex A, B being Data-Location st ( a = A & b = B & b2 = AddTo (A,B) ) holds b1 = b2 ; ; func SubFrom (a,b) -> Instruction of SCM+FSA means :Def8: :: SCMFSA_2:def 10 ex A, B being Data-Location st ( a = A & b = B & it = SubFrom (A,B) ); existence ex b1 being Instruction of SCM+FSA ex A, B being Data-Location st ( a = A & b = B & b1 = SubFrom (A,B) ) proof end; correctness uniqueness for b1, b2 being Instruction of SCM+FSA st ex A, B being Data-Location st ( a = A & b = B & b1 = SubFrom (A,B) ) & ex A, B being Data-Location st ( a = A & b = B & b2 = SubFrom (A,B) ) holds b1 = b2 ; ; func MultBy (a,b) -> Instruction of SCM+FSA means :Def9: :: SCMFSA_2:def 11 ex A, B being Data-Location st ( a = A & b = B & it = MultBy (A,B) ); existence ex b1 being Instruction of SCM+FSA ex A, B being Data-Location st ( a = A & b = B & b1 = MultBy (A,B) ) proof end; correctness uniqueness for b1, b2 being Instruction of SCM+FSA st ex A, B being Data-Location st ( a = A & b = B & b1 = MultBy (A,B) ) & ex A, B being Data-Location st ( a = A & b = B & b2 = MultBy (A,B) ) holds b1 = b2 ; ; func Divide (a,b) -> Instruction of SCM+FSA means :Def10: :: SCMFSA_2:def 12 ex A, B being Data-Location st ( a = A & b = B & it = Divide (A,B) ); existence ex b1 being Instruction of SCM+FSA ex A, B being Data-Location st ( a = A & b = B & b1 = Divide (A,B) ) proof end; correctness uniqueness for b1, b2 being Instruction of SCM+FSA st ex A, B being Data-Location st ( a = A & b = B & b1 = Divide (A,B) ) & ex A, B being Data-Location st ( a = A & b = B & b2 = Divide (A,B) ) holds b1 = b2 ; ; end; :: deftheorem Def6 defines := SCMFSA_2:def 8 : for a, b being Int-Location for b3 being Instruction of SCM+FSA holds ( b3 = a := b iff ex A, B being Data-Location st ( a = A & b = B & b3 = A := B ) ); :: deftheorem Def7 defines AddTo SCMFSA_2:def 9 : for a, b being Int-Location for b3 being Instruction of SCM+FSA holds ( b3 = AddTo (a,b) iff ex A, B being Data-Location st ( a = A & b = B & b3 = AddTo (A,B) ) ); :: deftheorem Def8 defines SubFrom SCMFSA_2:def 10 : for a, b being Int-Location for b3 being Instruction of SCM+FSA holds ( b3 = SubFrom (a,b) iff ex A, B being Data-Location st ( a = A & b = B & b3 = SubFrom (A,B) ) ); :: deftheorem Def9 defines MultBy SCMFSA_2:def 11 : for a, b being Int-Location for b3 being Instruction of SCM+FSA holds ( b3 = MultBy (a,b) iff ex A, B being Data-Location st ( a = A & b = B & b3 = MultBy (A,B) ) ); :: deftheorem Def10 defines Divide SCMFSA_2:def 12 : for a, b being Int-Location for b3 being Instruction of SCM+FSA holds ( b3 = Divide (a,b) iff ex A, B being Data-Location st ( a = A & b = B & b3 = Divide (A,B) ) ); definition let la be Nat; func goto la -> Instruction of SCM+FSA equals :: SCMFSA_2:def 13 SCM-goto la; coherence by Th10; let a be Int-Location; func a =0_goto la -> Instruction of SCM+FSA means :Def12: :: SCMFSA_2:def 14 ex A being Data-Location st ( a = A & it = A =0_goto la ); existence ex b1 being Instruction of SCM+FSA ex A being Data-Location st ( a = A & b1 = A =0_goto la ) proof end; correctness uniqueness for b1, b2 being Instruction of SCM+FSA st ex A being Data-Location st ( a = A & b1 = A =0_goto la ) & ex A being Data-Location st ( a = A & b2 = A =0_goto la ) holds b1 = b2 ; ; func a >0_goto la -> Instruction of SCM+FSA means :Def13: :: SCMFSA_2:def 15 ex A being Data-Location st ( a = A & it = A >0_goto la ); existence ex b1 being Instruction of SCM+FSA ex A being Data-Location 
st ( a = A & b1 = A >0_goto la ) proof end; correctness uniqueness for b1, b2 being Instruction of SCM+FSA st ex A being Data-Location st ( a = A & b1 = A >0_goto la ) & ex A being Data-Location st ( a = A & b2 = A >0_goto la ) holds b1 = b2 ; ; end; :: deftheorem defines goto SCMFSA_2:def 13 : for la being Nat holds goto la = SCM-goto la; :: deftheorem Def12 defines =0_goto SCMFSA_2:def 14 : for la being Nat for a being Int-Location for b3 being Instruction of SCM+FSA holds ( b3 = a =0_goto la iff ex A being Data-Location st ( a = A & b3 = A =0_goto la ) ); :: deftheorem Def13 defines >0_goto SCMFSA_2:def 15 : for la being Nat for a being Int-Location for b3 being Instruction of SCM+FSA holds ( b3 = a >0_goto la iff ex A being Data-Location st ( a = A & b3 = A >0_goto la ) ); definition let c, i be Int-Location; let a be FinSeq-Location ; func c := (a,i) -> Instruction of SCM+FSA equals :: SCMFSA_2:def 16 [9,{},<*c,a,i*>]; coherence [9,{},<*c,a,i*>] is Instruction of SCM+FSA proof end; func (a,i) := c -> Instruction of SCM+FSA equals :: SCMFSA_2:def 17 [10,{},<*c,a,i*>]; coherence [10,{},<*c,a,i*>] is Instruction of SCM+FSA proof end; end; :: deftheorem defines := SCMFSA_2:def 16 : for c, i being Int-Location for a being FinSeq-Location holds c := (a,i) = [9,{},<*c,a,i*>]; :: deftheorem defines := SCMFSA_2:def 17 : for c, i being Int-Location for a being FinSeq-Location holds (a,i) := c = [10,{},<*c,a,i*>]; definition let i be Int-Location; let a be FinSeq-Location ; func i :=len a -> Instruction of SCM+FSA equals :: SCMFSA_2:def 18 [11,{},<*i,a*>]; coherence [11,{},<*i,a*>] is Instruction of SCM+FSA proof end; func a :=<0,...,0> i -> Instruction of SCM+FSA equals :: SCMFSA_2:def 19 [12,{},<*i,a*>]; coherence [12,{},<*i,a*>] is Instruction of SCM+FSA proof end; end; :: deftheorem defines :=len SCMFSA_2:def 18 : for i being Int-Location for a being FinSeq-Location holds i :=len a = [11,{},<*i,a*>]; :: deftheorem defines :=<0,...,0> SCMFSA_2:def 19 : for i being Int-Location for a being FinSeq-Location holds a :=<0,...,0> i = [12,{},<*i,a*>]; theorem :: SCMFSA_2:18 for a, b being Int-Location holds InsCode (a := b) = 1 proof end; theorem :: SCMFSA_2:19 for a, b being Int-Location holds InsCode (AddTo (a,b)) = 2 proof end; theorem :: SCMFSA_2:20 for a, b being Int-Location holds InsCode (SubFrom (a,b)) = 3 proof end; theorem :: SCMFSA_2:21 for a, b being Int-Location holds InsCode (MultBy (a,b)) = 4 proof end; theorem :: SCMFSA_2:22 for a, b being Int-Location holds InsCode (Divide (a,b)) = 5 proof end; theorem :: SCMFSA_2:23 for lb being Nat holds InsCode (goto lb) = 6 ; theorem :: SCMFSA_2:24 for lb being Nat for a being Int-Location holds InsCode (a =0_goto lb) = 7 proof end; theorem :: SCMFSA_2:25 for lb being Nat for a being Int-Location holds InsCode (a >0_goto lb) = 8 proof end; theorem :: SCMFSA_2:26 for fa being FinSeq-Location for a, c being Int-Location holds InsCode (c := (fa,a)) = 9 ; theorem :: SCMFSA_2:27 for fa being FinSeq-Location for a, c being Int-Location holds InsCode ((fa,a) := c) = 10 ; theorem :: SCMFSA_2:28 for fa being FinSeq-Location for a being Int-Location holds InsCode (a :=len fa) = 11 ; theorem :: SCMFSA_2:29 for fa being FinSeq-Location for a being Int-Location holds InsCode (fa :=<0,...,0> a) = 12 ; theorem Th23: :: SCMFSA_2:30 for ins being Instruction of SCM+FSA st InsCode ins = 1 holds ex da, db being Int-Location st ins = da := db proof end; theorem Th24: :: SCMFSA_2:31 for ins being Instruction of SCM+FSA st InsCode ins = 2 holds ex da, db being 
Int-Location st ins = AddTo (da,db) proof end; theorem Th25: :: SCMFSA_2:32 for ins being Instruction of SCM+FSA st InsCode ins = 3 holds ex da, db being Int-Location st ins = SubFrom (da,db) proof end; theorem Th26: :: SCMFSA_2:33 for ins being Instruction of SCM+FSA st InsCode ins = 4 holds ex da, db being Int-Location st ins = MultBy (da,db) proof end; theorem Th27: :: SCMFSA_2:34 for ins being Instruction of SCM+FSA st InsCode ins = 5 holds ex da, db being Int-Location st ins = Divide (da,db) proof end; theorem Th28: :: SCMFSA_2:35 for ins being Instruction of SCM+FSA st InsCode ins = 6 holds ex lb being Nat st ins = goto lb proof end; theorem Th29: :: SCMFSA_2:36 for ins being Instruction of SCM+FSA st InsCode ins = 7 holds ex lb being Nat ex da being Int-Location st ins = da =0_goto lb proof end; theorem Th30: :: SCMFSA_2:37 for ins being Instruction of SCM+FSA st InsCode ins = 8 holds ex lb being Nat ex da being Int-Location st ins = da >0_goto lb proof end; theorem Th31: :: SCMFSA_2:38 for ins being Instruction of SCM+FSA st InsCode ins = 9 holds ex a, b being Int-Location ex fa being FinSeq-Location st ins = b := (fa,a) proof end; theorem Th32: :: SCMFSA_2:39 for ins being Instruction of SCM+FSA st InsCode ins = 10 holds ex a, b being Int-Location ex fa being FinSeq-Location st ins = (fa,a) := b proof end; theorem Th33: :: SCMFSA_2:40 for ins being Instruction of SCM+FSA st InsCode ins = 11 holds ex a being Int-Location ex fa being FinSeq-Location st ins = a :=len fa proof end; theorem Th34: :: SCMFSA_2:41 for ins being Instruction of SCM+FSA st InsCode ins = 12 holds ex a being Int-Location ex fa being FinSeq-Location st ins = fa :=<0,...,0> a proof end; theorem :: SCMFSA_2:42 for s being State of SCM+FSA for d being Int-Location holds d in dom s proof end; theorem :: SCMFSA_2:43 for f being FinSeq-Location for s being State of SCM+FSA holds f in dom s proof end; theorem Th37: :: SCMFSA_2:44 for f being FinSeq-Location for S being State of SCM holds not f in dom S proof end; theorem Th38: :: SCMFSA_2:45 for s being State of SCM+FSA holds Int-Locations c= dom s proof end; theorem Th39: :: SCMFSA_2:46 for s being State of SCM+FSA holds FinSeq-Locations c= dom s proof end; theorem :: SCMFSA_2:47 for s being State of SCM+FSA holds dom () = Int-Locations proof end; theorem :: SCMFSA_2:48 for s being State of SCM+FSA holds dom () = FinSeq-Locations proof end; theorem Th42: :: SCMFSA_2:49 for s being State of SCM+FSA for i being Instruction of SCM holds s | SCM-Memory is State of SCM proof end; theorem :: SCMFSA_2:50 for s being State of SCM+FSA for s9 being State of SCM holds s +* s9 is State of SCM+FSA proof end; theorem Th44: :: SCMFSA_2:51 for i being Instruction of SCM for ii being Instruction of SCM+FSA for s being State of SCM for ss being State of SCM+FSA st i = ii & s = ss | SCM-Memory holds Exec (ii,ss) = ss +* (Exec (i,s)) proof end; registration let s be State of SCM+FSA; let d be Int-Location; cluster s . d -> integer ; coherence s . d is integer proof end; end; definition let s be State of SCM+FSA; let d be FinSeq-Location ; :: original: . redefine func s . d -> FinSequence of INT ; coherence s . d is FinSequence of INT proof end; end; theorem Th45: :: SCMFSA_2:52 for S being State of SCM for s being State of SCM+FSA st S = s | SCM-Memory holds s = s +* S by FUNCT_4:75; theorem Th46: :: SCMFSA_2:53 for S being State of SCM for s, s1 being State of SCM+FSA st s1 = s +* S holds s1 . () = S . 
() proof end; theorem Th47: :: SCMFSA_2:54 for A being Data-Location for a being Int-Location for S being State of SCM for s, s1 being State of SCM+FSA st s1 = s +* S & A = a holds S . A = s1 . a proof end; theorem Th48: :: SCMFSA_2:55 for A being Data-Location for a being Int-Location for S being State of SCM for s being State of SCM+FSA st S = s | SCM-Memory & A = a holds S . A = s . a proof end; registration coherence by ; end; theorem Th49: :: SCMFSA_2:56 for dl being Int-Location holds dl <> IC by ; theorem Th50: :: SCMFSA_2:57 for dl being FinSeq-Location holds dl <> IC proof end; theorem :: SCMFSA_2:58 for il being Int-Location for dl being FinSeq-Location holds il <> dl proof end; theorem :: SCMFSA_2:59 for il being Nat for dl being Int-Location holds il <> dl proof end; theorem :: SCMFSA_2:60 for il being Nat for dl being FinSeq-Location holds il <> dl proof end; theorem :: SCMFSA_2:61 for s1, s2 being State of SCM+FSA st IC s1 = IC s2 & ( for a being Int-Location holds s1 . a = s2 . a ) & ( for f being FinSeq-Location holds s1 . f = s2 . f ) holds s1 = s2 proof end; theorem Th55: :: SCMFSA_2:62 for S being State of SCM for s being State of SCM+FSA st S = s | SCM-Memory holds IC s = IC S proof end; theorem Th56: :: SCMFSA_2:63 for a, b being Int-Location for s being State of SCM+FSA holds ( (Exec ((a := b),s)) . () = (IC s) + 1 & (Exec ((a := b),s)) . a = s . b & ( for c being Int-Location st c <> a holds (Exec ((a := b),s)) . c = s . c ) & ( for f being FinSeq-Location holds (Exec ((a := b),s)) . f = s . f ) ) proof end; theorem Th57: :: SCMFSA_2:64 for a, b being Int-Location for s being State of SCM+FSA holds ( (Exec ((AddTo (a,b)),s)) . () = (IC s) + 1 & (Exec ((AddTo (a,b)),s)) . a = (s . a) + (s . b) & ( for c being Int-Location st c <> a holds (Exec ((AddTo (a,b)),s)) . c = s . c ) & ( for f being FinSeq-Location holds (Exec ((AddTo (a,b)),s)) . f = s . f ) ) proof end; theorem Th58: :: SCMFSA_2:65 for a, b being Int-Location for s being State of SCM+FSA holds ( (Exec ((SubFrom (a,b)),s)) . () = (IC s) + 1 & (Exec ((SubFrom (a,b)),s)) . a = (s . a) - (s . b) & ( for c being Int-Location st c <> a holds (Exec ((SubFrom (a,b)),s)) . c = s . c ) & ( for f being FinSeq-Location holds (Exec ((SubFrom (a,b)),s)) . f = s . f ) ) proof end; theorem Th59: :: SCMFSA_2:66 for a, b being Int-Location for s being State of SCM+FSA holds ( (Exec ((MultBy (a,b)),s)) . () = (IC s) + 1 & (Exec ((MultBy (a,b)),s)) . a = (s . a) * (s . b) & ( for c being Int-Location st c <> a holds (Exec ((MultBy (a,b)),s)) . c = s . c ) & ( for f being FinSeq-Location holds (Exec ((MultBy (a,b)),s)) . f = s . f ) ) proof end; theorem Th60: :: SCMFSA_2:67 for a, b being Int-Location for s being State of SCM+FSA holds ( (Exec ((Divide (a,b)),s)) . () = (IC s) + 1 & ( a <> b implies (Exec ((Divide (a,b)),s)) . a = (s . a) div (s . b) ) & (Exec ((Divide (a,b)),s)) . b = (s . a) mod (s . b) & ( for c being Int-Location st c <> a & c <> b holds (Exec ((Divide (a,b)),s)) . c = s . c ) & ( for f being FinSeq-Location holds (Exec ((Divide (a,b)),s)) . f = s . f ) ) proof end; theorem :: SCMFSA_2:68 for a being Int-Location for s being State of SCM+FSA holds ( (Exec ((Divide (a,a)),s)) . () = (IC s) + 1 & (Exec ((Divide (a,a)),s)) . a = (s . a) mod (s . a) & ( for c being Int-Location st c <> a holds (Exec ((Divide (a,a)),s)) . c = s . c ) & ( for f being FinSeq-Location holds (Exec ((Divide (a,a)),s)) . f = s . 
f ) ) proof end; theorem Th62: :: SCMFSA_2:69 for l being Nat for s being State of SCM+FSA holds ( (Exec ((goto l),s)) . () = l & ( for c being Int-Location holds (Exec ((goto l),s)) . c = s . c ) & ( for f being FinSeq-Location holds (Exec ((goto l),s)) . f = s . f ) ) proof end; theorem Th63: :: SCMFSA_2:70 for l being Nat for a being Int-Location for s being State of SCM+FSA holds ( ( s . a = 0 implies (Exec ((a =0_goto l),s)) . () = l ) & ( s . a <> 0 implies (Exec ((a =0_goto l),s)) . () = (IC s) + 1 ) & ( for c being Int-Location holds (Exec ((a =0_goto l),s)) . c = s . c ) & ( for f being FinSeq-Location holds (Exec ((a =0_goto l),s)) . f = s . f ) ) proof end; theorem Th64: :: SCMFSA_2:71 for l being Nat for a being Int-Location for s being State of SCM+FSA holds ( ( s . a > 0 implies (Exec ((a >0_goto l),s)) . () = l ) & ( s . a <= 0 implies (Exec ((a >0_goto l),s)) . () = (IC s) + 1 ) & ( for c being Int-Location holds (Exec ((a >0_goto l),s)) . c = s . c ) & ( for f being FinSeq-Location holds (Exec ((a >0_goto l),s)) . f = s . f ) ) proof end; theorem Th65: :: SCMFSA_2:72 for g being FinSeq-Location for a, c being Int-Location for s being State of SCM+FSA holds ( (Exec ((c := (g,a)),s)) . () = (IC s) + 1 & ex k being Nat st ( k = |.(s . a).| & (Exec ((c := (g,a)),s)) . c = (s . g) /. k ) & ( for b being Int-Location st b <> c holds (Exec ((c := (g,a)),s)) . b = s . b ) & ( for f being FinSeq-Location holds (Exec ((c := (g,a)),s)) . f = s . f ) ) proof end; theorem Th66: :: SCMFSA_2:73 for g being FinSeq-Location for a, c being Int-Location for s being State of SCM+FSA holds ( (Exec (((g,a) := c),s)) . () = (IC s) + 1 & ex k being Nat st ( k = |.(s . a).| & (Exec (((g,a) := c),s)) . g = (s . g) +* (k,(s . c)) ) & ( for b being Int-Location holds (Exec (((g,a) := c),s)) . b = s . b ) & ( for f being FinSeq-Location st f <> g holds (Exec (((g,a) := c),s)) . f = s . f ) ) proof end; theorem Th67: :: SCMFSA_2:74 for g being FinSeq-Location for c being Int-Location for s being State of SCM+FSA holds ( (Exec ((c :=len g),s)) . () = (IC s) + 1 & (Exec ((c :=len g),s)) . c = len (s . g) & ( for b being Int-Location st b <> c holds (Exec ((c :=len g),s)) . b = s . b ) & ( for f being FinSeq-Location holds (Exec ((c :=len g),s)) . f = s . f ) ) proof end; theorem Th68: :: SCMFSA_2:75 for g being FinSeq-Location for c being Int-Location for s being State of SCM+FSA holds ( (Exec ((),s)) . () = (IC s) + 1 & ex k being Nat st ( k = |.(s . c).| & (Exec ((),s)) . g = k |-> 0 ) & ( for b being Int-Location holds (Exec ((),s)) . b = s . b ) & ( for f being FinSeq-Location st f <> g holds (Exec ((),s)) . f = s . f ) ) proof end; theorem :: SCMFSA_2:76 for s being State of SCM+FSA for S being SCM+FSA-State st S = s holds IC s = IC S by ; theorem Th70: :: SCMFSA_2:77 for i being Instruction of SCM for I being Instruction of SCM+FSA st i = I & i is halting holds I is halting proof end; theorem Th71: :: SCMFSA_2:78 for I being Instruction of SCM+FSA st ex s being State of SCM+FSA st (Exec (I,s)) . 
() = (IC s) + 1 holds not I is halting proof end; registration let a, b be Int-Location; set s = the State of SCM+FSA; cluster a := b -> non halting ; coherence not a := b is halting proof end; cluster AddTo (a,b) -> non halting ; coherence proof end; cluster SubFrom (a,b) -> non halting ; coherence not SubFrom (a,b) is halting proof end; cluster MultBy (a,b) -> non halting ; coherence not MultBy (a,b) is halting proof end; cluster Divide (a,b) -> non halting ; coherence not Divide (a,b) is halting proof end; end; theorem :: SCMFSA_2:79 for a, b being Int-Location holds not a := b is halting ; theorem :: SCMFSA_2:80 for a, b being Int-Location holds not AddTo (a,b) is halting ; theorem :: SCMFSA_2:81 for a, b being Int-Location holds not SubFrom (a,b) is halting ; theorem :: SCMFSA_2:82 for a, b being Int-Location holds not MultBy (a,b) is halting ; theorem :: SCMFSA_2:83 for a, b being Int-Location holds not Divide (a,b) is halting ; registration let la be Nat; cluster goto la -> non halting ; coherence not goto la is halting proof end; end; theorem :: SCMFSA_2:84 for la being Nat holds not goto la is halting ; registration let a be Int-Location; let la be Nat; set f = the_Values_of SCM+FSA; set s = the SCM+FSA-State; cluster a =0_goto la -> non halting ; coherence not a =0_goto la is halting proof end; cluster a >0_goto la -> non halting ; coherence not a >0_goto la is halting proof end; end; theorem :: SCMFSA_2:85 for la being Nat for a being Int-Location holds not a =0_goto la is halting ; theorem :: SCMFSA_2:86 for la being Nat for a being Int-Location holds not a >0_goto la is halting ; registration let c be Int-Location; let f be FinSeq-Location ; let a be Int-Location; set s = the State of SCM+FSA; cluster c := (f,a) -> non halting ; coherence not c := (f,a) is halting proof end; cluster (f,a) := c -> non halting ; coherence not (f,a) := c is halting proof end; end; theorem :: SCMFSA_2:87 for f being FinSeq-Location for a, c being Int-Location holds not c := (f,a) is halting ; theorem :: SCMFSA_2:88 for f being FinSeq-Location for a, c being Int-Location holds not (f,a) := c is halting ; registration let c be Int-Location; let f be FinSeq-Location ; set s = the State of SCM+FSA; cluster c :=len f -> non halting ; coherence not c :=len f is halting proof end; cluster f :=<0,...,0> c -> non halting ; coherence not f :=<0,...,0> c is halting proof end; end; theorem :: SCMFSA_2:89 for f being FinSeq-Location for c being Int-Location holds not c :=len f is halting ; theorem :: SCMFSA_2:90 for f being FinSeq-Location for c being Int-Location holds not f :=<0,...,0> c is halting ; theorem :: SCMFSA_2:91 for I being Instruction of SCM+FSA st I = [0,{},{}] holds I is halting by ; theorem Th85: :: SCMFSA_2:92 for I being Instruction of SCM+FSA st InsCode I = 0 holds I = [0,{},{}] proof end; theorem Th86: :: SCMFSA_2:93 for I being set holds ( I is Instruction of SCM+FSA iff ( I = [0,{},{}] or ex a, b being Int-Location st I = a := b or ex a, b being Int-Location st I = AddTo (a,b) or ex a, b being Int-Location st I = SubFrom (a,b) or ex a, b being Int-Location st I = MultBy (a,b) or ex a, b being Int-Location st I = Divide (a,b) or ex la being Nat st I = goto la or ex lb being Nat ex da being Int-Location st I = da =0_goto lb or ex lb being Nat ex da being Int-Location st I = da >0_goto lb or ex b, a being Int-Location ex fa being FinSeq-Location st I = a := (fa,b) or ex a, b being Int-Location ex fa being FinSeq-Location st I = (fa,a) := b or ex a being Int-Location ex f being 
FinSeq-Location st I = a :=len f or ex a being Int-Location ex f being FinSeq-Location st I = f :=<0,...,0> a ) ) proof end; Lm1: for W being Instruction of SCM+FSA st W is halting holds W = [0,{},{}] proof end; registration coherence by ; end; theorem Th87: :: SCMFSA_2:94 for I being Instruction of SCM+FSA st I is halting holds I = halt SCM+FSA by Lm1; theorem :: SCMFSA_2:95 for I being Instruction of SCM+FSA st InsCode I = 0 holds I = halt SCM+FSA by Th85; theorem Th89: :: SCMFSA_2:96 theorem :: SCMFSA_2:97 canceled; ::\$CT theorem :: SCMFSA_2:98 for i being Instruction of SCM for I being Instruction of SCM+FSA st i = I & not i is halting holds not I is halting by ; theorem :: SCMFSA_2:99 for i, j being Nat holds fsloc i <> intloc j proof end; theorem Th92: :: SCMFSA_2:100 proof end; theorem :: SCMFSA_2:101 for i, j being Nat st i <> j holds intloc i <> intloc j by AMI_3:10; theorem :: SCMFSA_2:102 for l being Nat for a being Int-Location holds not a in dom (Start-At (l,SCM+FSA)) proof end; theorem :: SCMFSA_2:103 for l being Nat for f being FinSeq-Location holds not f in dom (Start-At (l,SCM+FSA)) proof end; theorem :: SCMFSA_2:104 for s1, s2 being State of SCM+FSA st IC s1 = IC s2 & ( for a being Int-Location holds s1 . a = s2 . a ) & ( for f being FinSeq-Location holds s1 . f = s2 . f ) holds s1 = s2 proof end; registration let f be FinSeq-Location ; let w be FinSequence of INT ; cluster f .--> w -> data-only for PartState of SCM+FSA; coherence for b1 being PartState of SCM+FSA st b1 = f .--> w holds b1 is data-only proof end; end; registration let x be Int-Location; let i be Integer; cluster x .--> i -> data-only for PartState of SCM+FSA; coherence for b1 being PartState of SCM+FSA st b1 = x .--> i holds b1 is data-only proof end; end; registration let a, b be Int-Location; cluster a := b -> No-StopCode ; coherence a := b is No-StopCode proof end; end; registration let a, b be Int-Location; cluster AddTo (a,b) -> No-StopCode ; coherence proof end; end; registration let a, b be Int-Location; cluster SubFrom (a,b) -> No-StopCode ; coherence SubFrom (a,b) is No-StopCode proof end; end; registration let a, b be Int-Location; cluster MultBy (a,b) -> No-StopCode ; coherence MultBy (a,b) is No-StopCode proof end; end; registration let a, b be Int-Location; cluster Divide (a,b) -> No-StopCode ; coherence Divide (a,b) is No-StopCode proof end; end; registration let lb be Nat; coherence goto lb is No-StopCode proof end; end; registration let lb be Nat; let a be Int-Location; coherence a =0_goto lb is No-StopCode proof end; end; registration let lb be Nat; let a be Int-Location; coherence a >0_goto lb is No-StopCode proof end; end; registration let fa be FinSeq-Location ; let a, c be Int-Location; cluster c := (fa,a) -> No-StopCode ; coherence c := (fa,a) is No-StopCode proof end; end; registration let fa be FinSeq-Location ; let a, c be Int-Location; cluster (fa,a) := c -> No-StopCode ; coherence (fa,a) := c is No-StopCode proof end; end; registration let fa be FinSeq-Location ; let a be Int-Location; cluster a :=len fa -> No-StopCode ; coherence a :=len fa is No-StopCode proof end; end; registration let fa be FinSeq-Location ; let a be Int-Location; coherence proof end; end;
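The theorems above (for example Th56 for assignment, Th57 for AddTo, Th62 for goto and Th63 for =0_goto) pin down how Exec transforms a state for each instruction: the target location is updated and the instruction counter either advances by one or jumps, while every other location keeps its value. As an informal illustration only (this is not part of the Mizar article; the state encoding and the names below are invented), a small Python model of those clauses might look like this.

```python
# Informal sketch (not part of the Mizar formalisation): a toy model of the
# Exec semantics stated above for a few SCM+FSA instructions.
# A state is a dict from integer-location names to integers, plus an "IC"
# entry for the instruction counter; all names here are illustrative.

def exec_instr(instr, state):
    op = instr[0]
    new = dict(state)                      # unmentioned locations keep their values
    ic = state["IC"]
    if op == ":=":                         # a := b : copy b into a, advance IC (Th56)
        _, a, b = instr
        new[a] = state[b]
        new["IC"] = ic + 1
    elif op == "AddTo":                    # AddTo(a,b): a gets s.a + s.b, advance IC (Th57)
        _, a, b = instr
        new[a] = state[a] + state[b]
        new["IC"] = ic + 1
    elif op == "goto":                     # goto l : unconditional jump (Th62)
        _, l = instr
        new["IC"] = l
    elif op == "=0_goto":                  # a =0_goto l : jump when s.a = 0, else advance (Th63)
        _, a, l = instr
        new["IC"] = l if state[a] == 0 else ic + 1
    return new

s = {"IC": 0, "intloc0": 7, "intloc1": 3}
s = exec_instr(("AddTo", "intloc0", "intloc1"), s)   # intloc0 -> 10, IC -> 1
s = exec_instr(("=0_goto", "intloc1", 5), s)         # intloc1 is 3, so IC -> 2
print(s)
```

Every location not touched by the executed instruction keeps its old value, matching the "for c being Int-Location st c <> a holds Exec(...) . c = s . c" clauses in the theorems above.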
2018-03-19 02:41:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2134803682565689, "perplexity": 10974.450607637258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646213.26/warc/CC-MAIN-20180319023123-20180319043123-00629.warc.gz"}
https://www.physicsforums.com/threads/definition-of-the-root-of-1-for-different-roots.907788/
# I Definition of the root of -1 for different roots

1. Mar 15, 2017 ### Mr Davis 97
How does the value of $\displaystyle \sqrt[a]{-1}$ vary as $a$ varies over the real numbers? When is this value complex and when is it real? For example, we know that when a = 2 it is complex, but when a = 3 it is real. What about when $a = \pi$, for example?

2. Mar 15, 2017 ### SlowThinker

3. Mar 15, 2017 ### Staff: Mentor
As far as I know, there is no such thing as the "π-th root" of a number. The index of a radical is a positive integer that is 2 or larger. You can however raise a number to an arbitrary power. For example, $2^{1/\pi} = (e^{\ln 2})^{1/\pi} = e^{\frac{\ln 2}{\pi}}$, but see the link that @SlowThinker posted.

4. Mar 15, 2017 ### Mr Davis 97
If $x^{1/a} = e^{\frac{\ln x}{a}}$, and we know that $(-1)^{1/3} = -1$, does that mean that $(-1)^{1/3} = e^{\frac{\ln (-1)}{3}} = -1$?

5. Mar 15, 2017 ### Staff: Mentor
Not if ln means the usual natural logarithm function, whose domain is the positive real numbers.

6. Mar 15, 2017 ### Mr Davis 97
So in general, when would $(-1)^{1/a}$ be complex and when would it be real?

7. Mar 15, 2017 ### MAGNIBORO
$(-1)^{\frac{1}{3}}=e^{\frac{\ln(-1)}{3}}=-1$: this is correct, but you don't consider the other results. $\sqrt4=2$ is correct but not complete, because $\sqrt4=\pm 2$; in the same way there are other results for $(-1)^{\frac{1}{3}}=e^{\frac{\ln(-1)}{3}}$. See the post https://www.physicsforums.com/insights/things-can-go-wrong-complex-numbers/

8. Mar 15, 2017 ### Staff: Mentor
No. $\sqrt 4$ is generally accepted to mean the principal square root of 4, a positive number.

9. Mar 15, 2017 ### Stephen Tashi
That definition only applies when $x$ is positive. (In the domain of real numbers, there is no standard definition for raising a negative number to an irrational exponent.)
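Not part of the thread, just an added numerical illustration of the $e^{\ln(x)/a}$ idea discussed above: if the complex (principal-branch) logarithm is used, $(-1)^{1/a}$ has the principal value $e^{i\pi/a}$, which the short Python sketch below evaluates for a few exponents.

```python
# Illustration added in editing (not from the thread): principal value of
# (-1)**(1/a), computed as exp(log(-1)/a) with the complex logarithm,
# where the principal branch gives log(-1) = i*pi.
import cmath

for a in [2, 3, cmath.pi]:
    z = cmath.exp(cmath.log(-1) / a)
    print(a, z)

# a = 2   -> roughly 1j                    (purely imaginary)
# a = 3   -> roughly 0.5 + 0.866j          (the principal cube root, not -1)
# a = pi  -> cos(1) + i*sin(1), clearly complex
```

On the principal branch, $(-1)^{1/a} = \cos(\pi/a) + i\sin(\pi/a)$, which is real only when $\sin(\pi/a) = 0$, i.e. when $a = 1/n$ for a nonzero integer $n$; in particular the principal value for $a = 3$ is complex even though $-1$ is a real cube root of $-1$, which is the branch issue raised in the thread.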
2017-11-21 22:26:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6439644694328308, "perplexity": 975.3167515866566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806426.72/warc/CC-MAIN-20171121204652-20171121224652-00628.warc.gz"}
http://cms.math.ca/10.4153/CMB-2005-004-2
# Degree Homogeneous Subgroups

Published: 2005-03-01. Printed: Mar 2005.
• John D. Dixon • A. Rahnamai Barghi

## Abstract

Let $G$ be a finite group and $H$ be a subgroup. We say that $H$ is \emph{degree homogeneous} if, for each $\chi\in \Irr(G)$, all the irreducible constituents of the restriction $\chi_{H}$ have the same degree. Subgroups which are either normal or abelian are obvious examples of degree homogeneous subgroups. Following a question by E.~M. Zhmud', we investigate general properties of such subgroups. It appears unlikely that degree homogeneous subgroups can be characterized entirely by abstract group properties, but we provide mixed criteria (involving both group structure and character properties) which are both necessary and sufficient. For example, $H$ is degree homogeneous in $G$ if and only if the derived subgroup $H^{\prime}$ is normal in $G$ and, for every pair $\alpha,\beta$ of irreducible $G$-conjugate characters of $H^{\prime}$, all irreducible constituents of $\alpha^{H}$ and $\beta^{H}$ have the same degree.

MSC Classifications: 20C15 - Ordinary representations and characters
2016-08-28 22:23:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6853398084640503, "perplexity": 1009.5329898341923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982948216.97/warc/CC-MAIN-20160823200908-00068-ip-10-153-172-175.ec2.internal.warc.gz"}
https://support.bioconductor.org/p/63083/
How to reverse translate amino acid sequence into mammalian codon optimised cDNA 4 0 Entering edit mode @tomasbjorklund-7071 Last seen 7.2 years ago Sweden I have an issue that has been surprisingly difficult to find any answer to on the net, even though it is very straight forward. I have a list of a large number of amino acid sequences (one letter per AA). I need to convert them into cDNA sequences that are mammalian codon optimised. Can anyone help me with finding the suitable function to do that? (Just the reverse translation is required). Thanks' /Tomas convert translation • 4.7k views 0 Entering edit mode I'm not sure that you are correct that 'it is very straightforward', which is likely why you can't find functionality to do this. Given that each amino acid has a one-to-many association with the codons that encode for it, how do you propose doing the reverse mapping? As an example, let's consider a simple 3 amino acid sequence, FLP. If I go to someplace like (http://www.genscript.com/cgi-bin/tools/codon_freq_table) and get the human codon frequency table, I get TTT F 0.45 TTC F 0.55 TTA L 0.07 TTG L 0.13 CTT L 0.13 CTC L 0.2 CTA L 0.07 CTG L 0.41 CCT P 0.28 CCC P 0.33 CCA P 0.27 CCG P 0.11 So unless you are going to make an (unwarranted, IMO) simplifying assumption that you can just use the most common AA -> codon mapping, you have 40 different possible sequences that could have given rise to a simple 3 amino acid sequence. Obviously as the amino acid sequence gets longer, the possible cDNA sequences that could give rise to the amino acid sequence blows up massively. It seems that for any reasonably long amino acid sequence you would then either get some massive number of possible cDNA sequences (not likely useful), or a single sequence that has a probability somewhere around 1/<some massive number> of being the right one. Neither outcome seems very useful to me. 0 Entering edit mode @valerie-obenchain-4275 Last seen 4 months ago United States Hi Tomas, We don't have a reverse translate function in Bioconductor, at least not one that's exported. It's possible Herve wrote a similar helper at one point. If that's the case I'm sure he'll post. I don't think we've had a request for this function before. I'm interested in hearing if others have this same need ... ? Are you be looking for a consensus sequence derived from all possible codons? only sequences from non-degenerate codons? i.e., similar to this tool, http://www.bioinformatics.org/sms2/rev_trans.html FYI, the low level objects in Biostrings used in the forward translation might be of interest if you want to experiment with a reverse prototype. library(Biostrings) ?IUPAC_CODE_MAP ?GENETIC_CODE Valerie 0 Entering edit mode @tomasbjorklund-7071 Last seen 7.2 years ago Sweden Hi Valerie and James, Thank you both for your helpful answers. I think that I may need to give a little bit of more background on what I need to achieve to make it easier to understand. We are studying short polypeptides derived from proteins expressed by a large number of viruses. We are building large systematic assay systems where we express these polypeptides (approx. 45aa long) using viral vectors in mammalian cells. We build the libraries using custom microarrays which can generate 100 000 oligonucleotides (200bp long) that then are put into the viral vector expression system. The challenge is this: While 100 000 sounds a lot, it is actually not that many considering the number of viral strains and proteins we wish to express. 
Therefore, we need to make sure that we do not have unnecessary redundancy in the library, i.e., two genetic sequences that translate into identical polypeptides. Unfortunately, many viral strains have high genetic diversity while coding for highly conserved proteins. Thus if we were to only fragment the DNA into suitable length pieces and sorting out identical duplicates, we would have much more than 100 000 gene sequences and identical polypeptides would be expressed at a higher abundance than those that are actually different. In addition, some of the viruses are not mammalian viruses and thus, there is no guarantee that these DNA sequences would efficiently translate into proteins in mammalian cells. So the situation is not at all that I need to figure out the original DNA sequence from an AA sequence (I realise that this would be impossible) instead, what we need to generate are cDNA sequences that would translate with sufficient efficiency into the target polypeptides in mammalian cells. For this, I would myself see one possible process; The first step would be as James suggest to translate 1AA into one codon, based on the human codon frequency table. After that I would then run a mammalian codon optimisation on the entire generated sequence similarly to what Genscript and other gene synthesis companies offer. It is a function like this that I was looking for, as I have very little knowledge in the codon optimisation principles. The first part of the conversion I can clearly write myself. I hope that this made my question a little clearer. Thank you again! /Tomas 0 Entering edit mode @tomasbjorklund-7071 Last seen 7.2 years ago Sweden Hi again, It seems like the second part, the codon optimisation, maybe could be achieved by GeneGA in Bioconductor. Is someone familiar with this and could recommend it? 0 Entering edit mode caroline • 0 @caroline-7721 Last seen 4.2 years ago Mammalian cell expression systems are the best choice for the production of eukaryotic proteins, especially when correct folding and post-translational modification (glycosylation, phosphorylation, etc.) is required. They produce eukaryotic recombinant proteins in the most natural state, with native tertiary structure, physiochemical characteristics, and bioactivities. They have been successfully applied in the biopharmaceutical production of cytokines, monoclonal antibodies, growth factors and so on. The most widely-used mammalian cell lines are HEK293 and CHO cells.
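To make the first step concrete (one codon per amino acid, picked from the human codon frequency table), here is a minimal sketch. It is written in Python rather than R/Biostrings, the frequency table is truncated to the F, L and P rows quoted earlier in the thread, and a real run would use the full human table and would still need the separate codon-optimisation pass (e.g. GeneGA) discussed above.

```python
# Minimal sketch (added for illustration, not Bioconductor code): naive reverse
# translation that picks the most frequent human codon for each residue.
# CODON_FREQ is truncated to the F/L/P entries quoted in this thread.

CODON_FREQ = {
    "F": {"TTT": 0.45, "TTC": 0.55},
    "L": {"TTA": 0.07, "TTG": 0.13, "CTT": 0.13,
          "CTC": 0.20, "CTA": 0.07, "CTG": 0.41},
    "P": {"CCT": 0.28, "CCC": 0.33, "CCA": 0.27, "CCG": 0.11},
}

def naive_reverse_translate(peptide):
    """Map each amino acid to its most frequent codon."""
    return "".join(max(CODON_FREQ[aa], key=CODON_FREQ[aa].get) for aa in peptide)

print(naive_reverse_translate("FLP"))   # TTCCTGCCC
```

This deliberately collapses all synonymous codons onto a single representative, which removes the redundancy problem described above; the downstream codon-optimisation step would then adjust the resulting sequence for expression in mammalian cells.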
2022-05-24 19:39:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47722774744033813, "perplexity": 1717.1490705944075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662573189.78/warc/CC-MAIN-20220524173011-20220524203011-00299.warc.gz"}
http://aapgbull.geoscienceworld.org/highwire/markup/149312/expansion?width=1000&height=500&iframe=true&postprocessors=highwire_figures%2Chighwire_math%2Chighwire_inline_linked_media%2Chighwire_embed
Table 4. Anisotropy Directions and Corresponding Parameters of Modeled Directional Semivariograms*

Anisotropy | Fract. of Sill | R1 (m) | R2 (m) | R3 (m) | Direction | x       | y       | z
Isotropic  | 0.16           | 0      | 0      | 0      |           | n/a     | n/a     | n/a
Isotropic  | 0.08           | 9      | 9      | 9      |           | n/a     | n/a     | n/a
Isotropic  | 0.10           | 20     | 20     | 20     |           | n/a     | n/a     | n/a
Geometric  | 0.21           | 134    | 34     | 34     | R1        | −0.0214 | −0.0026 | 0.9998
Geometric  | 0.19           | 142    | 69     | 69     | R1        | 0.4444  | −0.7670 | 0.4629
Zonal      | 0.26           | 2500   | 2000   | 247    | R1        | 0.8132  | −0.5766 | −0.0796
           |                |        |        |        | R2        | −0.5651 | −0.8149 | 0.1290
           |                |        |        |        | R3        | 0.1392  | 0.0599  | 0.9885

• * In order to include all detected anisotropies, a high degree of nesting is required. These anisotropy orientations correspond to the orientation of dolomite bodies at different scales. The anisotropy ranges describe the distribution structure (elongated, flattened, etc.).
2017-09-19 17:03:15
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.833312451839447, "perplexity": 12302.113136790938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685912.14/warc/CC-MAIN-20170919164651-20170919184651-00471.warc.gz"}
http://fnaarccuneo.it/nbvf/sin-cos-tan-calculator.html
# Sin Cos Tan Calculator Deg or Rad in the left corner of the number display tells you what mode you're in. All trigonometric functions are periodic. The results were all off. A half turn, or 180°, or π radian is the period of tan(x) = sin(x) / cos(x) and cot(x) = cos(x) / sin(x), as can be seen from these definitions and the period of the defining trigonometric functions. cos 1350 35. 4 Solving equations (EMCGH) The general solution (EMCGJ). tan −1 (2) ≈ 63. Trigonometric Ratios: The trigonometric ratios are ratios that connect two sides of a. If a right angles triangle has the two short sides of 1 and 2 then the hypotenuse must be sqrt5. Creates a series of calculations that can be printed, bookmarked, shared and modified. Inverse is a more. The sine, cosine and tangent functions Background: In what follows we assume that you are familiar with trigonometry. The "switch case" witch I have write for sin,cos,tan,cot doesn't work when I enter them in operator and it goes to entering second number. This online calculator shows values of hyperbolic functions of given argument. 正弦 (sine), 余弦 (cosine) 和 正切 (tangent) (英语符号简写为 sin, cos 和 tan) 是 直角三角形 边长的比: 对一个特定的角 θ 来说,不论三角形的大小, 这三个比是不变的. There’s three different sides to a triangle: The hypotenuse is the longest side; The opposite is the side opposite to the angle. The sin(θ) is the vertical component , the cos(θ) is the horizontal coordinate of the arc endpoint and the ratio of sin(θ) / cos(θ) is defined as tan(θ). By using this website, you agree to our Cookie Policy. This is part of CK-12’s Geometry: Right Triangle Trigonometry. It's worth taking a few minutes to work out how your calculator operates, as this could save you hours of messing about when you need it. Trigonometry. What are the sine, cosine, and tangent of Θ = 3 pi over 4 radians? sin Θ = square root 2 over 2; cos Θ = negat… Get the answers you need, now!. Find the exact value of the expression. Adding 360 to -330 gives you 30. 1 + cos 2u 2 1A + B22 = A2 + 2AB + B2. Pressing ↵ starts the calculation. Convert the remaining factors to cos( )x (using sin 1 cos22x x. 10) tan θ x y 225 ° 11) cos 270 ° 12) sin 0 13) cot 7π 4 14) csc 2π 3 15) csc 225 ° 16) sin 300 ° 17) csc 90 ° 18) tan 240 ° 19) sin π 4 20) tan 120 ° 21) tan − 13 π 6 22) cos −630 ° 23) cos 990 ° 24) csc − 31 π 6 25) csc − 5π 6 26) cos − 17 π 3 27) sin 29 π 6 28) sec 945 ° 29) cos − 11 π 2 30) sin −2π-2-. To find the trigonometric functions of an angle, enter the chosen angle in degrees or radians. Function File: cosd (x) Compute the cosine for each element of x in degrees. 20 problems 40 problems. import time. Get the free "Trig calculator" widget for your website, blog, Wordpress, Blogger, or iGoogle. The period of sin is 2. The simple and fast way to calculate six basic trigonometric function values in degrees and radians. Con Sin Cos Tan trigonométricas Calculadora Mod APK, puede liberar a la compra de cualquier artículo en el pecado Cos Tan trigonométricas de la calculadora APK. , a right angle is π/2), and in 'half-rotations' for cospi etc. However, sometimes you won't be allowed to use a calculator on a homework or exam problem or you might simply not have a calculator. The domain of the inverse sine is [-1,1], the range is [-pi/2,pi/2]. A very easy way to remember the three rules is to to use the abbreviation SOH CAH TOA. You may adjust the accuracy of your results. Click to expand ^-1 rather than ^-10. 97) and state. calculator - Calculator for the trigonometric functions sin cos tan csc sec cot. 
[2703875] - Write the trigonometric expression in terms of sine and cosine, and then simplify. The period of such a function is the length of one of its cycles. The inverse cosine function is sometimes called the arccosine function, and notated arccosx. And there is the tangent function. Online trigonometric calculator is an online tool for all your trigonometric calculations. Unit 1a: unit circle test review worksheet name: date: period: for #12, find all 6 trig functions for: 1. In the formulas given on geometry pages the angles are usually in degrees. Sin, Cos, Tan. The trigonometry chart given here is in Sexagesimal System which means the angles are expressed in degrees. Section 5: Sin, Cos, Tan (TI83, TI84) Learn how to work with the trigonometric functions such as Sin, Cos, and Tan with your TI-83 / TI-84 Calculator. Am I able to use a function on Windows 10 Calculator to find the inverse function of sine or cos or tan? This thread is locked. cos 1350 35. If the power of the sine is odd and positive: Goal: ux cos i. The sine, cosine and tangent functions Background: In what follows we assume that you are familiar with trigonometry. In order to use inverse trigonometric functions, we need to understand that an inverse trigonometric function “undoes” what the original trigonometric function “does,” as is the case with any other function and its inverse. 팔로우 조회 수: 61(최근 30일) Gry 23 Oct 2014. cos and tan calculations on a scientific calculator. Mark as New. Basic Math. The vector or tensor is usually related to some object that is actually undergoing the rotation, and the vector and/or tensor is along for the ride. 4 Solving equations (EMCGH) The general solution (EMCGJ). cos t csc t 2. To calculate these functions in terms of degrees or radians use Trigonometric Functions Calculator ƒ (x). The tan trigonometric function to calculate the tangent of an angle in radians, degrees or gradians. Sine and cosine of complementary angles. In more formal terms, it is the smallest. A scientific calculator is easy to use online-there is no need to download or install it on your. Calculator by matlab sin, cos, tan. ∫ C F · d r , where F ( x , y , z ) = yze x i + zxe y j + xye z k and r( t ) = sin t i + cos t j + tan t k , 0 ⩽ t ⩽ π /4. This calc provides these functions, allowing you to find for example "what angle has a sin of 0. doc Author: bmcldc Created Date: 6/10/2009 11:24:06 AM. Sine, Cosine and Tangent (often shortened to sin, cos and tan) are each a ratio of sides of a right angled triangle: For a given angle θ each ratio stays the same no matter how big or small the triangle is. Before you get started, make sure that your calculator is in the degree mode (and not radian or gradient mode). 8660254037844386. (cos )(tan ) = sin for = 7 5 41. tan 600 33. This online trigonometry calculator will calculate the sine, cosine, tangent, cotangent, secant and cosecant of values entered in π radians. job Students , teachers, employees, and employers today expect convenience and useful chatbot for immediate use. }\) What answer should you expect to get? Subsection Trigonometric Ratios for Obtuse Angles. If you want to compare the results of these methods with other languages, you can use a JavaScript scientific calculator available on the Internet. Our scientific calculator is the most sophisticated and comprehensive scientific calculator online. 414213562373095) Function Reference. The inverse cosine function is sometimes called the arccosine function, and notated arccosx. 
By using this website, you agree to our Cookie Policy. More precisely, the sine of an angle $t$ equals the y. cos I(sin Cos 15. Functions Sin Cos Tan. (i'm sorry if anything's wrong)No, there are multiple uses of these functions - they're not just there so people can make calculators. Sin 30° = 1 (opposite)/2 (Hypotenuse) so it equals ½ =. The graphs of sin and cos are periodic, with period of 360° (in other words the graphs repeat themselves every 360°). b) Use a calculator to check your answers to part a). Using the unit circle calculator is easy and quick. This video gives more detail about the mathematical principles presented in Trigonometric Ratios with a Calculator. We can use the equation of a curve in polar coordinates to compute some areas bounded by such curves. What are the sine, cosine, and tangent of Θ = 3 pi over 4 radians? sin Θ = square root 2 over 2; cos Θ = negat… Get the answers you need, now!. com To create your new password, just click the link in the email we sent you. (Round your answers to the nearest ten-thousandth, i. Sin, cos, tan of Sum of Two Angles. Integral Calculus. Inverse Trig Function Ranges. Press and check that your calculator is set to Degree mode. sec sin 800 36. You just have to set the value in degrees or radians and select the function. So, here's what I need to do, I just need to know which one I must use:. Improve your math knowledge with free questions in "Inverses of sin, cos, and tan" and thousands of other math skills. Complementary angles: sin = cos (90 ) cos = sin (90 ) tan = cot (90 ). A worksheet where you need to calculate the Sin/Cos/Tan value of a set of values. 1387 ; 9Π (9*pi) = 28. sin A = opposite / hypotenuse = a / c. atan2 (y, x) ¶ Return atan(y / x), in radians. So I checked everything very. Tangent = 1. 087155 --> 1/x --> gives 11. Point P has a positive y-coordinate, and sinθ = sin(π−θ) > 0. The sin trigonometric function to calculate the sine of an angle in radians, degrees or gradians. 7320508075688767. How does one express Sin, Cos, Tan in AutoLISP? for example: (sin 30)/2 (cos 120)/2 (tan 225)/2 I probably need to convert degrees to radians right? Does AutoLISP recognize SIN, COS, or TAN? Thanks. Before you get started, make sure that your calculator is in the degree mode (and not radian or gradient mode). * Use e for scientific notation. Section 5: Sin, Cos, Tan (TI83, TI84) Learn how to work with the trigonometric functions such as Sin, Cos, and Tan with your TI-83 / TI-84 Calculator. This produces sin-1 on the calculator screen. 473713 because 1/sin t is the same as sin^-1 t. cos A = adjacent / hypotenuse = b / c. ) find if cos 2 and lies in. The cosine (often abbreviated "cos") is the ratio of the length of the side adjacent to the angle to the length of the hypotenuse. Click 2nd again to return the buttons to their original functions. All trigonometric functions are periodic. the number sin (A)/cos (A) is called the tangent of A. So I checked everything very. Then the distance AB ("the sine line") equals sin a, and the distance OA ("the cosine line") equals cos a. 2743 ; 10Π (10*pi) = 31. 2π 3 + sec. 0 0 0 Login to reply the answers Post. The periodicity of the trigonometric functions means that there are an infinite number of positive and negative angles that satisfy an equation. squares are also provided. 
Use the horizontal calculator to show all of the functions and then enter the value of sin, cos, or tan to calculate then press either of these functions and finally press 1/x ex: for sin 5, enter 5 --> sin --> gives 0. purpose: This function is the actual calculator and the heart of the application """ # This part is for reading and. Lecture 5 Plane Stress Transformation Equations Stress elements and plane stress. 8660254037844386. sin( ) [?]sin , cos( ) [?]cos , tan( ) [?]tann x x n x x n x xπ π π± = ± = ± =, the sign ? is for plus or minus depending on the position of the terminal side. sin u tan u + cos u 3. 8Π (8*pi) = 25. 计算方法: 用一条边的长度除以另一条边的长度. We write cot (A) or cot A. The graphs of sin and cos are periodic, with period of 360° (in other words the graphs repeat themselves every 360°). If it's hard or not possible, then how would you go about solving inverses in. Basic Calculator. Conic Sections: Ellipse with Foci example. Trigonometric Formulas and Relationships. sin^2(x) + cos^2(x) = 1, so combining these we get the equation. Underneath the calculator, six most popular trig functions will appear - three basic ones: sine, cosine and tangent, and their reciprocals: cosecant, secant and cotangent. The results were all off. Trigonometry Calculator (Sin, Cos, Tan) This trigonometry calculator is a very helpful online tool which you can use in two common situations where you require trigonometry calculations. To convert a trigonometric ratio back to an angle measure, use the inverse function found above the same key as the function. But the tan is -1, so the angle is in the 2nd quadrant or the 4th quadrant. Hai, i have probleam with my code, currently i am create scientific calculator which have function sin, cos & tan. Inverse Trig Function Ranges. In trigonometry, the three sides of a. See also: asind, sin. Integral Calculus. The C function for this is atan(). To calculate a different value, next to "Solve for, " click the Down arrow. a) Sin is the function that computes the value of sine of an angle in radian. 414213562373095) Function Reference. To get replies by our experts at nominal charges, follow this link to buy points and post your thread in our Commercial Services forum! Here is the FAQ for this forum. A calculator or computer program is not reading off of a list, but is using an algorithm that gives an approximate value for the sine of a given angle. COs 843 21. Google Classroom Facebook Twitter. The common trigonometric functions in Visual Basic 6 are Sin, Cos, Tan and Atn. Question 818283: "compute the exact values of sin 2x, cos 2x, tan 2x without a calculator. Graphing sin, cos and tan. Deg or Rad in the left corner of the number display tells you what mode you're in. Examples: Use the triangle below to find sin, cos, tan. - π /2 <= y <= π /2. The trigonometric functions are also known as the circular functions. Many properties of the cosine and sine functions can easily be derived from these expansions, such as. tan 2165 Use an inverse trigonometric function to write O as a function of x. The button is outlined in black when active. tan A = opposite / adjacent = a / b. Along with the sum-of-angles formulae, it is one of the basic relations between the sine and cosine functions. Code to add this calci to your website. I designed this web site and wrote all the lessons, formulas and calculators. The calculator or Algebra Coach gives θ PV = arctan(−3. Find the value of 4 sin 30 2 without using a calculator. Cosine = 0. Here is how they are derived. 
This gives us: hypotenuse = 5. Solved exercises of Trigonometric integrals. 2934x = Solution: x =≈sin (0. Scientific Calculator for Your Site. 1) sin C 20 21 29 C B A 2) sin C 40 30 50 C B A 3) cos C 36 15 39 C B A 4) cos C 8 17 15 C B A 5) tan A 35 12 37 A B C 6) tan X 27 36 45 X Y Z-1-. Related Topics. However, when you visualise the Tan function in the 3rd Quadrant, intuitively it feels like it should be negative. cos 67° 17. We know that the values of tan repeat themselves every. Am I able to use a function on Windows 10 Calculator to find the inverse function of sine or cos or tan? This thread is locked. The display initially shows 0. The same is true for the appearance of the other inverse trigonometric functions. Trig Cheat Sheet Definition of the Trig Functions Right triangle definition For this definition we assume that 0 2 p < provides a type-generic macro version of this function. A worksheet where you need to calculate the Sin/Cos/Tan value of a set of values. sin 27° 16. Cos[x_] :> Inactive[Sin][x + Pi/2] which leaves the original Sin functions active. So -330 is really the same as 30 degrees. This video shows you how to do sin, cos and tan calculations on a scientific calculator. cos 360 0 25. Therefore, the cosine would be zero as well. Often remembered by: soh cah toa. We hope it will be very helpful for you and it will help you to understand the solving process. 3 Double-Angle, Power-Reducing, and Half-Angle Formulas 611 Solution We will apply the formula for twice. Pressing ↵ starts the calculation. π \displaystyle \pi. We write cot (A) or cot A. Solution: A review of the sine, cosine and tangent functions. Sine of an angle = Opposite side / Hypotenuse. cot = tan 750 39. Byju's Sine Cosine Tangent Calculator is a tool. Use this trigonometry calculator to easily calculate trigonometry functions in degrees or radians. The relationships between the graphs (in rectangular coordinates) of sin(x), cos(x) and tan(x) and the coordinates of a point on a unit circle are explored using an applet. Use the horizontal calculator to show all of the functions and then enter the value of sin, cos, or tan to calculate then press either of these functions and finally press 1/x ex: for sin 5, enter 5 --> sin --> gives 0. import time. Cos 45° = 1/root 2 =. (non-calculator) Evaluate each of the following (draw a quick reference triangle if necessary). Trigonometry. The results were all off. Sine Ratio = 0. Here is a relatively simple proof using the unit circle: We construct angles BOA = alpha and AOP = beta as shown. Intuitively, the period is a measure of a function "repeating" itself. ° ' ″ mean stdev stdevp sin⁻¹ cos⁻¹. Section 5: Sin, Cos, Tan (TI83, TI84) Learn how to work with the trigonometric functions such as Sin, Cos, and Tan with your TI-83 / TI-84 Calculator. sin 3000 31. Integral Calculus. b) Cos is the function that computes the value of cosine of an angle in radian. A tan of 1 means an angle of 45 degrees or π/4. The abbreviations "sin," "cos," "tan," "csc," "sec" and "cot" stand for the six trigonometric functions: sine, cosine, tangent, cosecant, secant and cotangent. For graph, see graphing calculator. sin 45 0 13. Graphing sin, cos and tan. sin sec 700 In Chapter 3, we shall prove the following formulas. Tiny Calculator with support of +, -, *, /, ^, sin, cos, tan calculator calculator-application calculators calculatorapp calculator-cpp cpp 16 commits. % 10x log x ex ln x 7 8 9 / x2 √x sin sin-1 4 5 6 × ! x3 3√x cos cos-1 1 2 3 - mod xy y√x tan tan-1 0. 
View Set 3 from ACCT 2121 at CUHK. Cosine = 0. By using this website, you agree to our Cookie Policy. This produces sin-1 on the calculator screen. Each letter of the Chief's name represents the name of one of the trig ratios or the name of a side of a right triangle. csc A = hypotenuse / opposite = c / a. Tangent = 1. calculator - Calculator for the trigonometric functions sin cos tan csc sec cot. Supported functions: - Sin / sine / sinus - Cos / cosine / cosinus - Tan / tangent - Csc / cosecant - Sec / secant - Cot / cotangentIn mathematics, the trigonometric functions are real functions which relate an angle. cos(tan^-1 x) = 1/sqrt(1+x^2) If you need to prove the identity, then here are the steps. 89, tan = 1. Special Angles in Trigonometry. After having chosen an identity, you may choose which function is given and its value. Find more Mathematics widgets in Wolfram|Alpha. Second, most scientific calculators do trigonometry calculations by receiving the number first, then the function. Unit 1a: unit circle test review worksheet name: date: period: for #12, find all 6 trig functions for: 1. 999624217 but cos (pi/2 radians) is zero. So we have an equation that gives cos^2(x) in a nicer form which we can easily integrate using the reverse chain rule. 7320508075688767. Round to the nearest hundredth. tan (1) 21. HOME; CONTACT; 1/x e π ← CE % 10 x log x e x ln x 7 8 9 / x 2 √x sin sin-1 4 5 6 ×! x 3 3 √x cos cos-1 1 2 3-mod x y y √x tan tan-1 0. cot = tan 750 39. For the similarity measure, see Cosine similarity. Choose which function you want to include. Positive: sin, csc Negative: cos, tan, The Unit Circle sec, cot 2Tt 900 Tt 3Tt 2 2700 Positive: sin, cos, tan, sec, csc, cot Negative: none 600 450 300 2 2 1500 1800 21 (-43, 1200 1350 2Tt 3600 300 1 ITC 3150 2250 2400 2 2) Positive: tan, cot 3000 2 Positive: cos, sec Negative: sin, tan, csc, cot com -1 2 Negative: sin, cos, sec, csc EmbeddedMath. Download free in Windows Store. Also, the calculator will show you a step by step explanation. tan M 13 12. ang = float (input ('Escolha um angulo qualquer: ')) print ('Calculando') time. = 13 cos 15 A _____ _____ 3. Sin, cos and tan calculator. To calculate them: Divide the length of one side by another side. cos(pi/2 degrees) really is. 284 radians in a full circle (actually two times Pi ). Pin TI-84 Calculator - 05 - Finding the Sin, Cos, and Tan of an Angle on Pinterest Email TI-84 Calculator - 05 - Finding the Sin, Cos, and Tan of an Angle to a friend Read More. tan 900 17. More precisely, the sine of an angle $t$ equals the y. The sine of 30. The relationships between the graphs (in rectangular coordinates) of sin(x), cos(x) and tan(x) and the coordinates of a point on a unit circle are explored using an applet. Use the inverse cosine key on your calculator to find $$\phi\text{. This is the basics of the sine cos and tan graphs and how sine and cos relate to give you tan. There are several such algorithms that only use the four basic operations (+, −, ×, /) to find the sine, cosine, or tangent of a given angle. The sine function, along with cosine and tangent, is one of the three most common trigonometric functions. Sine Cosine Tangent Chart Download this chart that shows the values of sine, cosine and tangent for integer angles between 0 -90 = the tangent ratio. COs 843 21. So shifting the arguments of tan(x) and cot(x) by any multiple of π, does not change their function values. To find the trigonometric functions of an angle, enter the chosen angle in degrees or radians. 
com To create your new password, just click the link in the email we sent you. Hyperbolic sine is increasing function passing through zero -. The results were all off. Learn about the relationship between the sine & cosine of complementary angles, which are angles who together sum up to 90°. Scientific Calculator. 5, thus Cos-1 (-. Conic Sections: Parabola and Focus example. The sine function, along with cosine and tangent, is one of the three most common trigonometric functions. Sine, Cosine and Tangent (often shortened to sin, cos and tan) are each a ratio of sides of a right angled triangle: For a given angle θ each ratio stays the same no matter how big or small the triangle is. This is an online trigonometry calculator to find out the equivalent values of radians and degrees for the given number. csc A = hypotenuse / opposite = c / a. Number of problems 10 problems. Get the free "Trig calculator" widget for your website, blog, Wordpress, Blogger, or iGoogle. Use the exact values of the sin, cos and tan of pi/3 and pi/6, and the symmetry of the graphs of sin, cos and tan, to find the exact values of sin -pi/6, cos 5/3pi and tan 4pi/3. Summary The rotation matrix, \({\bf R}$$, is used in the rotation of vectors and tensors while the coordinate system remains fixed. sin = cos 250 38. For instance how would I work out, two sides of a triangle are 2. Steps to Use Unit Circle Calculator. Precalculus Mathematics. 284 radians in a full circle (actually two times Pi ). }\) What answer should you expect to get? Subsection Trigonometric Ratios for Obtuse Angles. The final value of $\text{cos}\frac{u}{2}$ is $\frac{3\sqrt{13}}{13}$. The copyright holder makes no representation about the accuracy, correctness, or. The abbreviations "sin," "cos," "tan," "csc," "sec" and "cot" stand for the six trigonometric functions: sine, cosine, tangent, cosecant, secant and cotangent. tan 1 (0) 20. Trigonometry Calculator The Trigonometry calculator is used to determine the trigonometric functions of the input parameters. Mark as New. cos(sin I(cos0)) COS — arcsin(tan0) 12 arccos sinŒ 16. sin 2250 7. 1/x e π ( ) ← CE. cos(sin I(cos0)) COS — arcsin(tan0) 12 arccos sinŒ 16. One may remember the four-quadrant rule: ( A ll. sin 45 0 13. Trigonometric integrals Calculator online with solution and steps. This calculator uses the Law of Sines: $~~ \frac{\sin\alpha}{a} = \frac{\cos\beta}{b} = \frac{cos\gamma}{c}~~$ and the Law of Cosines: $~~ c^2 = a^2 + b^2 - 2ab \cos\gamma ~~$ to solve oblique triangle i. 國中的時候有學過三角函數,sin, cos 和 tan,不過最近竟然需要用來算座標,結果 Google 計算機 輸入 sin(45),怎麼算都不對,後來才知道要加上「度」,這邊列舉了幾種方法。 為什麼沒加上度就不行呢? 因為要這樣算. A very easy way to remember the three rules is to to use the abbreviation SOH CAH TOA. For math, science, nutrition, history. Regards, Sven. Get the free "Unit Circle Exact Values" widget for your website, blog, Wordpress, Blogger, or iGoogle. import time. Trigonometry functions calculator that finds the values of Sin, Cos and Tan based on the known values. In trigonometry, the three sides of a. You're probably familiar with T-pain and autotune. Sine, Cosine and Tangent (often shortened to sin, cos and tan) are each a ratio of sides of a right angled triangle: For a given angle θ each ratio stays the same no matter how big or small the triangle is. So the length of YZ is 5. Sum Difference Identity Tutorial Without Given Value The sum differene identity can also be used to find the exact value of normal trig functions. The identity is ⁡ + ⁡ = As usual, sin 2 θ means (⁡). 
sin^2(x) + cos^2(x) = 1, so combining these we get the equation. A very easy way to remember the three rules is to to use the abbreviation SOH CAH TOA. Calculator by matlab sin, cos, tan. 999624217 but cos (pi/2 radians) is zero. By using this website, you agree to our Cookie Policy. Arcsec x or sec -1 x. cos 58° 15. Compute $$180\degree-\phi\text{. The moment of inertia relative to centroidal axis x-x, can be found by application of the Parallel Axes Theorem (see below). To calculate a function like 'sine' with an argument like 90, input the corresponding function name followed by the argument 90 in parentheses. asin (x) ¶ Return the arc sine of x, in radians. You can click the buttons or type to perform calculations as you would on a physical calculator. The following are graphs of sin, cos & tan. sin(45*PI/180) 平常可以這樣搜尋. tan 15° Find each length. Introduction: Find the following trigonometric ratios by using the definitions of sin(x), cos(x), and tan(x) -- using the mnemonic SOH-CAH-TOA-- and then use your calculator to change each fraction to a decimal. Examples: Use the triangle below to find sin, cos, tan. This eventually gives us an answer of x/2 + sin(2x)/4 +c. S - Select sin in Scientific mode. Search Google for a formula, like: Area of a circle. 3 Double-Angle, Power-Reducing, and Half-Angle Formulas 611 Solution We will apply the formula for twice. tan 2 θ + 1 = sec 2. Very interesting question! A similar question is, how does the calculator figure out the value of sin, cos, etc. The Sine, Cosine and Tangent functions express the ratios of sides of a right triangle. Important note: There is a big difference between csc θ and sin-1 θ. % 10x log x ex ln x 7 8 9 / x2 √x sin sin-1 4 5 6 × ! x3 3√x cos cos-1 1 2 3 - mod xy y√x tan tan-1 0. The periodicity of the trigonometric functions means that there are an infinite number of positive and negative angles that satisfy an equation. By using this website, you agree to our Cookie Policy. 828 radians; Because we want answers between 0 and 2π we 'correct' θ PV by adding 2π (to get 4. Mark as New. a guest May 5th, 2019 72 Never Not a member of Pastebin yet? Sign Up, it unlocks many cool features! raw download. The use of these trigonometric sin and cos has been rapidly increased in resolving engineering, navigation and. Code to add this calci to your website. 计算方法: 用一条边的长度除以另一条边的长度. Sin/Cos/Tan is a very basic form of trigonometry that allows you to find the lengths and angles of right-angled triangles. This formula which connects these three is: cos (angle) = adjacent / hypotenuse. Find the exact value of the expression. By continuing to use this site you consent to the use of cookies on your device as described in our cookie policy unless you have disabled them. The period of sin is 2. 7071067811865476) Type in 2/sqrt(2) (=1. This example uses the Cos method of the Math class to return the cosine of an angle. A half turn, or 180°, or π radian is the period of tan(x) = sin(x) / cos(x) and cot(x) = cos(x) / sin(x), as can be seen from these definitions and the period of the defining trigonometric functions. The calculator will find the inverse sine of the given value in radians and degrees. Sin 30° = 1 (opposite)/2 (Hypotenuse) so it equals ½ =. Adding 360 to -330 gives you 30. Google Classroom Facebook Twitter. Derivative of sin(cos(sin(x))). therefore, cos60 = x / 13. If you want to contact me, probably have some question write me using the contact form or email me on Send Me A Comment. 
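The power-reduction step sketched above (combining $\sin^2 x + \cos^2 x = 1$ with the double-angle formula) can be written out explicitly; it leads to the $x/2 + \sin(2x)/4 + c$ result quoted above: $$\cos^2 x = \frac{1 + \cos 2x}{2}, \qquad \int \cos^2 x \, dx = \int \frac{1 + \cos 2x}{2} \, dx = \frac{x}{2} + \frac{\sin 2x}{4} + C.$$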
For areas in rectangular coordinates, we approximated the region using rectangles; in polar coordinates, we use sectors. This is an online javascript scientific calculator. The range of the function is [-1,1]. Do not use a calculator. The tangent of an angle is the ratio of the opposite side and adjacent side. Using the unit circle calculator is easy and quick. 75 11 =≈− D We will now use inverse trigonometric functions to find missing angle measures of right. For both series, the ratio of the nth to the (n-1)th term tends to zero for all x. y √x 3 √x √x ln log. It is possible to alter the color, font, and certain parts of the code to best fit your website. All that you need to know are any two sides as well as how to use SOHCAHTOA. Returns zero for elements where (x-90)/180 is an integer. I noticed there was a sin, cos and tan function in Python. Solved exercises of Trigonometric integrals. (non-calculator) Evaluate each of the following (draw a quick reference triangle if necessary). Very interesting question! A similar question is, how does the calculator figure out the value of sin, cos, etc. Learn vocabulary, terms, and more with flashcards, games, and other study tools. 0 <= y < π /2 or π /2 < y <= π. You may adjust the accuracy of your results. Free Online Calculator! Don't forget your sohcahtoa. Example: sin (90) Complex Numbers. The sin(θ) is the vertical component , the cos(θ) is the horizontal coordinate of the arc endpoint and the ratio of sin(θ) / cos(θ) is defined as tan(θ). Your calculator has buttons for sin, cos, and tan so to find values of the remaining 3 trigonometric functions we use:. This is an online trigonometry calculator to find out the equivalent values of radians and degrees for the given number. Table of Trigonometric Ratios ANGLE SINE COSINE TANGENT ANGLE SINE COSINE TANGENT 1°. Fx Calculator 350es 84+ calculator sin cos tan — app for Android devices, which is a calculator with many features that will make the process mathematical calculations much easier and faster. 1) sin C 20 21 29 C B A 2) sin C 40 30 50 C B A 3) cos C 36 15 39 C B A 4) cos C 8 17 15 C B A 5) tan A 35 12 37 A B C 6) tan X 27 36 45 X Y Z-1-. Post a small Excel sheet (not a picture) showing realistic & representative. - π /2 < y < π /2. The calculator allows to use most of the trigonometric functions, it is possible to calculate the sine, the cosine and the tangent of an angle through the functions of the same name. tan 1200 21. Sine, Cosine and Tangent. 0 0 0 Login to reply the answers Post. Creates a series of calculations that can be printed, bookmarked, shared and modified. The result is between -pi and pi. Sobel, Nobert Lerner. In trigonometry one often needs to evaluate the reverse of sin, cos, and tan functions. One may remember the four-quadrant rule: ( A ll. ( sin ) tan cos 1 − = Microsoft Word - 25Integration by Parts. [2703875] - Write the trigonometric expression in terms of sine and cosine, and then simplify. As a bonus, consider a right-angled triangle with two 45 degree angles, and short side length of 1. In any right triangle , the sine of an angle x is the length of the opposite side (O) divided by the length of the hypotenuse (H). purpose: This function is the actual calculator and the heart of the application """ # This part is for reading and. Use the triangles above to state the EXACT VALUE of the trig functions WITHOUT using a calculator. Click to expand ^-1 rather than ^-10. We know that the values of tan repeat themselves every. 
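On the question raised above of how a calculator figures out the value of sin, cos, etc., here is a rough Python sketch of one classic approach (a truncated Taylor series with range reduction; an illustration only, not the algorithm of any particular calculator mentioned here):

import math

def sin_approx(x, terms=10):
    """Approximate sin(x), x in radians, with a truncated Taylor series."""
    # Reduce x into [-pi, pi] so that only a few terms are needed.
    x = math.fmod(x, 2 * math.pi)
    if x > math.pi:
        x -= 2 * math.pi
    elif x < -math.pi:
        x += 2 * math.pi
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # Next term of the series: multiply by -x^2 / ((2n+2)(2n+3)).
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

print(sin_approx(math.radians(30)))  # roughly 0.5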
Code to add this calci to your website. Therefore, the cosine would be zero as well. This is a formula calculator. The sine of a certain angle is exactly 0. If a right angles triangle has the two short sides of 1 and 2 then the hypotenuse must be sqrt5. Download free in Windows Store. cos 1 p 3 2! 19. Identities involving trig functions are listed below. π \displaystyle \pi. x, all real numbers. Question Details SPreCalc6 7. ? Or you could ask, what did people do before the calculator was invented, i. Online trigonometric tangent calculator. To use the inverse buttons, typically you will need to press a button labeled 2 nd on your calculator and then the sin, cos or tan button. 98, calculate any other trigonometric values that are available Calculate csc(x) given sin(x):. Sobel, Nobert Lerner. I noticed there was a sin, cos and tan function in Python. Precalculus. By using this website, you agree to our Cookie Policy. Trigonometry calculator solving for secant sec given angle in radians or degrees cosine - cos: sine - sin: tangent - tan: secant - sec: cosecant - csc: cotangent - cot: inverse cosine - arccos: inverse sine - arcsin: inverse tangent - arctan: References - Books: Max A. This video shows you how to do sin, cos and tan calculations on a scientific calculator. In the box that says "Enter value," type the values you know. * SIN (sine) * COS (cosine) * TAN (tangent) * CSC (cosecant) * SEC (secant) * COT (contangent) Other Features: * Simple UI * Send calculation details * Send feedback This app is very usefull for students and teachers of mathematics, trigonometry and mathematical engineering. 2 1 3 2 π 9. Arcsec x or sec -1 x. We write tan (A) or tan A; the number cos (A)/sin (A) is called the cotangent of A. Summary The rotation matrix, \({\bf R}$$, is used in the rotation of vectors and tensors while the coordinate system remains fixed. The graphs of sin and cos are periodic, with period of 360° (in other words the graphs repeat themselves every 360°). ° ' ″ mean stdev stdevp sin⁻¹ cos⁻¹. Use the inverse tan function, tan −1, on your calculator. Online trigonometric tangent calculator. In trigonometry, the three sides of a. 3 ' Define angle in radians. Often remembered by: soh cah toa. com To create your new password, just click the link in the email we sent you. Returns zero for elements where x/180 is an integer and Inf for elements where (x-90)/180. 7071067811865476) Type in 2/sqrt(2) (=1. sin cos tan π 4 5 6 + − ln log 10 1 2 3 % ans , ( ) Free online scientific calculator from GeoGebra: perform calculations with fractions, statistics and. This eventually gives us an answer of x/2 + sin(2x)/4 +c. The abbreviations "sin," "cos," "tan," "csc," "sec" and "cot" stand for the six trigonometric functions: sine, cosine, tangent, cosecant, secant and cotangent. #N#Trigonometric functions. The line CD, cut off the tangent to the circle by the extension of OB, is the "tangent line," it equals tan a and explains why the name "tangent" was given to this quantity. Hence, I put the widget on my website. 1/x e π ( ) ← CE. Unregistered Fast answers need clear examples. NET Framework. Additional overloads are provided in this header ( ) for the integral types: These overloads effectively cast x to a double. Function File: tand (x) Compute the tangent for each element of x in degrees. 4) do i need to include the angle 90, what am i doing wrong and. Tangent θ can be written as tan θ. sin-1 cos-1 tan-1. For the similarity measure, see Cosine similarity. sin = o/h cos = a/h tan = o/a. 
Calculate value of Sin, Cos, Tan, Cot, Cosec, Sec, Sinh, Cosh, Tanh, Coth, Cosech, Sech, Asin, Acos, ATan, ACot, ACosec, ASec and other trigonometry function. }\) Use your calculator to verify the values of $$\sin \phi,~ \cos \phi\text{,}$$ and $$\tan \phi$$ that you found in part (7). 1 4 1 2 1 4 =+cos 2x+ cos2 2x We can reduce the power of cos2 2x using cos2 u = with u = 2x. There are also formulas that consist of sine and cosine and make calculations in arbitrary triangles possible. Right Triangle Trig Calculator Fill in two values and press Calculate. ©2005 BE Shapiro Page 3 This document may not be reproduced, posted or published without permission. In the figure, the point P has a negative x-coordinate, and is appropriately given by x = cosθ, which is a negative number: cosθ = −cos(π−θ). Many properties of the cosine and sine functions can easily be derived from these expansions, such as. the number sin (A)/cos (A) is called the tangent of A. There is the sine function. You can use the rad2deg and deg2rad functions to convert between radians and degrees, or functions like cart2pol to convert between coordinate systems. Tiny Calculator with support of +, -, *, /, ^, sin, cos, tan calculator calculator-application calculators calculatorapp calculator-cpp cpp 16 commits. G o t a d i f f e r e n t a n s w e r? C h e c k i f i t ′ s c o r r e c t. Sine & cosine of complementary angles. Creates a series of calculations that can be printed, bookmarked, shared and modified. Cos(number)" "Cos Function Example This example uses the Cos function to return the cosine of an angle. 1 cosec sin 1 sec cos 1 cot tan 4. So its the answer of Sin, divided by the answer of Cos, equals the tan. sin sec 700 In Chapter 3, we shall prove the following formulas. Home Math Worksheets > Geometry > Using a Calculator (sin, cos, and tan) When we are working with right triangles, we often need to use the three main functions of trigonometry to determine the length of the hypotenuse, adjacent, and opposite sides. This is the currently selected item. Trigonometric functions are available in the. The relationships between the graphs (in rectangular coordinates) of sin(x), cos(x) and tan(x) and the coordinates of a point on a unit circle are explored using an applet. So, I thought I would use these to make a way of aiming in my game, unfortunately, the word description of sin,cos,tan,asin,acos and atan are very confusing. Unregistered Fast answers need clear examples. The vector in the plane from the origin to point (x, y) makes this angle with the positive X axis. b) Cos is the function that computes the value of cosine of an angle in radian. The copyright holder makes no representation about the accuracy, correctness, or. The second one involves finding an angle whose sine is θ. tan 600 33. See also: acosd, cos. 正弦sine (sin),餘弦cosine (cos)和正切tangent (tan)是常用的三角函數,說簡單點就是直角三角形三條邊之間的比例。 如直角三角形之底為a,高為b,斜邊為c,底與斜邊之間的夾角為x,按定義:. The sine function. The range of the function is [-1,1]. Free math problem solver answers your algebra, geometry, trigonometry, calculus, and statistics homework questions with step-by-step explanations, just like a math tutor. Use this trigonometry calculator to easily calculate trigonometry functions in degrees or radians. }\) Use your calculator to verify the values of $$\sin \phi,~ \cos \phi\text{,}$$ and $$\tan \phi$$ that you found in part (7). 
The sin(θ) is the vertical component , the cos(θ) is the horizontal coordinate of the arc endpoint and the ratio of sin(θ) / cos(θ) is defined as tan(θ). , Et Binary, Pen, Logic operators, 9 Number of memories, 6 Regression types (EL-531WBBK EL 531WBBK). Pre-Calculus Parametrics Worksheet Name Show work on separate paper. y = sin−1xhas domain[−1,1]and range[−π 2, π 2] The inverse cosine function y = cos−1x means x = cosy. Now, with that out of the way, let's learn a little bit of trigonometry. Trigonometric sin and cos are ratios of two specific sides in right angle triangle and useful in relating angles and sides of triangles. This trigonometry calculator finds the radiant and degrees of Sine (Sin) Cosine (Cos) Tangent (Tan) Cotangent (Cot) Secant (Sec) Cosecant (Cosec) Arc Sine (ASin) Arc Cosine (ACos) Arc Tangent (ATan) Arc Cotangent (ACot) Arc Secant (ASec) or Arc Cosecant. It is possible to alter the color, font, and certain parts of the code to best fit your website. In my Sin, Cos, and Tan calculator (Interactive version) the "CALCULATOR" sprite won't calculate the edges and therefore can't calculate Sin, Cos and Tan. We will be checking how to find the values of Sin 18, cos 18, tan 18, sin 36,cos 36 degrees with the normal known values of the trigonometric ratios i. Sine, cosine and tangent ratio; Sine, cosine and tangent ratio Miloš Petrović. tan 60 Use a calculator to find each trigonometric ratio. Free math problem solver answers your trigonometry homework questions with step-by-step explanations. There are also formulas that consist of sine and cosine and make calculations in arbitrary triangles possible. sin 64° 14. Online calculator, based on the cosine Law , to solve triangle problems. Your calculator has buttons for sin, cos, and tan so to find values of the remaining 3 trigonometric functions we use:. There are also formulas that consist of sine and cosine and make calculations in arbitrary triangles possible. (1) calculator Using sine, cosine and tangent with a calculator to solve problems Jordan is trying to find the values of a and b. (O is the origin of the system of axis used). Arcsec x or sec -1 x. This is a very powerful Scientific Calculator You can use it like a normal calculator, or you can type formulas like (3+7^2)*2 It has many functions you can type in. Sobel, Nobert Lerner. To use the inverse buttons, typically you will need to press a button labeled 2 nd on your calculator and then the sin, cos or tan button. 5, thus Cos-1 (-. Hence, I put the widget on my website. Trigonometry involves calculating angles and functions of angles, such as the sine, cosine and tangent. This online calculator shows values of hyperbolic functions of given argument. So, I thought I would use these to make a way of aiming in my game, unfortunately, the word description of sin,cos,tan,asin,acos and atan are very confusing. 7) cos 250 8) tan 300 9) sin 300 (705 f. This produces sin-1 on the calculator screen. Använd minnesregeln SOH CAH TOA. The inverse sine y=sin^(-1)(x) or y=asin(x) or y=arcsin(x) is such a function that sin(y)=x. whole code is here !doctype htmlhtmlheadmeta charset=utf-8titleUntitled Document/titlestyle. This clip is just a few minutes of a multi-hour course. Free online scientific calculator from GeoGebra: perform calculations with fractions, statistics and exponential functions, logarithms, trigonometry and much more! sin cos tan π 4 5 6 + − ln log 10 1 2 3 % ans , ( ) 0. 
Proportionality constants are written within the image: sin θ, cos θ, tan θ, where θ is the common measure of five acute angles. image/svg+xml. See more at:. The cosine function. The result is between -pi and pi. - π /2 < y < π /2. Now that we have our unit circle labeled, we can learn how the $\left(x,y\right)$ coordinates relate to the arc length and angle. # 1 4 sin 30 2 4 2 2 2 2 0 = Find the value of tan 60 2 cos 30 without using a calculator. = 22 Use the given trigonometric ratio to determine which angle of the triangle is ∠A. Underneath the calculator, six most popular trig functions will appear - three basic ones: sine, cosine and tangent, and their reciprocals: cosecant, secant and cotangent. For math, science, nutrition, history. Unit 1a: unit circle test review worksheet name: date: period: for #12, find all 6 trig functions for: 1. This online tool helps you to find values for all trigonometric functions sine, cosine, tan and so on in terms of both radian and degrees. The trigonometric functions in MATLAB ® calculate standard trigonometric values in radians or degrees, hyperbolic trigonometric values in radians, and inverse variants of each function. The calculator is a widget from Wolfram Alpha. Deg or Rad in the left corner of the number display tells you what mode you're in. It also changes ln to log 2, and e x to 2 x. The trigonometry chart given here is in Sexagesimal System which means the angles are expressed in degrees. Example: sin (90) Complex Numbers. A tan of 1 means an angle of 45 degrees or π/4. ただし, はsinの係数 を 成分,cosの係数 を 成分とする点Pと原点Oを結ぶ線分OPと 軸のなす角を一般角で表したものである.. Sine and cosine of complementary angles. Arccos x or cos -1 x. π 4 – cos π 3 tan π 6 ii) 2π 3π sin cos 34 5π 5π cos sin 43 + − iii) tan. sin sec 700 In Chapter 3, we shall prove the following formulas. four decimal places. If your angle is 90 degrees, for example, the point will end up right on the y-axis, having an x-coordinate of zero. If your calculator is on the degree mode, it is computing the sine, cosine or tangent of an angle the size of the number you put in. This produces sin-1 on the calculator screen. The other two values will be filled in. Think for a moment about a right-angled triangle. We have angle and in. sin-1 cos-1 tan-1. NET Framework. Get the free "Unit Circle Exact Values" widget for your website, blog, Wordpress, Blogger, or iGoogle. Save worksheet. I know how to do all the sin, cos and tan rules from school, I just need to apply them to the code. cos A = adjacent / hypotenuse = b / c. Definitions 1- Let x be a real number and P(x) a point on a unit circle such that the angle in standard position whose terminal side is segment OP is equal to x radians. # - Select x 3 in Scientific mode. where k can be any integer; that is, the solutions for x consist of sin-1 (y) plus all even multiples of π, together with minus sin-1 (y) plus all odd multiples of π. 8Π (8*pi) = 25. sin cos tan π 4 5 6 + − ln log 10 1 2 3 % ans , ( ) Free online scientific calculator from GeoGebra: perform calculations with fractions, statistics and. Skip navigation Calculator Tutorial 13. 4 Solving equations (EMCGH) The general solution (EMCGJ). Angles are in radians, not degrees, for the standard versions (i. y √x 3 √x √x ln log. 
2020-06-01 04:15:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.795171320438385, "perplexity": 1701.4823532595963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347414057.54/warc/CC-MAIN-20200601040052-20200601070052-00591.warc.gz"}
https://proofwiki.org/wiki/Group_Presentation_of_Dihedral_Group_D4
Dihedral Group D4/Group Presentation Group Presentation of Dihedral Group $D_4$ The group presentation of the dihedral group $D_4$ is given by: $D_4 = \gen {a, b: a^4 = b^2 = e, a b = b a^{-1} }$ Proof We have that the group presentation of the dihedral group $D_n$ is: $D_n = \gen {\alpha, \beta: \alpha^n = \beta^2 = e, \beta \alpha \beta = \alpha^{-1} }$ Setting $n = 4, \alpha = a, \beta = b$, we get: $D_4 = \gen {a, b: a^4 = b^2 = e, b a b = a^{-1} }$ Since $b^2 = e$ gives $b^{-1} = b$, multiplying the relation $b a b = a^{-1}$ on the left by $b$ yields $a b = b a^{-1}$, from which the result follows. $\blacksquare$
2021-04-13 01:56:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4312171936035156, "perplexity": 542.2036139438328}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038071212.27/warc/CC-MAIN-20210413000853-20210413030853-00391.warc.gz"}
https://socratic.org/questions/what-is-the-mass-of-7-50-moles-of-sulfur-dloxide-so-2
# What is the mass of 7.50 moles of sulfur dioxide (SO_2)? Well, sulfur dioxide has a molar mass $64.07 \cdot g \cdot mol^{-1}$...... And to get the mass of $7.5 \cdot mol$ we simply multiply $\text{number of moles}$ $\times$ $\text{molar mass}$, i.e. 7.50*cancel(mol)xx64.07*g*cancel(mol^-1)=??g
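Carrying out the multiplication that the answer leaves as ??: $7.50 \times 64.07 \approx 480.5$, i.e. about $480 \cdot g$ of sulfur dioxide to three significant figures.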
2020-09-24 02:07:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8388620615005493, "perplexity": 1659.4585596659044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400213006.47/warc/CC-MAIN-20200924002749-20200924032749-00094.warc.gz"}
http://codeforces.com/blog/semiexp
### semiexp's blog By semiexp, history, 3 years ago

During our team training on Codeforces Gym, we noticed that code which compiles easily on our own computers gets CE (without even the details of the CE being shown) on the Codeforces judge system. After the training, I investigated the cause of the CE and finally found that the following code produces CE if compiled with GNU C++11:

#include <algorithm>
#include <cstdio>
using namespace std;

struct segtree {
    pair<int, int> data[1<<20];
};

segtree S;

int main() {
    return 0;
}

You may write code like this when you implement segment trees or the like. This code compiles easily with GCC 4.8.4. However, if it is compiled with GCC 5.3.0, the compilation does not finish at all. I suppose that compiling it also takes a very long time with GCC 5.1.0 (which is used on Codeforces), and that the very long compilation time caused the CE without any details. Apparently this problem is a bug in GCC; a related problem is reported on GCC Bugzilla. The bug fix has already been released, so I expect that this problem will be fixed in a later version of GCC. But currently we can't avoid this problem if we use GNU C++11. My suggestions are:
- Avoid using GNU C++11. The code above could be compiled with other compilers (such as GNU C++).
- Avoid using arrays of std::pair in structs or classes, e.g. define a new struct instead of using std::pair for array members in structs.

One of the causes of this problem is that std::pair has a constexpr default constructor.
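For illustration, here is a minimal sketch of the second suggestion, replacing the std::pair member with a hand-rolled struct (the name IntPair is made up here, not from the original post):

#include <cstdio>

// Plain aggregate used in place of std::pair<int, int>.
// Its default constructor is trivial (it performs no member
// initialization), unlike std::pair's constexpr default constructor,
// which the post identifies as one cause of the slow compilation.
struct IntPair {
    int first;
    int second;
};

struct segtree {
    IntPair data[1<<20];
};

segtree S;

int main() {
    return 0;
}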
2019-10-16 20:29:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3801651895046234, "perplexity": 4033.1425801037303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669546.24/warc/CC-MAIN-20191016190431-20191016213931-00264.warc.gz"}
https://www.r-bloggers.com/2019/02/i-just-wanted-the-data-turning-tableau-tidyverse-tears-into-smiles-with-base-r-an-encoding-detective-story/
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't. Those outside the Colonies may not know that Payless—a national chain that made footwear affordable for millions of ‘Muricans who can’t spare 100.00 USD for a pair of shoes their 7 year old will outgrow in a year— is closing. CNBC also had a story that featured a choropleth with a tiny button at the bottom that indicated one could get the data: I should have known this would turn out to be a chore since they used Tableau—the platform of choice when you want to take advantage of all the free software libraries they use to power their premier platform which, in turn, locks up all the data for you so others can’t adopt, adapt and improve. Go. Egregious. Predatory. Capitalism. Anyway. I wanted the data to do some real analysis vs produce a fairly unhelpful visualization (TLDR: layer in Census data for areas impacted, estimate job losses, compute nearest similar Payless stores to see impact on transportation-challenged homes, etc. Y’now, citizen data journalism-y things) so I pressed the button and watched for the URL in Chrome (aye, for those that remember I moved to Firefox et al in 2018, I switched back; more on that in March) and copied it to try to make this post actually reproducible (a novel concept for Tableau fanbois): library(tibble) library(readr) # https://www.cnbc.com/2019/02/19/heres-a-map-of-where-payless-shoesource-is-closing-2500-stores.html tfil <- "~/Data/Sheet_3_data.csv" download.file( "https://public.tableau.com/vizql/w/PAYLESSSTORECLOSINGS/v/Dashboard2/vud/sessions/6A678928620645FF99C7EF6353426CE8-0:0/views/10625182665948828489_7202546092381496425?csv=true&showall=true", tfil ) ## trying URL 'https://public.tableau.com/vizql/w/PAYLESSSTORECLOSINGS/v/Dashboard2/vud/sessions/6A678928620645FF99C7EF6353426CE8-0:0/views/10625182665948828489_7202546092381496425?csv=true&showall=true' ## Error in download.file("https://public.tableau.com/vizql/w/PAYLESSSTORECLOSINGS/v/Dashboard2/vud/sessions/6A678928620645FF99C7EF6353426CE8-0:0/views/10625182665948828489_7202546092381496425?csv=true&showall=true", : ## cannot open URL 'https://public.tableau.com/vizql/w/PAYLESSSTORECLOSINGS/v/Dashboard2/vud/sessions/6A678928620645FF99C7EF6353426CE8-0:0/views/10625182665948828489_7202546092381496425?csv=true&showall=true' ## In addition: Warning message: ## In download.file("https://public.tableau.com/vizql/w/PAYLESSSTORECLOSINGS/v/Dashboard2/vud/sessions/6A678928620645FF99C7EF6353426CE8-0:0/views/10625182665948828489_7202546092381496425?csv=true&showall=true", : ## cannot open URL 'https://public.tableau.com/vizql/w/PAYLESSSTORECLOSINGS/v/Dashboard2/vud/sessions/6A678928620645FF99C7EF6353426CE8-0:0/views/10625182665948828489_7202546092381496425?csv=true&showall=true': HTTP status was '410 Gone' WAT Truth be told I expected a time-boxed URL of some sort (prior experience FTW). Selenium or Splash were potential alternatives but I didn’t want to research the legality of more forceful scraping (I just wanted the data) so I manually downloaded the file (*the horror*) and proceeded to read it in. Well, try to read it in: read_csv(tfil) ## Parsed with column specification: ## cols( ## A = col_logical() ## ) ## Warning: 2092 parsing failures. 
## row col expected actual file ## 1 A 1/0/T/F/TRUE/FALSE '~/Data/Sheet_3_data.csv' ## 2 A 1/0/T/F/TRUE/FALSE '~/Data/Sheet_3_data.csv' ## 3 A 1/0/T/F/TRUE/FALSE '~/Data/Sheet_3_data.csv' ## 4 A 1/0/T/F/TRUE/FALSE '~/Data/Sheet_3_data.csv' ## 5 A 1/0/T/F/TRUE/FALSE '~/Data/Sheet_3_data.csv' ## ... ... .................. ...... ......................... ## See problems(...) for more details. ## ## # A tibble: 2,090 x 1 ## A ## ## 1 NA ## 2 NA ## 3 NA ## 4 NA ## 5 NA ## 6 NA ## 7 NA ## 8 NA ## 9 NA ## 10 NA ## # … with 2,080 more rows WAT Getting a single column back from readr::read_[ct]sv() is (generally) a tell-tale sign that the file format is amiss. Before donning a deerstalker (I just wanted the data!) I tried to just use good ol’ read.csv(): read.csv(tfil, stringsAsFactors=FALSE) ## Error in make.names(col.names, unique = TRUE) : ## invalid multibyte string at 'A' ## In addition: Warning messages: ## 1: In read.table(file = file, header = header, sep = sep, quote = quote, : ## line 1 appears to contain embedded nulls ## 2: In read.table(file = file, header = header, sep = sep, quote = quote, : ## line 2 appears to contain embedded nulls ## 3: In read.table(file = file, header = header, sep = sep, quote = quote, : ## line 3 appears to contain embedded nulls ## 4: In read.table(file = file, header = header, sep = sep, quote = quote, : ## line 4 appears to contain embedded nulls ## 5: In read.table(file = file, header = header, sep = sep, quote = quote, : ## line 5 appears to contain embedded nulls WAT Actually the “WAT” isn’t really warranted since read.csv() gave us some super-valuable info via invalid multibyte string at 'A'. FF FE is a big signal1 2 we’re working with a file in another encoding as that’s a common “magic” sequence at the start of such files. But, I didn’t want to delve into my Columbo persona… I. Just. Wanted. The. Data. So, I tried the mind-bendingly fast and flexible helper from data.table: data.table::fread(tfil) ## Error in data.table::fread(tfil) : ## File is encoded in UTF-16, this encoding is not supported by fread(). Please recode the file to UTF-8. AHA. UTF-16 (maybe). Let’s poke at the raw file: x <- readBin(tfil, "raw", file.size(tfil)) ## also: read_file_raw(tfil) x[1:100] ## [1] ff fe 41 00 64 00 64 00 72 00 65 00 73 00 73 00 09 00 43 00 ## [21] 69 00 74 00 79 00 09 00 43 00 6f 00 75 00 6e 00 74 00 72 00 ## [41] 79 00 09 00 49 00 6e 00 64 00 65 00 78 00 09 00 4c 00 61 00 ## [61] 62 00 65 00 6c 00 09 00 4c 00 61 00 74 00 69 00 74 00 75 00 ## [81] 64 00 65 00 09 00 4c 00 6f 00 6e 00 67 00 69 00 74 00 75 00 There’s our ff fe (which is the beginning of the possibility it’s UTF-16) but that 41 00 harkens back to UTF-16’s older sibling UCS-2. The 0x00‘s are embedded nuls (likely to get bytes aligned). And, there are alot of 09s. Y’know what they are? They’re s. That’s right. Tableau named file full of TSV records in an unnecessary elaborate encoding CSV. Perhaps they broke the “T” on all their keyboards typing their product name so much. ### Living A Boy’s [Data] Adventure Tale At this point we have: • no way to support an automated, reproducible workflow • an ill-named file for what it contains • an overly-encoded file for what it contains • many wasted minutes (which is likely by design to have us give up and just use Tableau. No. Way.) 
At this point I’m in full-on Rockford Files (pun intended) mode and delved down to the command line to use a old, trusted sidekick enca: enca -L none Sheet_3_data.csv ## Universal character set 2 bytes; UCS-2; BMP ## LF line terminators ## Byte order reversed in pairs (1,2 -> 2,1) Now, all we have to do is specify the encoding! read_tsv(tfil, locale = locale(encoding = "UCS-2LE")) ## Error in guess_header_(datasource, tokenizer, locale) : ## Incomplete multibyte sequence WAT Unlike the other 99% of the time (mebbe 99.9%) you use it, the tidyverse doesn’t have your back in this situation (but it does have your backlog in that it’s on the TODO). Y’know who does have your back? Base R!: read.csv(tfil, sep="\t", fileEncoding = "UCS-2LE", stringsAsFactors=FALSE) %>% as_tibble() ## # A tibble: 2,089 x 14 ## Address City Country Index Label Latitude Longitude ## ## 1 1627 O… Aubu… United… 1 Payl… 32.6 -85.4 ## 2 900 Co… Doth… United… 2 Payl… 31.3 -85.4 ## 3 301 Co… Flor… United… 3 Payl… 34.8 -87.6 ## 4 304 Ox… Home… United… 4 Payl… 33.5 -86.8 ## 5 2000 R… Hoov… United… 5 Payl… 33.4 -86.8 ## 6 6140 U… Hunt… United… 6 Payl… 34.7 -86.7 ## 7 312 Sc… Mobi… United… 7 Payl… 30.7 -88.2 ## 8 3402 B… Mobi… United… 8 Payl… 30.7 -88.1 ## 9 5300 H… Mobi… United… 9 Payl… 30.6 -88.2 ## 10 6641 A… Mont… United… 10 Payl… 32.4 -86.2 ## # … with 2,079 more rows, and 7 more variables: ## # Number.of.Records , State , Store.Number , ## # Store.count , Zip.code , State.Usps , ## # statename WAT WOOT! Note that read.csv(tfil, sep="\t", fileEncoding = "UTF-16LE", stringsAsFactors=FALSE) would have worked equally as well. ### The Road Not [Originally] Taken Since this activity decimated productivity, for giggles I turned to another trusted R sidekick, the stringi package, to see what it said: library(stringi) stri_enc_detect(x) ## [[1]] ## Encoding Language Confidence ## 1 UTF-16LE 1.00 ## 2 ISO-8859-1 pt 0.61 ## 3 ISO-8859-2 cs 0.39 ## 4 UTF-16BE 0.10 ## 5 Shift_JIS ja 0.10 ## 6 GB18030 zh 0.10 ## 7 EUC-JP ja 0.10 ## 8 EUC-KR ko 0.10 ## 9 Big5 zh 0.10 ## 10 ISO-8859-9 tr 0.01 And, just so it’s primed in the Google caches for future searchers, another way to get this data (and other data that’s even gnarlier but similar in form) into R would have been: stri_read_lines(tfil) %>% paste0(collapse="\n") %>% as_tibble() ## # A tibble: 2,089 x 14 ## Address City Country Index Label Latitude Longitude ## ## 1 1627 O… Aubu… United… 1 Payl… 32.6 -85.4 ## 2 900 Co… Doth… United… 2 Payl… 31.3 -85.4 ## 3 301 Co… Flor… United… 3 Payl… 34.8 -87.6 ## 4 304 Ox… Home… United… 4 Payl… 33.5 -86.8 ## 5 2000 R… Hoov… United… 5 Payl… 33.4 -86.8 ## 6 6140 U… Hunt… United… 6 Payl… 34.7 -86.7 ## 7 312 Sc… Mobi… United… 7 Payl… 30.7 -88.2 ## 8 3402 B… Mobi… United… 8 Payl… 30.7 -88.1 ## 9 5300 H… Mobi… United… 9 Payl… 30.6 -88.2 ## 10 6641 A… Mont… United… 10 Payl… 32.4 -86.2 ## # … with 2,079 more rows, and 7 more variables: Number of ## # Records , State , Store Number , Store ## # count , Zip code , State Usps , ## # statename (with similar dances to use read_csv() or fread()). ### FIN The night’s quest to do some real work with the data was DoS’d by what I’ll brazenly call a deliberate attempt to dissuade doing exactly that in anything but a commercial program. But, understanding the impact of yet-another massive retail store closing is super-important and it looks like it may be up to us (since the media is too distracted by incompetent leaders and inexperienced junior NY representatives) to do the work. 
Folks who’d like to do the same can grab the UTF-8 encoded actual CSV from this site which has also been run through janitor::clean_names() so there’s proper column types and names to work with. Speaking of which, here’s the cols spec for that CSV: cols( city = col_character(), country = col_character(), index = col_double(), label = col_character(), latitude = col_double(), longitude = col_double(), number_of_records = col_double(), state = col_character(), store_number = col_double(), store_count = col_double(), zip_code = col_character(), state_usps = col_character(), statename = col_character() ) If you do anything with the data blog about it and post a link in the comments so I and others can learn from what you’ve discovered! It’s already kinda scary that one doesn’t even need a basemap to see just how much apart of ‘Murica Payless was:
2021-04-15 05:28:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31643664836883545, "perplexity": 13048.817611782733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038083007.51/warc/CC-MAIN-20210415035637-20210415065637-00267.warc.gz"}
https://dickimaw-books.com/faq.php?itemlabel=shorthyper
glossaries package FAQ How can I make just the short form a hyperlink? If you switch off the hyperlink on first use (using the hyperfirst=false package option), you can create a custom acronym style that inserts the link around just the short form:

\documentclass{article}
\usepackage[hyperfirst=false]{glossaries}

\makeglossaries

% define a custom acronym style (the style name here is arbitrary):
\newacronymstyle{long-short-hyper}%
{%
  % use the "long-short" display style:
  \GlsUseAcrEntryDispStyle{long-short}%
}%
{%
  % use the "long-short" style definitions:
  \GlsUseAcrStyleDefs{long-short}%
  % adjust the full form so that it has a hyperlink for the short part:
  \renewcommand*{\genacrfullformat}[2]{%
    \glsentrylong{##1}##2\space
    (\glshyperlink[\glsentryshort{##1}]{##1})%
  }%
  % same for the plural form:
  \renewcommand*{\genplacrfullformat}[2]{%
    \glsentrylongpl{##1}##2\space
    (\glshyperlink[\glsentryshortpl{##1}]{##1})%
  }%
}

% apply this new style
\setacronymstyle{long-short-hyper}

% now define the acronyms
\newacronym{gnu}{GNU}{Gnu is Not Unix}

\begin{document}
First use: \gls{gnu}. Next use: \gls{gnu}. Full form: \acrfull*{gnu}.

\printglossaries
\end{document}

Note that if you are using glossaries-extra, it uses a completely different abbreviation mechanism so the above won't work. However, you can simply use an abbreviation style such as short-postlong-user.
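A minimal glossaries-extra sketch of that alternative (a rough, untested outline assuming the short-postlong-user style named above):

\documentclass{article}
\usepackage{glossaries-extra}

\makeglossaries
\setabbreviationstyle{short-postlong-user}

\newabbreviation{gnu}{GNU}{Gnu is Not Unix}

\begin{document}
First use: \gls{gnu}. Next use: \gls{gnu}.

\printglossaries
\end{document}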
2021-06-15 21:23:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2659442722797394, "perplexity": 6886.799887245635}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621627.41/warc/CC-MAIN-20210615211046-20210616001046-00627.warc.gz"}
https://backendtea.com/
Biography Gert de Pagter is a software engineer at YourSurprise. His software engineering interests include testing and code quality. Interests • Software engineering • Mathematics • Magic Education • HBO-ICT, 2020 HZ University of Applied Sciences 90% 30% 0% Experience YourSurprise September 2019 – Present Zierikzee My daily job includes improving the website, and improving software quality along the way. YourSurprise September 2019 – February 2020 Zierikzee An internship where i research the relation between software testing and the quality of code. I help the team increase software quality across the board, and introduce more tests in the code base. Ibuildings January 2018 – January 2020 Vlissingen Responsibilities include: • Software development Ibuildings September 2017 – January 2018 Vlissingen During my internship I worked on the call for papers application for the Dutch PHP conference Accomplish­ments Shortest Paths Revisited, NP-Complete Problems and What To Do About Them The fourth of a four part series on algorithms, focusing on NP complete problems. See certificate Greedy Algorithms, Minimum Spanning Trees, and Dynamic Programming The third of a four part series on algorithms, focusing on greedy algorithms and dynamic programming. See certificate Graph Search, Shortest Paths, and Data Structures The second of a four part series on algorithms, focusing on graphs algorithms and data structures. See certificate Divide and Conquer, Sorting and Searching, and Randomized Algorithms The first of a four part series on algorithms, focusing on divide and conquer algorithms See certificate Recent Posts Testing code that generates warnings Our code base has a lot of code that looks like this: <?php try { $this->doScaryThing(); } catch(Exception$e) { trigger_error("Downgraded: " . get_class($e) . ":" .$e->getMessage(), E_USER_WARNING); } Or sometimes trigger_error is used as a way to log other thigns. This makes it rather difficult to test. Thankfully PHPUnit 8.4 has the expectWarning method, that allows us to check this: <?php public function testItTriggersWarning(): void { $object = new Danger();$this->expectWarning(); \$object->doWarningThing(); } This does mean that the execution stops after the warning is triggered, so we can’t assert anything after that. What is the boy scout rule Our code base is a lot like a camp site, and we can learn a thing or two from the boy scouts. Most code bases have far too much comments, while only a few are really usefull. PHP is already strictly typed, in the same way that JavaScript is. Not through the language itself, but with the help of tooling. JavaScript achieves this through tools like Typescript. (I use typescript as an example, as that is what i normally use.) Typescript adds a lot of new syntax to tha language, which allows for type checking. The transpiler then simply won’t transpile if there are type errors( depending on your configuration). The modern PHP developers toolbox The tools a modern PHP developer needs to strive. Projects Packages that I either maintain, or i contribute a lot to. Webmozart Assert An assertion library for PHP Infection A mutation testing framework for PHP Recent & Upcoming Talks Finding bugs in seconds Using static analysis to find bugs, and it only takes a few seconds. Finding bugs in seconds Using static analysis to find bugs, and it only takes a few seconds. Finding bugs in seconds Using static analysis to find bugs, and it only takes a few seconds. 
Finding bugs in seconds Using static analysis to find bugs, and it only takes a few seconds. Contact • KvK number: 78147468
2020-10-20 08:41:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22063559293746948, "perplexity": 4568.806713937898}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107871231.19/warc/CC-MAIN-20201020080044-20201020110044-00000.warc.gz"}
https://cndota.github.io/Qatten/
# Qatten: A General Framework for Cooperative MARL 2020-02-17 Qatten is a novel Q-value Attention network for the multiagent Q-value decomposition problem. Qatten provides a theoretic linear decomposition formula of $Q_{tot}$ and $Q^{i}$ which covers previous methods and achieves state-of-the-art performance on the StarCraft II micro-management tasks across different scenarios. Paper # Introduction In many real-world settings, a team of cooperative agents must learn to coordinate their behavior with private observations and communication constraints. Deep multiagent reinforcement learning algorithms (Deep-MARL) have shown superior performance in these realistic and difficult problems but still suffer from challenges. One branch is the multiagent value decomposition, which decomposes the global shared multiagent Q-value $Q_{tot}$ into individual Q-values $Q^{i}$ to guide individuals’ behaviors. There are few related works, but they either lack the theoretical depth or perform poorly in realistic and complex tasks. To overcome these issues, we proposed the Qatten Attention (Qatten) network, which consist of a theoretical linear decomposing formation from $Q_{tot}$ to each $Q^{i}$ and a theoretically accountable multi-head attention implement. Combining the decomposing theory and tactful practice, Qatten achieves state-of-the-art performance in the challenging and widely adopted Startcraft Multiagent Challenge (SMAC) testbed. # Motivation There are several previous methods which are related to our work. Value Decomposition Network (VDN) (Sunehag et al., 2018) is proposed to learn a centralized but factored $Q_{tot}$, where $Q_{tot}(s, \vec{a})= \sum_{i} Q^{i}(s, a^{i})$. VDN assumes that the additivity exists when $Q^{i}$ is evaluated based on $o^{i}$, which indeed makes an approximation and brings inaccuracy. Besides, VDN severely limits the complexity of centralized action-value functions and ignores any extra state information available during training. Different from VDN, QMIX learns a monotonic multiagent Q-value approximation $Q_{tot}$ (Rashid et al., 2018). QMIX factors the joint action-value $Q_{tot}$ into a monotonic non-linear combination of individual Q-value $Q^{i}$ of each agent which learns via a mixing network. The mixing network with non-negative weights produced by a hynernetwork is responsible for combing the agent’s utilities for the chosen actions into $Q_{tot}(s, \vec{a})$. This nonnegativity ensures that $\frac{\partial Q_{tot}}{\partial Q^{i}} \ge 0$, which in turn guarantees the IGM property (Son et al., 2019). However, QMIX adopts an implicit inexplicable mixing method which lacks of the theoretical insights. Recently, QTRAN (Son et al., 2019) is proposed to guarantee optimal decentralization by using linear constraints between agent utilities and joint action values. However, the constraints on the optimization problem involved is computationally intractable and the corresponding relaxations make QTRAN perform poorly in complex tasks (Mahajan et al., 2019). ## Qatten In this paper, for the first time, we theoretically derive a linear decomposing formation from $Q_{tot}$ to each $Q^{i}$. Based on this theoretical finding, we introduce the multi-head attention mechanism to approximate each term in the decomposing formula with theoretical explanations. In one word, when we investigate the global Q-value $Q_{tot}$ near maximum point in action space, the dependence of $Q_{tot}$ on individual Q-value $Q^{i}$ is approximately linear. Below we explain this theory. 
The $Q_{tot}$ could be viewed as a function in terms of $Q^{i}$. We could prove (see details in our paper) that:

• Theorem 1. There exist constants $c(s), \lambda_i(s)$ (depending on the state $s$), such that when we neglect higher order terms $o(|| \vec{a}- \vec{a}_{o} ||^2)$, the local expansion of $Q_{tot}$ admits the following form: $$Q_{tot}(s, \vec{a}) \approx c(s) + \sum_{i} \lambda_{i}(s) Q^{i}(s, a^{i}).$$ And in a cooperative setting, the constants $\lambda_i(s) \ge 0$.

• Theorem 2. The functional relation between $Q_{tot}$ and $Q^{i}$ appears to be linear in action space, yet contains all the non-linear information. We have the following finer structure of $\lambda_{i}$: $$\lambda_{i} = \sum_{h} \lambda_{i,h},$$ where $\lambda_{i,h}$ is a linear functional of all partial derivatives $\frac{\partial^{h}Q_{tot}}{\partial Q^{i_1} \dots \partial Q^{i_h}}$ of order $h$, and decays super-exponentially fast in $h$.

Based on the above findings, we introduce multi-head attention to realize the deep implementation (Qatten). The overall architecture consists of agents' recurrent Q-value networks representing each agent's individual value function $Q^{i}(\tau^{i}, a^{i})$ and the refined attention-based value-mixing network to model the relation between $Q_{tot}$ and the individual Q-values. The attention-based mixing network takes the individual agents' Q-values and local information as input and mixes them with the global state to produce the value of $Q_{tot}$. Qatten's mixing network perfectly implements the theorems.

# Demonstration

Like previous works, we test Qatten on the SMAC (Samvelyan et al., 2019) platform. Here are some video demonstrations. We also give the median win rate table on all maps. Qatten beats other popular MARL methods across almost all scenarios, which validates its effectiveness.

| Scenario | Qatten | QMIX | COMA | VDN | IQL | QTRAN |
| --- | --- | --- | --- | --- | --- | --- |
| 2s_vs_1sc | 100 | 100 | 97 | 100 | 100 | 100 |
| 2s3z | 97 | 97 | 34 | 97 | 75 | 83 |
| 3s5z | 94 | 94 | 0 | 84 | 9 | 13 |
| 1c3s5z | 97 | 94 | 23 | 84 | 11 | 67 |
| 5m_vs_6m | 74 | 63 | 0 | 63 | 49 | 57 |
| 3s_vs_5z | 96 | 85 | 0 | 87 | 43 | 0 |
| bane_vs_bane | 97 | 62 | 40 | 90 | 97 | 100 |
| 2c_vs_64zg | 65 | 45 | 0 | 19 | 2 | 10 |
| MMM2 | 79 | 61 | 0 | 0 | 0 | 0 |
| 3s5z_vs_3s6z | 16 | 1 | 0 | 0 | 0 | 0 |
2022-01-20 20:05:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8294879794120789, "perplexity": 1699.1948857374389}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302622.39/warc/CC-MAIN-20220120190514-20220120220514-00237.warc.gz"}
https://answers.ros.org/question/378869/tuning-robot_localization-parameters/
# tuning robot_localization parameters

Hi, I would like to receive some advice about my robot localization setup and configuration parameters. I have a working repo to reproduce my results: https://gitlab.com/elgarbe/rl_yaguaro... The hardware is an ICM20602 IMU with an HMC5883 magnetometer connected to a custom-made low-level computer (STM32F722). This is connected to a Raspberry Pi through a serial connection. I'm using rosserial for the ROS communication between the computers. There are two GPS receivers, an M8N and an F9P (ArduSimple with RTK corrections), connected to the Raspberry Pi. The M8N uses nmea_navsat_driver and the F9P uses the ublox ROS driver from Kumar Robotics. So I have the following topics:

/m8n/fix
/f9p/ublox/fix
/chori/imu/data_raw : gyro, accel and orientation obtained just from the magnetometer on the low-level computer
/chori/imu/mag : hard-iron-corrected magnetometer readings
/chori/imu/mag_raw : uncorrected magnetometer readings

I have a launch file, localize.launch, that runs all the nodes involved in robot_localization (RL), including rosbag and rviz. There are two main bagfiles (both ending with _ in the name). The one from 180521 has RTK corrections on f9p/fix and 230521_ doesn't have corrections. The idea is to tune RL parameters with the IMU and the M8N GPS and compare the result with the "ground truth" provided by the F9P (with RTK corrections). Finally, I have the imu_madgwick filter configured so there is an /imu/data topic with orientation obtained from gyro, accel and mag readings.

I have several problems, but right now I think I have some interesting results. When I use the M8N GPS and the Madgwick filter I obtain very bad results. In the first seconds (the IMU is stationary on a table) I've noticed that the position estimate drifts a lot between 2 GPS readings: is this normal? I'm expecting less movement here. Then, as I start moving, the position estimate between 2 GPS readings seems not to be aligned with the movement: the RL orientation is aligned with the orientation output by the Madgwick filter. Then I removed the use of the magnetometer in the Madgwick filter and things got better, but not perfect... What bothers me is that the Madgwick filter output with the magnetometer activated seems to work fine without RL. It gives 0 when facing EAST and 90º when facing NORTH. So, I'm not sure why I get worse results using it. Can someone give some advice? I think that anyone can reproduce my results by cloning my repo. Thanks

What imu_filter_madgwick does is integrate the angular velocities, linear accelerations and magnetometer readings to obtain an orientation estimate. Your robot_localization config file takes into account angular velocities and linear accelerations from the IMU, so you are double counting them. Try setting the IMU angular velocity and linear acceleration entries to false. Also, remove the accelerations unless you have a very, very nice IMU that is well calibrated. That is a likely candidate source of your issues.
2023-03-26 19:17:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17289207875728607, "perplexity": 6086.746828850979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946445.46/warc/CC-MAIN-20230326173112-20230326203112-00563.warc.gz"}
https://www.codesdope.com/blog/article/css-display/
# CSS display Jan. 10, 2018 1606 The display property is used to control the layout of an element. It specifies how an element gets displayed and behaves. CSS display: value; Every element on a web page is a rectangular box (has a box model) and has a default display value, which decides its behaviour. For example, p has a default display value block, and therefore takes the entire space of its parent element that is available to it. The display property is used to change the default display value. ## Values inline : Displays an element as an inline element i.e. generates an inline-level box for the element. This is the default value. An inline element occupies the space that is necessary, and not the whole width. It may or may not start from a new line. It does not accept width and height values. block : Displays an element as a block element i.e. generates a block-level box for the element. A block element starts from a new line and occupies the entire width that is available to it. It accepts width and height values. inline-block : Generates an inline-block box for the element. An inline-block element is similar to an inline element, except that it takes the width and height values. The inside of this element acts as a block-level element (its width and height will push the neighbouring elements horizontally and vertically), but this element acts as an inline element as a whole. flex : Displays an element as a block-level box whose content is laid according to the Flexbox model. inline-flex : Displays an element as an inline box whose content is laid according to the Flexbox model. grid : Displays an element as a block-level box whose content is laid according to the Grid model. inline-grid : Displays an element as an inline box whose content is laid according to the Grid model. run-in : Displays an element as a run-in box, which can be either block or inline depending on the surrounding elements. list-item : Element behaves like a <list> item. table : Element behaves like a <table> element. table-caption : Element behaves like a <caption> element. table-header-group : Element behaves like a <thead> element. table-footer-group : Element behaves like a <tfoot> element. table-row-group : Element behaves like a <tbody> element. table-column-group : Element behaves like a <colgroup> element. table-cell : Element behaves like a <td> element. table-row : Element behaves like a <tr> element. table-column : Element behaves like a <col> element. none : Element is removed from the page flow such that it neither occupies any space nor affects the flow of the layout. initial : Sets the default value of the property. inherit : Inherits the value from parent element. ## Examples HTML <!-- first para --> <p>Lorem ipsum dolor sit amet, <span id="inline">consectetur adipiscing</span> elit. Curabitur nec ... finibus gravida.</p> <!-- second para --> <p>Lorem ipsum dolor sit amet, <span id="block">consectetur adipiscing</span> elit. Curabitur nec ... finibus gravida.</p> <!-- third para --> <p>Lorem ipsum dolor sit amet, <span id="inline-block">consectetur adipiscing</span> elit. Curabitur nec ... finibus gravida.</p> CSS p { width: 700px; background-color: #AACCCC; } span { width: 174px; height: 70px; background-color: blue; } /* Giving different display values to different span */ #inline { display: inline; } #block { display: block; } #inline-block { display: inline-block; } See the Pen Giving values to display property by Aakhya Singh (@aakhya) on CodePen. 
In the above demo, the span elements in all three paragraphs are given a blue background and some width and height values. The span element in the first paragraph is given display: inline, which makes it an inline element (a span element is an inline element by default), so it does not cause the line to break and does not take the explicitly given width and height values. In the second paragraph, the span element is given display: block, which makes it a block element. Since it became a block element, it takes a whole new line and takes the width and height explicitly assigned to it. The span in the third paragraph is given display: inline-block, which makes it an inline-block element. This element does not cause the line to break but takes the width and height explicitly assigned to it.

Look at some other examples.

HTML

<p>This paragraph contains a <span>span with green background</span> and a <div>div with red background</div>. It also contains an image <img src="football.png"> which is aligned with the text.</p>

CSS

p {
  display: inline;
}

/* Giving different display values to different elements */
span {
  background-color: green;
  width: 110px;
  display: inline-block;
}

div {
  background-color: red;
  display: inline;
}

img {
  display: inline;
}

See the Pen Giving values to display property by Aakhya Singh (@aakhya) on CodePen.

In the paragraph in the above demo, the span is made inline-block and the div and the image are made inline using the display property. The span takes the width (110px) explicitly given to it, which is smaller than the length of its text.

See the Pen display: none by Aakhya Singh (@aakhya) on CodePen.

Since display is set to none for h2 in the above demo, it is completely removed from the document and doesn't occupy any space. If you don't want to remove the element from the document and just want to make it invisible, set the visibility property to hidden instead.

See the Pen display: flex by Aakhya Singh (@aakhya) on CodePen.

The above demo shows an example of flexible boxes, about which you can learn in the chapter Flexible Boxes. The parent container div with id wrap has four child div elements having different background colors and some width and height dimensions, the first one being green and the last one blue. The parent container is made a flexible container and its children flexible items by giving the following code to the parent element.

CSS

#wrap {
  display: flex;
  display: -webkit-flex; /* Safari */
}

Safari supports the value with the prefix -webkit-. The direction in which the flex items are placed in the flex container is reversed by giving the value row-reverse to the flex-direction property for the flex container, as shown below. Safari supports the property with the prefix -webkit-.

CSS

#wrap {
  flex-direction: row-reverse;
  -webkit-flex-direction: row-reverse; /* Safari */
}

## Browser Support

This property is supported in all the major browsers. The values flex and inline-flex require the prefix -webkit- to get support from Safari. These values are partially supported in IE. The value run-in is supported only in Opera and in IE 8 and above. The grid values are not supported in Opera and IE.
2019-10-14 13:45:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37378546595573425, "perplexity": 1997.0487126282005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653247.25/warc/CC-MAIN-20191014124230-20191014151730-00080.warc.gz"}
http://www.madregister.com/translations-of-exponential-functions-template/
# translations of exponential functions template

A translations of exponential functions template is a sample that gives information on the translations of exponential functions template doc. When designing a translations of exponential functions template, it is important to consider different formats such as Word and PDF. You may add related information such as transformations of exponential functions notes, translations of exponential functions quizlet, transformations of exponential functions practice, and transformations of exponential functions rules.

Just as with other parent functions, we can apply the four types of transformations—shifts, reflections, stretches, and compressions—to the parent function $f(x)=b^{x}$ without loss of shape. For example, if we begin by graphing a parent function, $f(x)=2^{x}$, we can then graph two vertical shifts alongside it using $d=3$: the upward shift, $g(x)=2^{x}+3$, and the downward shift, $h(x)=2^{x}-3$. The next transformation occurs when we add a constant $c$ to the input of the parent function $f(x)=b^{x}$, giving us a horizontal shift of $c$ units in the opposite direction of the sign. For any constants $c$ and $d$, the function $f(x)=b^{x+c}+d$ shifts the parent function $f(x)=b^{x}$ both horizontally and vertically. Suppose we have an exponential equation of the form $f(x)=b^{x+c}+d$ with $b=2$, $c=1$, and $d=-3$. Then the domain is $(-\infty,\infty)$, the range is $(-3,\infty)$, and the horizontal asymptote is $y=-3$. In the following video, we show more examples of the difference between horizontal and vertical shifts of exponential functions and the resulting graphs and equations. For example, $42=1.2(5)^{x}+2.8$ can be solved to find the specific value of $x$ that makes it a true statement. For a graphing window, use the values –3 to 3 for $x$ and –5 to 55 for $y$, then press [GRAPH]; the x-coordinate of the point of intersection is displayed as 2.1661943, so to the nearest thousandth $x\approx 2.166$. Topics covered include the rules of transformations, horizontal shifts and the y-intercept, and vertical shifts.

A translations of exponential functions template Word file can contain formatting, styles, boilerplate text, headers and footers, as well as AutoText entries.
It is important to define the document styles beforehand in the sample document, as styles define the appearance of Word text elements throughout your document. You may design other styles and formats such as a translations of exponential functions template PDF, PowerPoint, or form. When designing a translations of exponential functions template, you may add related content such as transformations of exponential functions Khan Academy, transformations of exponential functions calculator, graphing exponential functions, and how to move an exponential function left and right in Desmos. How do you translate an exponential function? What are the transformations of exponential functions? How do you translate exponential equations horizontally? How do you graph exponential transformations?
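As a quick worked illustration of the equation mentioned in the example above (the original page solves it graphically; this algebraic route is not in the page, but it reproduces the stated answer $x \approx 2.166$):

$$42 = 1.2(5)^{x} + 2.8 \;\Rightarrow\; 1.2(5)^{x} = 39.2 \;\Rightarrow\; 5^{x} = \tfrac{39.2}{1.2} \approx 32.667 \;\Rightarrow\; x = \tfrac{\ln 32.667}{\ln 5} \approx 2.166.$$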
2020-12-01 11:29:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37175503373146057, "perplexity": 941.2164047072572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141674082.61/warc/CC-MAIN-20201201104718-20201201134718-00271.warc.gz"}
https://1055theking.com/lsf4c/parametric-equations-calculus-8696b6
In order to identify just how much of the ellipse the parametric curve will cover let’s go back to the parametric equations and see what they tell us about any limits on $$x$$ and $$y$$. A table of values of the parametric equations in Example 10.2.7 along with a sketch of their graph.. Eliminate the Parameter, Set up the parametric equation for to solve the equation for . Before we proceed with the rest of the example be careful to not always just assume we will get the full graph of the algebraic equation. All we need to be able to do is solve a (usually) fairly basic equation which by this point in time shouldn’t be too difficult. Exercise. We’ll start by eliminating the parameter as we did in the previous section. However, the parametric equations have defined both $$x$$ and $$y$$ in terms of sine and cosine and we know that the ranges of these are limited and so we won’t get all possible values of $$x$$ and $$y$$ here. All “fully traced out” means, in general, is that whatever portion of the ellipse that is described by the set of parametric curves will be completely traced out. Parametric Equations and Polar Coordinates. Each formula gives a portion of the circle. Example. We just had a lot to discuss in this one so we could get a couple of important ideas out of the way. This is why the table gives the wrong impression. Now, from this work we can see that if we use $$t = - \frac{1}{2}$$ we will get the vertex and so we included that value of $$t$$ in the table in Example 1. In the above formula, f(t) and g(t) refer to x and y, respectively. It can only be used in this example because the “starting” point and “ending” point of the curves are in different places. Observe that the curve is at the point $(3,0)$ when $t_1=-\sqrt{3}$ and $t_2=\sqrt{3}$, so the curve crosses itself at the point $(3,0).$, Since $\frac{dx}{dt}=3t^2-3$ and $$\frac{dy}{dt}=\frac{3(t^2-1)}{2t},$$ at the first point of intersection, $$m_1=\left.\frac{dy}{dx}\right|{t=-\sqrt{3}} =\frac{3(3-1)}{2(-\sqrt{3})} =-\sqrt{3}$$ and an equation of the tangent line is $y-0=-\sqrt{3}(x-3)$ or $y=-\sqrt{3}(x-3).$ At the second point, $$m_2 = \left.\frac{dy}{dx}\right|{t=\sqrt{3}}=\frac{3(3-1)}{2(\sqrt{3})}=\sqrt{3}$$ and an equation of the tangent line is $y-0=\sqrt{3}(x-3)$ or $y=\sqrt{3}(x-3).$. Before we move on to other problems let’s briefly acknowledge what happens by changing the $$t$$ to an nt in these kinds of parametric equations. Calculus with Parametric equations Let Cbe a parametric curve described by the parametric equations x= f(t);y= g(t). However, in order for $$x$$ to decrease, as we know it does in this quadrant, the direction must still be moving a counter-clockwise rotation. x, equals, 8, e, start superscript, 3, t, end superscript. We also put in a few values of $$t$$ just to help illustrate the direction of motion. Consider the cardioid $r= 1 + \cos \theta$. As $t$ increases from $t=a$ to $t =b$, the particle traverses the curve in a specific direction called the orientation of a curve, eventually ending up at the terminal point $(f(b), g(b))$ of the curve. Exercise. Do this by sketching the path, determining limits on $$x$$ and $$y$$ and giving a range of $$t$$’s for which the path will be traced out exactly once (provide it traces out more than once of course). The area between a parametric curve and the x -axis can be determined by using the formula It is more than possible to have a set of parametric equations which will continuously trace out just a portion of the curve. 
To finish the problem then all we need to do is determine a range of $$t$$’s for one trace. Recall we said that these tables of values can be misleading when used to determine direction and that’s why we don’t use them. For, plugging in some values of $$n$$ we get that the curve will be at the top point at. Unfortunately, we usually are working on the whole circle, or simply can’t say that we’re going to be working only on one portion of it. Example. If the function f and g are dierentiable and y is also a … This, in turn means that both $$x$$ and $$y$$ will oscillate as well. Here is that work. First we find the derivatives of $x$ and $y$ with respect to $t$: $\frac{dx}{dt}=3t^2-3$ and $\frac{dy}{dt}=2t.$ To find the point(s) where the tangent line is horizontal, set $\frac{dy}{dt}=0$ obtaining $t=0.$ Since $\frac{dx}{dt} \neq 0$ at this $t$ value, the required point is $(0,0).$ To find the point(s) where the tangent line is vertical, set $\frac{dx}{dt}=0$ obtaining $t=\pm 1.$ Since $\frac{dy}{dt}\neq 0$ at either of these $t$-values, the required points are $(2,1)$ and $(-2,1).$, Example. To see this effect let’s look a slight variation of the previous example. In Example 10.2.5, if we let $$t$$ vary over all real numbers, we'd obtain the entire parabola. We’ll see an example of this later. However, what we can say is that there will be a value(s) of $$t$$ that occurs in both sets of solutions and that is the $$t$$ that we want for that point. Here are a few of them. Any of them would be acceptable answers for this problem. So, we will be at the right end point at $$t = \ldots , - 2\pi , - \pi ,0,\pi ,2\pi , \ldots$$ and we’ll be at the left end point at $$t = \ldots , - \frac{3}{2}\pi , - \frac{1}{2}\pi ,\frac{1}{2}\pi ,\frac{3}{2}\pi , \ldots$$ . There are also a great many curves out there that we can’t even write down as a single equation in terms of only $$x$$ and $$y$$. For example, we could do the following. Show the orientation of the curve. Given the ellipse. Now that we can describe curves using parametric equations, we can analyze many more curves than we could when we were restricted to simple functions. The table seems to suggest that between each pair of values of $$t$$ a quarter of the ellipse is traced out in the clockwise direction when in reality it is tracing out three quarters of the ellipse in the counter-clockwise direction. Therefore, in this case, we now know that we get a full ellipse from the parametric equations. Here is the sketch of this parametric curve. So, because the $$x$$ coordinate of five will only occur at this point we can simply use the $$x$$ parametric equation to determine the values of $$t$$ that will put us at this point. Find $d^2y/dx^2$ given $x=\sqrt{t}$, $y=1/t$. Before addressing a much easier way to sketch this graph let’s first address the issue of limits on the parameter. Applications of Parametric Equations. Note as well that any limits on $$t$$ given in the problem statement can also affect how much of the graph of the algebraic equation we get. Calculus with Parametric Curves . Use the equation for arc length of a parametric curve. Assign any one of the variable equal to t . Outside of that the tables are rarely useful and will generally not be dealt with in further examples. Section 9.3 Calculus and Parametric Equations ¶ permalink. The position of a particle at time $t$ is $(x,y)$ where $x=\sin t$ and $y=\sin^2 t.$ Describe the motion of the particle as $t$ varies over the time interval $[a,b].$, Solution. 
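The tangent-line computation excerpted above can be checked symbolically. The following sketch is not from the original notes: it assumes the parametrization $x = t^3 - 3t$, $y = t^2$ (not stated explicitly in the excerpt, but consistent with the quoted derivatives $dx/dt = 3t^2 - 3$, $dy/dt = 2t$ and the points $(0,0)$, $(2,1)$, $(-2,1)$), and it also works the exercise appearing in this excerpt that asks for $d^2y/dx^2$ when $x=\sqrt{t}$, $y=1/t$.

```python
# Hypothetical check (not from the original notes) using SymPy.
# Assumed curve for the tangent example: x = t**3 - 3*t, y = t**2.
import sympy as sp

t = sp.symbols('t')

# Horizontal/vertical tangents of x = t^3 - 3t, y = t^2.
x, y = t**3 - 3*t, t**2
dxdt, dydt = sp.diff(x, t), sp.diff(y, t)
print(sp.solve(dydt, t))    # [0]      -> horizontal tangent at (0, 0)
print(sp.solve(dxdt, t))    # [-1, 1]  -> vertical tangents at (2, 1) and (-2, 1)
print([(x.subs(t, v), y.subs(t, v)) for v in (-1, 0, 1)])

# Second derivative d2y/dx2 for x = sqrt(t), y = 1/t.
x2, y2 = sp.sqrt(t), 1/t
dydx = sp.diff(y2, t) / sp.diff(x2, t)
d2ydx2 = sp.simplify(sp.diff(dydx, t) / sp.diff(x2, t))
print(d2ydx2)               # 6/t**2
```

The first two printed results agree with the points worked out above; the last line gives $6/t^2$ as the answer to the second-derivative exercise, using the standard formula $\frac{d^2y}{dx^2} = \frac{d}{dt}\!\left(\frac{dy}{dx}\right) \big/ \frac{dx}{dt}$.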
We can stop here as all further values of $$t$$ will be outside the range of $$t$$’s given in this problem. We begin by sketching the graph of a few parametric equations. Explain how to find velocity, speed, and acceleration from parametric equations. CALCULUS BC WORKSHEET ON PARAMETRIC EQUATIONS AND GRAPHING Work these on notebook paper. The collection of points that we get by letting t t be all possible values is the graph of the parametric equations and is called the parametric curve. We should give a small warning at this point. From a quick glance at the values in this table it would look like the curve, in this case, is moving in a clockwise direction. Let’s see if our first impression is correct. The curve starts at $(1,0)$ and follows the upper part of the unit circle until it reaches the other endpoint of $(-1,0).$ Can you think of another set of parametric equations that give the same graph? Before we end this example there is a somewhat important and subtle point that we need to discuss first. Namely. To graph the equations, first we construct a table of values like that in the table below. From this analysis we can get two more ranges of $$t$$ for one trace. Suppose that $$x′(t)$$ and $$y′(t)$$ exist, and assume that $$x′(t)≠0$$. Notice that we made sure to include a portion of the sketch to the right of the points corresponding to $$t = - 2$$ and $$t = 1$$ to indicate that there are portions of the sketch there. For … A reader pointed out that nearly every parametric equation tutorial uses time as its example parameter. OK, so that's our first parametric equation of a line in this class. Solution. The point $$\left( {x,y} \right) = \left( {f\left( t \right),g\left( t \right)} \right)$$ will then represent the location of the ping pong ball in the tank at time $$t$$ and the parametric curve will be a trace of all the locations of the ping pong ball. Now that we have introduced the concept of a parameterized curve, our next step is to learn how to work with this concept in the context of calculus. Exercise. To finish the sketch of the parametric curve we also need the direction of motion for the curve. It’s starting to look like changing the $$t$$ into a 3$$t$$ in the trig equations will not change the parametric curve in any way. Tangent lines to parametric curves and motion along a curve is discussed. We can usually determine if this will happen by looking for limits on $$x$$ and $$y$$ that are imposed up us by the parametric equation. How do you find the parametric equations for a line segment? So, we are now at the point $$\left( {0,2} \right)$$ and we will increase $$t$$ from $$t = \frac{\pi }{2}$$ to $$t = \pi$$. Exercise. It is fairly simple however as this example has shown. The set of points obtained as t varies over the interval I is called the graph of the parametric equations. are called parametric equations and t is called the parameter. David Smith is the CEO and founder of Dave4Math. Calculus. \end{eqnarray*} Here, the parameter $\theta$ represents the polar angle of the position on a circle of radius $3$ centered at the origin and oriented counterclockwise. Well recall that we mentioned earlier that the 3$$t$$ will lead to a small but important change to the curve versus just a $$t$$? In this section we'll employ the techniques of calculus to study these curves. In practice however, this example is often done first. Start by setting the independent variables x and t equal to one another, and then you can write two parametric equations in terms of t: x = t. 
y = -3t + 1.5. This is the second of the two parametric equations in terms of t from the example above (the first being x = t), and together they parameterize the line. The derivative from the $y$ parametric equation, on the other hand, will help us. As noted already, however, there are two small problems with this method: getting a sketch of the parametric curve once we've eliminated the parameter seems fairly simple, but the sketch alone does not show the direction of motion or how much of the curve is traced out. Equations of the form x = f(t), y = g(t) are called parametric equations, and t is called the parameter; each value of t defines a point (x, y) = (f(t), g(t)) that we can plot, and parametric curves have a direction of motion. A circle of radius 3 centered at the origin and oriented counterclockwise can be written as $x = 3\cos\theta$, $y = 3\sin\theta$, where the parameter $\theta$ represents the polar angle of the position on the circle. Typical exercises in this material ask you to eliminate the parameter and find the corresponding rectangular equation, to sketch the curve and show its orientation, to find the points where the tangent line is horizontal or vertical, to use the equation for the arc length of a parametric curve, to apply the formula for surface area to a surface generated by a parametric curve, and to model motion in the plane by finding velocity, speed, and acceleration from parametric equations.
2021-04-14 11:55:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8536406755447388, "perplexity": 230.6030395108239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077810.20/warc/CC-MAIN-20210414095300-20210414125300-00627.warc.gz"}
https://stackoverflow.com/questions/18158930/why-doesnt-grep-give-the-matching-line
# Why doesn't grep give the matching line? I've just noticed that grep -rni 'a2}' * does not give all documents that have a string a2} the matching line. Why is this the case? I've tried to create a minimal example, but when I create a new file and paste the content, it fails. So I've uploaded the file to a Git repository. Perhaps it's a encoding problem. The content of the file is: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \chapter{KV-Diagramme} \label{chap:a2} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \PsTexAbbildungOhneCaption{figures/a2-1} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% Local Variables: %%% mode: latex %%% TeX-master: "skript" %%% End: The result of grep -rni 'a2}' * is moose@pc08 ~/Downloads/algorithms/grep $grep -rni "a2}" * %%% End:master: "skript"%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% but I expected moose@pc08 ~/Downloads/algorithms/grep$ grep -rni "a2}" * \label{chap:a2} Why do I get this result? • I get the following output: tmp.txt:3:\label{chap:a2} – alfasin Aug 10 '13 at 5:29 • That's odd: I get the result you were expecting. – jrd1 Aug 10 '13 at 5:29 • Me too ... same result as expected – woofmeow Aug 10 '13 at 5:39 The file has CR line terminators so it looks a a single-line file: #> file anhang-2.tex anhang-2.tex: LaTeX document, ASCII text, with CR line terminators convert it to Linux format: #> mac2unix anhang-2.tex mac2unix: converting file anhang-2.tex to Unix format ... #> grep -rni 'a2}' anhang-2.tex 3:\label{chap:a2} • How can I check for all .tex files in a folder if they have CR line terminators? – Martin Thoma Aug 10 '13 at 5:50 • – cyberwombat Aug 10 '13 at 5:53 • @moose, file *tex | grep CR – perreal Aug 10 '13 at 5:56 It's because your file is using Mac OS 9 line endings. You will need to first translate to UNIX line endings. How you do so depends on your scenario but you can do one file with this: tr '\r' '\n' < anhang-2.tex > anhang-2.txt Then you will be able to grep that new file. • DOS/Mac OS9 ending... the above code cleans it – cyberwombat Aug 10 '13 at 5:59 • @perreal you are correct about the line termination. I just made a quick assumption. Edited answer. Thanks. – cyberwombat Aug 10 '13 at 6:02
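Following up on the comment above asking how to check every .tex file in a folder (the `file *tex | grep CR` one-liner already does this), here is an equivalent small Python sketch; it is not from the original thread:

```python
# Hypothetical helper (not from the thread): report *.tex files whose line
# endings are carriage returns only (classic Mac OS style), i.e. no "\n" at all.
from pathlib import Path

for path in sorted(Path(".").glob("*.tex")):
    data = path.read_bytes()
    if b"\r" in data and b"\n" not in data:
        print(f"{path}: CR line terminators")
```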
2019-09-19 22:33:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4221222698688507, "perplexity": 7313.927609348696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573735.34/warc/CC-MAIN-20190919204548-20190919230548-00140.warc.gz"}
http://www.theinfolist.com/html/ALL/s/Self-adjoint_operator
TheInfoList

In mathematics, a self-adjoint operator on an infinite-dimensional complex vector space $V$ with inner product $\langle\cdot,\cdot\rangle$ (equivalently, a Hermitian operator in the finite-dimensional case) is a linear map $A$ (from $V$ to itself) that is its own adjoint. If $V$ is finite-dimensional with a given orthonormal basis, this is equivalent to the condition that the matrix of $A$ is a Hermitian matrix, i.e., equal to its conjugate transpose $A^*$. By the finite-dimensional spectral theorem, $V$ has an orthonormal basis such that the matrix of $A$ relative to this basis is a diagonal matrix with entries in the real numbers.
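As a small numerical illustration of the finite-dimensional statement above (this sketch is not part of the original article; the matrix is random and purely illustrative):

```python
# Illustrative check (not from the article): a random Hermitian matrix has real
# eigenvalues and an orthonormal eigenbasis, per the finite-dimensional spectral theorem.
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2                 # Hermitian: A equals its conjugate transpose

eigvals, eigvecs = np.linalg.eigh(A)     # eigh is specialized to Hermitian matrices
print(np.allclose(eigvals.imag, 0))                                   # eigenvalues are real
print(np.allclose(eigvecs.conj().T @ eigvecs, np.eye(n)))             # eigenvectors orthonormal
print(np.allclose(eigvecs.conj().T @ A @ eigvecs, np.diag(eigvals)))  # basis diagonalizes A
```

Running it prints True three times: the eigenvalues are real, the eigenvectors form an orthonormal basis, and in that basis the matrix is diagonal.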
The rest of this article deals with generalizations of this concept to operators on Hilbert spaces of arbitrary dimension. Self-adjoint operators are used in functional analysis and quantum mechanics. In quantum mechanics their importance lies in the Dirac–von Neumann formulation of quantum mechanics, in which physical observables such as position, momentum, angular momentum and spin are represented by self-adjoint operators on a Hilbert space. Of particular significance is the Hamiltonian operator $\hat{H}$ defined by

:$\hat{H} \psi = -\frac{\hbar^2}{2m} \nabla^2 \psi + V \psi,$

which as an observable corresponds to the total energy of a particle of mass $m$ in a real potential field $V$. Differential operators are an important class of unbounded operators. The structure of self-adjoint operators on infinite-dimensional Hilbert spaces essentially resembles the finite-dimensional case.
That is to say, operators are self-adjoint if and only if they are unitarily equivalent to real-valued multiplication operators. With suitable modifications, this result can be extended to possibly unbounded operators on infinite-dimensional spaces. Since an everywhere-defined self-adjoint operator is necessarily bounded, one needs to be more attentive to the domain issue in the unbounded case. This is explained below in more detail.

# Definitions

Let $A$ be an unbounded (i.e. not necessarily bounded) operator with a dense domain $\operatorname{dom} A \subseteq H.$ This condition holds automatically when $H$ is finite-dimensional, since $\operatorname{dom} A = H$ for every linear operator on a finite-dimensional space. Let the inner product $\langle \cdot, \cdot\rangle$ be conjugate-linear on the second argument. This applies to complex Hilbert spaces only. By definition, the adjoint operator $A^*$ acts on the subspace $\operatorname{dom} A^* \subseteq H$ consisting of the elements $y$ for which there is a $z \in H$ such that $\langle Ax,y \rangle = \langle x,z \rangle$ for every $x \in \operatorname{dom} A.$ Setting $A^*y = z$ defines the linear operator $A^*.$

The graph of an (arbitrary) operator $A$ is the set $G(A) = \{(x, Ax) : x \in \operatorname{dom} A\}.$ An operator $B$ is said to extend $A$ if $G(A) \subseteq G(B).$ This is written as $A \subseteq B.$

The densely defined operator $A$ is called symmetric if

:$\langle Ax , y \rangle = \langle x , Ay \rangle,$

for all $x, y \in \operatorname{dom} A.$ As shown below, $A$ is symmetric if and only if $A \subseteq A^*.$

The unbounded, densely defined operator $A$ is called self-adjoint if $G(A) = G(A^*).$ Explicitly, $\operatorname{dom} A = \operatorname{dom} A^*$ and $A = A^*.$ Every self-adjoint operator is symmetric. Conversely, a symmetric operator $A$ for which $\operatorname{dom} A = \operatorname{dom} A^*$ is self-adjoint. In physics, the term Hermitian refers to symmetric as well as self-adjoint operators alike; the subtle difference between the two is generally overlooked.

A subset $\rho(A) \subseteq \mathbb{C}$ is called the resolvent set (or regular set) if for every $\lambda \in \rho(A),$ the (not-necessarily-bounded) operator $A - \lambda I$ has a bounded, everywhere-defined inverse. The complement $\sigma(A) = \mathbb{C} \setminus \rho(A)$ is called the spectrum.
In finite dimensions, $\sigma(A)$ consists exclusively of eigenvalues.

A bounded operator $A$ is self-adjoint if

:$\langle Ax \mid y \rangle = \langle x \mid Ay \rangle$

for all $x$ and $y$ in $H$. If $A$ is symmetric and $\operatorname{dom}(A) = H$, then, by the Hellinger–Toeplitz theorem, $A$ is necessarily bounded. Every bounded linear operator $T : H \to H$ on a Hilbert space $H$ can be written in the form $T = A + i B$ where $A : H \to H$ and $B : H \to H$ are bounded self-adjoint operators.

## Properties of bounded self-adjoint operators

Let $H$ be a Hilbert space and let $A : H \to H$ be a bounded self-adjoint linear operator defined on $\operatorname{dom}(A) = H$.

* $\langle h, A h \rangle$ is real for all $h \in H$.
* $\|A\| = \sup \{ |\langle Ax, x \rangle| : \|x\| = 1 \}$ if $\dim H \neq 0$.
* If the image of $A$, denoted by $\operatorname{ran} A$, is dense in $H$, then $A : H \to \operatorname{ran} A$ is invertible.
* The eigenvalues of $A$ are real and eigenvectors belonging to different eigenvalues are orthogonal.
* If $\lambda$ is an eigenvalue of $A$ then $|\lambda| \leq \|A\|$; in particular, $|\lambda| \leq \sup \{ |\langle Ax, x \rangle| : \|x\| = 1 \}$.
** In general, there may not exist any eigenvalue $\lambda$ such that $|\lambda| = \sup \{ |\langle Ax, x \rangle| : \|x\| = 1 \}$, but if in addition $A$ is compact then there necessarily exists an eigenvalue $\lambda$, equal to either $\|A\|$ or $-\|A\|$, such that $|\lambda| = \sup \{ |\langle Ax, x \rangle| : \|x\| = 1 \}$.
* If a sequence of bounded self-adjoint linear operators is convergent then the limit is self-adjoint.
* There exists a number $\lambda$, equal to either $\|A\|$ or $-\|A\|$, and a sequence $(x_i)_{i=1}^{\infty} \subseteq H$ such that $\lim_{i \to \infty} \|A x_i - \lambda x_i\| = 0$ and $\|x_i\| = 1$ for all $i$.

# Symmetric operators

NOTE: symmetric operators are defined above.

## $A$ is symmetric ⇔ $A \subseteq A^*$

An unbounded, densely defined operator $A$ is symmetric if and only if $A \subseteq A^*.$ Indeed, the if-part follows directly from the definition of the adjoint operator. For the only-if-part, assuming that $A$ is symmetric, the inclusion $\operatorname{dom}(A) \subseteq \operatorname{dom}(A^*)$ follows from the Cauchy–Bunyakovsky–Schwarz inequality: for every $x, y \in \operatorname{dom}(A),$

:$|\langle Ax, y\rangle| = |\langle x, Ay\rangle| \leq \|x\| \cdot \|Ay\|.$

The equality $A = A^*|_{\operatorname{dom}(A)}$ holds due to the equality

:$\langle x, A^*y\rangle = \langle Ax, y\rangle = \langle x, Ay\rangle,$

for every $x, y \in \operatorname{dom} A \subseteq \operatorname{dom} A^*,$ the density of $\operatorname{dom} A,$ and the non-degeneracy of the inner product.

The Hellinger–Toeplitz theorem says that an everywhere-defined symmetric operator is bounded and self-adjoint.
## ''A'' is symmetric ⇔ ⟨''Ax'', ''x''⟩ ∈ R for all ''x''
The only-if part follows directly from the definition (see above). To prove the if-part, assume without loss of generality that the inner product $\langle \cdot, \cdot \rangle$ is anti-linear on the ''first'' argument and linear on the second. (In the reverse scenario, we work with the reversed inner product $\langle x,y\rangle' := \langle y, x \rangle$ instead). The symmetry of $A$ follows from the polarization identity
:$\langle Ax, y\rangle = \tfrac{1}{4}\left(\langle A(x+y), x+y\rangle - \langle A(x-y), x-y\rangle - i\langle A(x+iy), x+iy\rangle + i\langle A(x-iy), x-iy\rangle\right),$
which holds for every $x,y \in \operatorname{dom}A.$
## ‖(''A'' − λ)''x''‖ ≥ ''d''(λ)⋅‖''x''‖
This property is used in the proof that the spectrum of a self-adjoint operator is real. Define $S=\{x \in \operatorname{dom}A : \|x\| = 1\},$ $\textstyle m=\inf_{x \in S} \langle Ax,x \rangle,$ and $\textstyle M=\sup_{x \in S} \langle Ax,x \rangle.$ The values $m,M \in \mathbb{R} \cup \{\pm\infty\}$ are properly defined since $S \neq \emptyset,$ and $\langle Ax,x\rangle \in \mathbb{R}$ due to symmetry. Then, for every $\lambda \in \mathbb{C}$ and every $x \in \operatorname{dom}A,$
:$\Vert (A - \lambda)x\Vert \geq d(\lambda)\cdot \Vert x\Vert,$
where $\textstyle d(\lambda) = \inf_{r \in [m,M]} | r - \lambda| .$
Indeed, let $x \in \operatorname{dom}A \setminus \{0\}.$ By the Cauchy–Schwarz inequality,
:$\Vert (A - \lambda)x\Vert \geq \frac{|\langle (A-\lambda)x, x\rangle|}{\Vert x\Vert} =\left| \left\langle A\frac{x}{\Vert x\Vert},\frac{x}{\Vert x\Vert}\right\rangle - \lambda\right| \cdot \Vert x\Vert \geq d(\lambda)\cdot \Vert x\Vert.$
If $\lambda \notin [m,M],$ then $d(\lambda) > 0,$ and $A - \lambda I$ is called ''bounded below''.
## A simple example
As noted above, the spectral theorem applies only to self-adjoint operators, and not in general to symmetric operators. Nevertheless, we can at this point give a simple example of a symmetric operator that has an orthonormal basis of eigenvectors. (This operator is actually "essentially self-adjoint.") The operator ''A'' below can be seen to have a compact inverse, meaning that the corresponding differential equation ''Af'' = ''g'' is solved by some integral, therefore compact, operator ''G''. The compact symmetric operator ''G'' then has a countable family of eigenvectors which are complete in $L^2$. The same can then be said for ''A''.
Consider the complex Hilbert space $L^2[0,1]$ and the differential operator
: $A = -\frac{d^2}{dx^2}$
with $\operatorname{dom}(A)$ consisting of all complex-valued infinitely differentiable functions ''f'' on $[0,1]$
satisfying the boundary conditions
:$f(0) = f(1) = 0.$
Then integration by parts of the inner product shows that ''A'' is symmetric. The reader is invited to perform integration by parts twice and verify that the given boundary conditions for $\operatorname{dom}(A)$ ensure that the boundary terms in the integration by parts vanish.
The eigenfunctions of ''A'' are the sinusoids
: $f_n(x) = \sin(n \pi x) \qquad n= 1, 2, \ldots$
with the real eigenvalues $n^2\pi^2$; the well-known orthogonality of the sine functions follows as a consequence of the property of being symmetric. We consider generalizations of this operator below.
Let $A$ be an unbounded symmetric operator. $A$ is self-adjoint if and only if $\sigma(A) \subseteq \mathbb{R}.$
A symmetric operator ''A'' is always closable; that is, the closure of the graph of ''A'' is the graph of an operator. A symmetric operator ''A'' is said to be essentially self-adjoint if the closure of ''A'' is self-adjoint. Equivalently, ''A'' is essentially self-adjoint if it has a ''unique'' self-adjoint extension. In practical terms, having an essentially self-adjoint operator is almost as good as having a self-adjoint operator, since we merely need to take the closure to obtain a self-adjoint operator.
# Example: f(x) → x·f(x)
Consider the complex Hilbert space $L^2(\mathbb{R})$, and the operator which multiplies a given function by ''x'':
:$A f(x) = xf(x)$
The domain of ''A'' is the space of all $L^2$ functions $f(x)$ for which $xf(x)$ is also square-integrable. Then ''A'' is self-adjoint. On the other hand, ''A'' does not have any eigenfunctions. (More precisely, ''A'' does not have any ''normalizable'' eigenvectors, that is, eigenvectors that are actually in the Hilbert space on which ''A'' is defined.)
As we will see later, self-adjoint operators have very important spectral properties; they are in fact multiplication operators on general measure spaces.
As has been discussed above, although the distinction between a symmetric operator and a self-adjoint (or essentially self-adjoint) operator is a subtle one, it is important since self-adjointness is the hypothesis in the spectral theorem. Here we discuss some concrete examples of the distinction; see the section below on extensions of symmetric operators for the general theory.
## A note regarding domains
Every self-adjoint operator is symmetric. Conversely, every symmetric operator for which $\operatorname{dom}(A^*) \subseteq \operatorname{dom}(A)$ is self-adjoint. Symmetric operators for which $\operatorname{dom}(A^*)$ is strictly greater than $\operatorname{dom}(A)$ cannot be self-adjoint.
## Boundary conditions
In the case where the Hilbert space is a space of functions on a bounded domain, these distinctions have to do with a familiar issue in quantum physics: One cannot define an operator—such as the momentum or Hamiltonian operator—on a bounded domain without specifying ''boundary conditions''. In mathematical terms, choosing the boundary conditions amounts to choosing an appropriate domain for the operator.
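For the operator $A = -d^2/dx^2$ from the simple example above, the suggested integration by parts can be written out (a short verification, using the document's convention that the conjugate falls on the second argument): for $f, g \in \operatorname{dom}(A)$,
:$\langle Af, g\rangle = \int_0^1 \left(-f''(x)\right)\overline{g(x)}\,dx = \Big[-f'(x)\overline{g(x)}\Big]_0^1 + \int_0^1 f'(x)\overline{g'(x)}\,dx = \int_0^1 f'(x)\overline{g'(x)}\,dx,$
since $g(0) = g(1) = 0$; a second integration by parts, now using $f(0) = f(1) = 0$, turns the last integral into $\int_0^1 f(x)\overline{(-g''(x))}\,dx = \langle f, Ag\rangle$, so $A$ is symmetric. Differentiating directly, $-f_n'' = n^2\pi^2 \sin(n\pi x)$, which confirms the eigenvalues $n^2\pi^2$ quoted above.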
Consider, for example, the Hilbert space $L^2([0,1])$ (the space of square-integrable functions on the interval $[0,1]$). Let us define a "momentum" operator ''A'' on this space by the usual formula, setting Planck's constant equal to 1:
: $Af = -i\frac{df}{dx}.$
We must now specify a domain for ''A'', which amounts to choosing boundary conditions. If we choose
: $\operatorname{dom}(A) = \left\{\text{smooth functions on } [0,1]\right\},$
then ''A'' is not symmetric (because the boundary terms in the integration by parts do not vanish).
If we choose
: $\operatorname{dom}(A) = \left\{\text{smooth functions } f \text{ on } [0,1] : f(0) = f(1) = 0\right\},$
then using integration by parts, one can easily verify that ''A'' is symmetric. This operator is not essentially self-adjoint, however, basically because we have specified too many boundary conditions on the domain of ''A'', which makes the domain of the adjoint too big. (This example is discussed also in the "Examples" section below.)
Specifically, with the above choice of domain for ''A'', the domain of the closure $A^{\mathrm{cl}}$ of ''A'' is
:$\operatorname{dom}\left(A^{\mathrm{cl}}\right) = \left\{\text{absolutely continuous } f \text{ with } f' \in L^2([0,1]) \text{ and } f(0) = f(1) = 0\right\},$
whereas the domain of the adjoint $A^*$ of ''A'' is
:$\operatorname{dom}\left(A^*\right) = \left\{\text{absolutely continuous } f \text{ with } f' \in L^2([0,1])\right\}.$
That is to say, the domain of the closure has the same boundary conditions as the domain of ''A'' itself, just a less stringent smoothness assumption. Meanwhile, since there are "too many" boundary conditions on ''A'', there are "too few" (actually, none at all in this case) for $A^*$.
If we compute $\langle g, Af\rangle$ for $f \in \operatorname{dom}(A)$ using integration by parts, then since $f$ vanishes at both ends of the interval, no boundary conditions on $g$ are needed to cancel out the boundary terms in the integration by parts. Thus, any sufficiently smooth function $g$ is in the domain of $A^*$, with $A^*g = -i\,dg/dx$.
Since the domain of the closure and the domain of the adjoint do not agree, ''A'' is not essentially self-adjoint. After all, a general result says that the domain of the adjoint of $A^\mathrm{cl}$ is the same as the domain of the adjoint of ''A''. Thus, in this case, the domain of the adjoint of $A^\mathrm{cl}$ is bigger than the domain of $A^\mathrm{cl}$ itself, showing that $A^\mathrm{cl}$ is not self-adjoint, which by definition means that ''A'' is not essentially self-adjoint.
The problem with the preceding example is that we imposed too many boundary conditions on the domain of ''A''. A better choice of domain would be to use periodic boundary conditions:
:$\operatorname{dom}(A) = \left\{\text{smooth functions } f \text{ on } [0,1] : f(0) = f(1)\right\}.$
With this domain, ''A'' is essentially self-adjoint.
In this case, we can understand the implications of the domain issues for the spectral theorem. If we use the first choice of domain (with no boundary conditions), all functions $f_\beta(x) = e^{\beta x}$ for $\beta \in \mathbb{C}$ are eigenvectors, with eigenvalues $-i \beta$, and so the spectrum is the whole complex plane. If we use the second choice of domain (with Dirichlet boundary conditions), ''A'' has no eigenvectors at all. If we use the third choice of domain (with periodic boundary conditions), we can find an orthonormal basis of eigenvectors for ''A'', the functions $f_n(x) := e^{2\pi i n x}$.
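For the periodic case just described, a one-line computation confirms both the eigenvalue equation and the boundary condition:
:$A f_n = -i\frac{d}{dx}e^{2\pi i n x} = 2\pi n\, e^{2\pi i n x} = 2\pi n\, f_n, \qquad f_n(0) = f_n(1) = 1,$
so each $f_n$ lies in the periodic domain and the eigenvalues $2\pi n$, $n \in \mathbb{Z}$, are real, as expected for an essentially self-adjoint operator.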
Thus, in this case finding a domain such that ''A'' is self-adjoint is a compromise: the domain has to be small enough so that ''A'' is symmetric, but large enough so that $D(A^*)=D(A)$.
## Schrödinger operators with singular potentials
A more subtle example of the distinction between symmetric and (essentially) self-adjoint operators comes from Schrödinger operators in quantum mechanics. If the potential energy is singular—particularly if the potential is unbounded below—the associated Schrödinger operator may fail to be essentially self-adjoint. In one dimension, for example, the operator
:$\hat{H} := -\frac{d^2}{dx^2} - X^4$
is not essentially self-adjoint on the space of smooth, rapidly decaying functions. In this case, the failure of essential self-adjointness reflects a pathology in the underlying classical system: A classical particle with a $-x^4$ potential escapes to infinity in finite time.
This operator does not have a ''unique'' self-adjoint extension, but it does admit self-adjoint extensions obtained by specifying "boundary conditions at infinity". (Since $\hat{H}$ is a real operator, it commutes with complex conjugation. Thus, the deficiency indices are automatically equal, which is the condition for having a self-adjoint extension. See the discussion of extensions of symmetric operators below.)
In this case, if we initially define $\hat{H}$ on the space of smooth, rapidly decaying functions, the adjoint will be "the same" operator (i.e., given by the same formula) but on the largest possible domain, namely
:$\operatorname{dom}\left(\hat{H}^*\right) = \left\{f \in L^2(\mathbb{R}) : \hat{H}f \in L^2(\mathbb{R})\right\},$
where $\hat{H}f$ is interpreted in the distributional sense. It is then possible to show that $\hat{H}^*$ is not a symmetric operator, which certainly implies that $\hat{H}$ is not essentially self-adjoint. Indeed, $\hat{H}^*$ has eigenvectors with pure imaginary eigenvalues, which is impossible for a symmetric operator. This strange occurrence is possible because of a cancellation between the two terms in $\hat{H}^*$: There are functions $f$ in the domain of $\hat{H}^*$ for which neither $d^2 f/dx^2$ nor $x^4f(x)$ is separately in $L^2(\mathbb{R})$, but the combination of them occurring in $\hat{H}^*$ is in $L^2(\mathbb{R})$. This allows for $\hat{H}^*$ to be nonsymmetric, even though both $d^2/dx^2$ and $X^4$ are symmetric operators.
This sort of cancellation does not occur if we replace the repelling potential $-x^4$ with the confining potential $x^4$.
Conditions for Schrödinger operators to be self-adjoint or essentially self-adjoint can be found in various textbooks, such as those by Berezin and Shubin, Hall, and Reed and Simon listed in the references.
# Spectral theorem
In the physics literature, the spectral theorem is often stated by saying that a self-adjoint operator has an orthonormal basis of eigenvectors. Physicists are well aware, however, of the phenomenon of "continuous spectrum"; thus, when they speak of an "orthonormal basis" they mean either an orthonormal basis in the classic sense ''or'' some continuous analog thereof. In the case of the momentum operator $P = -i\frac{d}{dx}$, for example, physicists would say that the eigenvectors are the functions $f_p(x) := e^{ipx}$, which are clearly not in the Hilbert space $L^2(\mathbb{R})$. (Physicists would say that the eigenvectors are "non-normalizable.") Physicists would then go on to say that these "eigenvectors" are orthonormal in a continuous sense, where the usual Kronecker delta $\delta_{i,j}$ is replaced by a Dirac delta function $\delta\left(p - p'\right)$.
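The "continuous orthonormality" alluded to here is the formal computation (standard in the physics literature, and only formal, since the integral does not converge in the classical sense)
:$\langle f_p, f_{p'}\rangle = \int_{-\infty}^{\infty} \overline{e^{ipx}}\, e^{ip'x}\,dx = \int_{-\infty}^{\infty} e^{i(p'-p)x}\,dx = 2\pi\,\delta(p - p'),$
in which the Kronecker delta of a genuine orthonormal basis is replaced by a Dirac delta, up to the $2\pi$ normalization convention.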
Although these statements may seem disconcerting to mathematicians, they can be made rigorous by use of the Fourier transform, which allows a general $L^2$ function to be expressed as a "superposition" (i.e., integral) of the functions $e^{ipx}$, even though these functions are not in $L^2$. The Fourier transform "diagonalizes" the momentum operator; that is, it converts it into the operator of multiplication by $p$, where $p$ is the variable of the Fourier transform. The spectral theorem in general can be expressed similarly as the possibility of "diagonalizing" an operator by showing it is unitarily equivalent to a multiplication operator. Other versions of the spectral theorem are similarly intended to capture the idea that a self-adjoint operator can have "eigenvectors" that are not actually in the Hilbert space in question.
## Statement of the spectral theorem
Partially defined operators ''A'', ''B'' on Hilbert spaces ''H'', ''K'' are unitarily equivalent if and only if there is a unitary transformation ''U'' : ''H'' → ''K'' such that
* ''U'' maps dom ''A'' bijectively onto dom ''B'',
* $B U \xi = U A \xi ,\qquad \forall \xi \in \operatorname{dom}A.$
A multiplication operator is defined as follows: Let (''X'', Σ, μ) be a countably additive measure space and ''f'' a real-valued measurable function on ''X''. An operator ''T'' of the form
:$[T\psi](x) = f(x)\,\psi(x),$
whose domain is the space of ψ for which the right-hand side above is in $L^2$, is called a multiplication operator.
One version of the spectral theorem can be stated as follows: a densely defined self-adjoint operator on a Hilbert space is unitarily equivalent to a real-valued multiplication operator (and, conversely, every such multiplication operator is self-adjoint). Other versions of the spectral theorem can be found in the spectral theorem article linked to above.
The spectral theorem for unbounded self-adjoint operators can be proved by reduction to the spectral theorem for unitary (hence bounded) operators. This reduction uses the ''Cayley transform'' for self-adjoint operators which is defined in the next section. We might note that if T is multiplication by f, then the spectrum of T is just the essential range of f.
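As a simple illustration of the last remark, take $X = [0,1]$ with Lebesgue measure and $f(x) = x$; the resulting multiplication operator is the position-type operator already met above:
:$[T\psi](x) = x\,\psi(x), \qquad \sigma(T) = \operatorname{ess\,ran}(f) = [0,1],$
and $T$ has no eigenvectors, since $x\psi(x) = \lambda\psi(x)$ forces $\psi = 0$ almost everywhere; this matches the earlier observation that a multiplication operator need not have normalizable eigenvectors.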
## Functional calculus
One important application of the spectral theorem is to define a "functional calculus." That is to say, if $h$ is a function on the real line and $T$ is a self-adjoint operator, we wish to define the operator $h(T)$. If $T$ has a true orthonormal basis of eigenvectors $e_j$ with eigenvalues $\lambda_j$, then $h(T)$ is the operator with eigenvectors $e_j$ and eigenvalues $h\left(\lambda_j\right)$. The goal of functional calculus is to extend this idea to the case where $T$ has continuous spectrum.
Of particular importance in quantum physics is the case in which $T$ is the Hamiltonian operator $\hat{H}$ and $h(x) := e^{-itx/\hbar}$ is an exponential. In this case, the functional calculus should allow us to define the operator
:$U(t) := h\left(\hat{H}\right) = e^{-\frac{it\hat{H}}{\hbar}},$
which is the operator defining the time-evolution in quantum mechanics.
Given the representation of ''T'' as the operator of multiplication by $f$—as guaranteed by the spectral theorem—it is easy to characterize the functional calculus: If ''h'' is a bounded real-valued Borel function on R, then ''h''(''T'') is the operator of multiplication by the composition $h \circ f$.
## Resolution of the identity
It has been customary to introduce the following notation
:$\operatorname{E}_T(\lambda) = \mathbf{1}_{(-\infty,\lambda]} (T)$
where $\mathbf{1}_{(-\infty,\lambda]}$ is the characteristic function (indicator function) of the interval $(-\infty, \lambda]$. The family of projection operators E''T''(λ) is called resolution of the identity for ''T''. Moreover, the following Stieltjes integral representation for ''T'' can be proved:
:$T = \int_{-\infty}^{+\infty} \lambda \, d \operatorname{E}_T(\lambda).$
The definition of the operator integral above can be reduced to that of a scalar valued Stieltjes integral using the weak operator topology. In more modern treatments however, this representation is usually avoided, since most technical problems can be dealt with by the functional calculus.
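For instance, if $T$ is represented as multiplication by $f$, then the functional calculus and the resolution of the identity take a very explicit form:
:$[h(T)\psi](x) = h(f(x))\,\psi(x), \qquad [\operatorname{E}_T(\lambda)\psi](x) = \mathbf{1}_{(-\infty,\lambda]}(f(x))\,\psi(x),$
so for the position operator on $L^2(\mathbb{R})$ (multiplication by $x$), $\operatorname{E}_T(\lambda)$ is simply multiplication by the indicator function of $(-\infty, \lambda]$, and $e^{-it\hat{H}/\hbar}$ becomes multiplication by $e^{-itf(x)/\hbar}$ when $\hat{H}$ is multiplication by $f$.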
## Formulation in the physics literature
In physics, particularly in quantum mechanics, the spectral theorem is expressed in a way which combines the spectral theorem as stated above and the Borel functional calculus using Dirac notation, as follows:
If ''H'' is self-adjoint and ''f'' is a Borel function,
:$f(H) = \int dE\, \left| \Psi_E \right\rangle f(E) \left\langle \Psi_E \right|$
with
:$H \left| \Psi_E\right\rangle = E \left| \Psi_E\right\rangle$
where the integral runs over the whole spectrum of ''H''. The notation suggests that ''H'' is diagonalized by the eigenvectors Ψ''E''. Such a notation is purely formal. One can see the similarity between Dirac's notation and the previous section. The resolution of the identity (sometimes called projection valued measures) formally resembles the rank-1 projections $\left| \Psi_E\right\rangle \left\langle\Psi_E\right|$. In the Dirac notation, (projective) measurements are described via eigenvalues and eigenstates, both purely formal objects. As one would expect, this does not survive passage to the resolution of the identity. In the latter formulation, measurements are described using the spectral measure of $| \Psi \rangle$, if the system is prepared in $| \Psi \rangle$ prior to the measurement. Alternatively, if one would like to preserve the notion of eigenstates and make it rigorous, rather than merely formal, one can replace the state space by a suitable rigged Hilbert space.
If $f = 1$, the theorem is referred to as resolution of unity:
:$I = \int dE\, \left| \Psi_E\right\rangle \left\langle\Psi_E\right|$
In the case where $H_{\text{eff}} = H - i\Gamma$ is the sum of a Hermitian ''H'' and a skew-Hermitian operator $-i\Gamma$ (see skew-Hermitian matrix), one defines the biorthogonal basis set
:$H^*_{\text{eff}} \left| \Psi_E^*\right\rangle = E^* \left| \Psi_E^*\right\rangle$
and writes the spectral theorem as:
:$f\left(H_{\text{eff}}\right) = \int dE\, \left| \Psi_E\right\rangle f(E) \left\langle\Psi_E^*\right|$
(See the Feshbach–Fano partitioning method for the context where such operators appear in scattering theory).
# Extensions of symmetric operators
The following question arises in several contexts: if an operator ''A'' on the Hilbert space ''H'' is symmetric, when does it have self-adjoint extensions? An operator that has a unique self-adjoint extension is said to be essentially self-adjoint; equivalently, an operator is essentially self-adjoint if its closure (the operator whose graph is the closure of the graph of ''A'') is self-adjoint. In general, a symmetric operator could have many self-adjoint extensions or none at all. Thus, we would like a classification of its self-adjoint extensions.
The first basic criterion for essential self-adjointness is the following: a symmetric operator ''A'' is essentially self-adjoint if and only if the ranges of the operators $A - i$ and $A + i$ are dense in ''H''. Equivalently, ''A'' is essentially self-adjoint if and only if the operators $A^* - i$ and $A^* + i$ have trivial kernels. That is to say, ''A'' ''fails to be'' essentially self-adjoint if and only if $A^*$ has an eigenvector with eigenvalue $i$ or $-i$.
Another way of looking at the issue is provided by the Cayley transform of a self-adjoint operator and the deficiency indices. (It is often of technical convenience to deal with closed operators. In the symmetric case, the closedness requirement poses no obstacles, since it is known that all symmetric operators are closable.) Here, ''ran'' and ''dom'' denote the image (in other words, range) and the domain, respectively. W(''A'') is isometric on its domain. Moreover, the range of 1 − W(''A'') is dense in ''H''.
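Concretely, for a densely defined symmetric operator $A$ the Cayley transform used here can be taken to be
:$\operatorname{W}(A) = (A - i)(A + i)^{-1}, \qquad \operatorname{dom}\operatorname{W}(A) = \operatorname{ran}(A + i), \qquad \operatorname{ran}\operatorname{W}(A) = \operatorname{ran}(A - i),$
and the isometry property follows from the identity $\|(A \pm i)x\|^2 = \|Ax\|^2 + \|x\|^2$ for $x \in \operatorname{dom}(A)$, which holds because $\langle Ax, x\rangle$ is real for a symmetric operator.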
Conversely, given any partially defined operator ''U'' which is isometric on its domain (which is not necessarily closed) and such that the range of 1 − ''U'' is dense, there is a (unique) operator
: $\operatorname{S}(U) : \operatorname{ran}(1 - U) \to \operatorname{ran}(1 + U)$
such that
: $\operatorname{S}(U)(x - Ux) = i(x + U x) \qquad x \in \operatorname{dom}(U).$
The operator S(''U'') is densely defined and symmetric.
The mappings W and S are inverses of each other. The mapping W is called the Cayley transform. It associates a partially defined isometry to any symmetric densely defined operator. Note that the mappings W and S are monotone: This means that if ''B'' is a symmetric operator that extends the densely defined symmetric operator ''A'', then W(''B'') extends W(''A''), and similarly for S. This immediately gives us a necessary and sufficient condition for ''A'' to have a self-adjoint extension, as follows: a symmetric densely defined operator ''A'' has self-adjoint extensions if and only if its Cayley transform W(''A'') has unitary extensions defined on all of ''H''.
A partially defined isometric operator ''V'' on a Hilbert space ''H'' has a unique isometric extension to the norm closure of dom(''V''). A partially defined isometric operator with closed domain is called a partial isometry.
Given a partial isometry ''V'', the deficiency indices of ''V'' are defined as the dimension of the orthogonal complements of the domain and range:
:$\begin{align} n_+(V) &= \dim \operatorname{dom}(V)^\perp \\ n_-(V) &= \dim \operatorname{ran}(V)^\perp \end{align}$
We see that there is a bijection between symmetric extensions of an operator and isometric extensions of its Cayley transform. The symmetric extension is self-adjoint if and only if the corresponding isometric extension is unitary.
A symmetric operator has a unique self-adjoint extension if and only if both its deficiency indices are zero. Such an operator is said to be essentially self-adjoint. Symmetric operators which are not essentially self-adjoint may still have a canonical self-adjoint extension. Such is the case for ''non-negative'' symmetric operators (or more generally, operators which are bounded below). These operators always have a canonically defined Friedrichs extension and for these operators we can define a canonical functional calculus. Many operators that occur in analysis are bounded below (such as the negative of the Laplacian operator), so the issue of essential self-adjointness for these operators is less critical.
## Self-adjoint extensions in quantum mechanics
In quantum mechanics, observables correspond to self-adjoint operators.
By Stone's theorem on one-parameter unitary groups, self-adjoint operators are precisely the infinitesimal generators of unitary groups of time evolution operators. However, many physical problems are formulated as a time-evolution equation involving differential operators for which the Hamiltonian is only symmetric. In such cases, either the Hamiltonian is essentially self-adjoint, in which case the physical problem has unique solutions, or one attempts to find self-adjoint extensions of the Hamiltonian corresponding to different types of boundary conditions or conditions at infinity.
Example. The one-dimensional Schrödinger operator with the potential $V(x) = -(1 + |x|)^\alpha$, defined initially on smooth compactly supported functions, is essentially self-adjoint (that is, has a self-adjoint closure) for $0 < \alpha \le 2$ but not for $\alpha > 2$. See Berezin and Shubin, pages 55 and 86, or Section 9.10 in Hall.
The failure of essential self-adjointness for $\alpha > 2$ has a counterpart in the classical dynamics of a particle with potential $V(x)$: The classical particle escapes to infinity in finite time.
Example. There is no self-adjoint momentum operator ''p'' for a particle moving on a half-line. Nevertheless, the Hamiltonian $p^2$ of a "free" particle on a half-line has several self-adjoint extensions corresponding to different types of boundary conditions. Physically, these boundary conditions are related to reflections of the particle at the origin (see Reed and Simon, vol.2).
# Von Neumann's formulas
Suppose ''A'' is symmetric densely defined. Then any symmetric extension of ''A'' is a restriction of ''A''*. Indeed, ''A'' ⊆ ''B'' and ''B'' symmetric yields ''B'' ⊆ ''A''* by applying the definition of dom(''A''*). With $N_\pm = \ker(A^* \mp i)$, the key decomposition is
:$\operatorname{dom}\left(A^{*}\right) = \operatorname{dom}\left(\overline{A}\right) \oplus N_+ \oplus N_-,$
where the sum is direct with respect to the graph inner product of $A^*$. These are referred to as von Neumann's formulas in the Akhiezer and Glazman reference.
# Examples
## A symmetric operator that is not essentially self-adjoint
We first consider the Hilbert space $L^2[0,1]$ and the differential operator
: $D: \phi \mapsto \frac{1}{i} \phi'$
defined on the space of continuously differentiable complex-valued functions on $[0,1]$ satisfying the boundary conditions
:$\phi(0) = \phi(1) = 0.$
Then ''D'' is a symmetric operator as can be shown by integration by parts.
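The integration by parts referred to here is short enough to display: for $\phi, \psi$ in the domain of $D$ (so that $\phi(0)=\phi(1)=\psi(0)=\psi(1)=0$),
:$\langle D\phi, \psi\rangle = \int_0^1 \tfrac{1}{i}\phi'(x)\,\overline{\psi(x)}\,dx = \Big[\tfrac{1}{i}\phi(x)\overline{\psi(x)}\Big]_0^1 + \int_0^1 \phi(x)\,\overline{\tfrac{1}{i}\psi'(x)}\,dx = \langle \phi, D\psi\rangle,$
because the boundary term vanishes when both functions vanish at the endpoints; as noted below, a single periodic-type condition already suffices for this cancellation.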
The spaces ''N''+, ''N''− (defined below) are given respectively by the distributional solutions to the equations
:$\begin{align} -i u' &= i u \\ -i u' &= -i u \end{align}$
which are in $L^2[0,1]$. One can show that each one of these solution spaces is 1-dimensional, generated by the functions $x \mapsto e^{-x}$ and $x \mapsto e^{x}$ respectively. This shows that ''D'' is not essentially self-adjoint, but does have self-adjoint extensions. These self-adjoint extensions are parametrized by the space of unitary mappings ''N''+ → ''N''−, which in this case happens to be the unit circle T.
In this case, the failure of essential self-adjointness is due to an "incorrect" choice of boundary conditions in the definition of the domain of $D$. Since $D$ is a first-order operator, only one boundary condition is needed to ensure that $D$ is symmetric. If we replaced the boundary conditions given above by the single boundary condition
: $\phi(0) = \phi(1)$,
then ''D'' would still be symmetric and would now, in fact, be essentially self-adjoint. This change of boundary conditions gives one particular essentially self-adjoint extension of ''D''. Other essentially self-adjoint extensions come from imposing boundary conditions of the form $\phi(1) = e^{i\theta}\phi(0)$.
This simple example illustrates a general fact about self-adjoint extensions of symmetric differential operators ''P'' on an open set ''M''. They are determined by the unitary maps between the eigenvalue spaces
: $N_\pm = \left\{u \in L^2(M) : P_{\operatorname{dist}}\, u = \pm i u\right\}$
where ''P''dist is the distributional extension of ''P''.
## Constant-coefficient operators
We next give the example of differential operators with constant coefficients. Let
:$P\left(\vec{x}\right) = \sum_\alpha c_\alpha x^\alpha$
be a polynomial on $\mathbb{R}^n$ with ''real'' coefficients, where α ranges over a (finite) set of multi-indices. Thus
: $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)$
and
: $x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}.$
We also use the notation
:$D^\alpha = \frac{1}{i^{|\alpha|}}\, \partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2} \cdots \partial_{x_n}^{\alpha_n}.$
Then the operator ''P''(D) defined on the space of infinitely differentiable functions of compact support on $\mathbb{R}^n$ by
: $P(\operatorname{D}) \phi = \sum_\alpha c_\alpha \operatorname{D}^\alpha \phi$
is essentially self-adjoint on $L^2(\mathbb{R}^n)$.
More generally, consider linear differential operators acting on infinitely differentiable complex-valued functions of compact support. If ''M'' is an open subset of $\mathbb{R}^n$
: $P \phi(x) = \sum_\alpha a_\alpha(x) \left[\operatorname{D}^\alpha \phi\right](x)$
where ''a''α are (not necessarily constant) infinitely differentiable functions. ''P'' is a linear operator
:$C_0^\infty(M) \to C_0^\infty(M).$
Corresponding to ''P'' there is another differential operator, the formal adjoint of ''P''
:$P^\mathrm{t} \phi = \sum_\alpha D^\alpha \left(\overline{a_\alpha}\, \phi\right)$
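As a small concrete instance (the coefficient $a$ below is an arbitrary smooth function, chosen here just for illustration), take the single first-order term $P\phi = a\, \operatorname{D}\phi$ with $\operatorname{D} = \tfrac{1}{i}\tfrac{d}{dx}$ on an open interval; the formula above then reads
:$P^{\mathrm{t}} \phi = \operatorname{D}\left(\overline{a}\, \phi\right) = \frac{1}{i}\left(\overline{a}'\, \phi + \overline{a}\, \phi'\right),$
so $P^{\mathrm{t}} = P$ on $C_0^\infty$ exactly when $a$ is a real constant; for a non-constant real $a$ one obtains a formally symmetric operator by taking $\tfrac{1}{2}(a\operatorname{D} + \operatorname{D}a)$ instead.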
# Spectral multiplicity theory
The multiplication representation of a self-adjoint operator, though extremely useful, is not a canonical representation. This suggests that it is not easy to extract from this representation a criterion to determine when self-adjoint operators ''A'' and ''B'' are unitarily equivalent. The finest grained representation which we now discuss involves spectral multiplicity. This circle of results is called the ''Hahn–Hellinger theory of spectral multiplicity''.
## Uniform multiplicity
We first define ''uniform multiplicity'':
Definition. A self-adjoint operator ''A'' has uniform multiplicity ''n'' where ''n'' is such that 1 ≤ ''n'' ≤ ω if and only if ''A'' is unitarily equivalent to the operator M''f'' of multiplication by the function ''f''(λ) = λ on
: $L^2_\mu\left(\mathbf{R}, \mathbf{H}_n\right) = \left\{\psi : \mathbf{R} \to \mathbf{H}_n \text{ measurable} : \int_\mathbf{R} \|\psi(\lambda)\|^2 \, d\mu(\lambda) < \infty\right\}$
where H''n'' is a Hilbert space of dimension ''n''. The domain of M''f'' consists of vector-valued functions ψ on R such that
: $\int_\mathbf{R} |\lambda|^2\ \| \psi(\lambda)\|^2 \, d\mu(\lambda) < \infty.$
Non-negative countably additive measures μ, ν are mutually singular if and only if they are supported on disjoint Borel sets. This representation is unique in the following sense: For any two such representations of the same ''A'', the corresponding measures are equivalent in the sense that they have the same sets of measure 0.
## Direct integrals
The spectral multiplicity theorem can be reformulated using the language of direct integrals of Hilbert spaces: every self-adjoint operator is unitarily equivalent to multiplication by λ on a direct integral $\int_{\mathbf{R}}^{\oplus} H_\lambda \, d\mu(\lambda)$. Unlike the multiplication-operator version of the spectral theorem, the direct-integral version is unique in the sense that the measure equivalence class of μ (or equivalently its sets of measure 0) is uniquely determined and the measurable function $\lambda\mapsto\dim\left(H_{\lambda}\right)$ is determined almost everywhere with respect to μ. The function $\lambda \mapsto \dim\left(H_\lambda\right)$ is the spectral multiplicity function of the operator.
We may now state the classification result for self-adjoint operators: Two self-adjoint operators are unitarily equivalent if and only if (1) their spectra agree as sets, (2) the measures appearing in their direct-integral representations have the same sets of measure zero, and (3) their spectral multiplicity functions agree almost everywhere with respect to the measure in the direct integral (Proposition 7.24).
## Example: structure of the Laplacian
The Laplacian on $\mathbb{R}^n$ is the operator
:$\Delta = \sum_{i=1}^n \partial_{x_i}^2.$
As remarked above, the Laplacian is diagonalized by the Fourier transform. Actually it is more natural to consider the ''negative'' of the Laplacian −Δ since as an operator it is non-negative (see elliptic operator).
# Pure point spectrum
A self-adjoint operator ''A'' on ''H'' has pure point spectrum if and only if ''H'' has an orthonormal basis $\{e_i\}_{i \in I}$ consisting of eigenvectors for ''A''.
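For a concrete instance of this definition one can return to the first example in this article: the self-adjoint extension of $A = -d^2/dx^2$ on $[0,1]$ with Dirichlet boundary conditions has pure point spectrum, since the functions
:$e_n(x) = \sqrt{2}\,\sin(n\pi x), \qquad n = 1, 2, \ldots,$
form an orthonormal basis of $L^2[0,1]$ consisting of eigenvectors, with eigenvalues $n^2\pi^2$.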
Example. The Hamiltonian for the harmonic oscillator has a quadratic potential ''V'', that is
:$-\Delta + |x|^2.$
This Hamiltonian has pure point spectrum; this is typical for bound state Hamiltonians in quantum mechanics. As was pointed out in a previous example, a sufficient condition that an unbounded symmetric operator has eigenvectors which form a Hilbert space basis is that it has a compact inverse.
# See also
* Compact operator on Hilbert space
* Theoretical and experimental justification for the Schrödinger equation
* Unbounded operator
* Hermitian adjoint
* Positive operator
# References
2022-05-19 02:11:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 231, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9206887483596802, "perplexity": 617.0691854887953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522741.25/warc/CC-MAIN-20220519010618-20220519040618-00630.warc.gz"}
http://lamport.azurewebsites.net/tla/toolbox-1-5-5.html
# Uncorrected TLC Bug

## The Problem

An uncommon but serious bug in TLC has been found that has existed since its initial implementation.  The bug can cause TLC to generate an incorrect set of initial states, or an incorrect set of possible next states when examining a state.  Either can cause TLC not to examine all reachable states.

The error can occur in the following two cases:

1. The possible initial values of some variable  var  are specified by a subformula F(..., var, ...) in the initial predicate, for some operator  F  such that expanding the definition of  F  results in a formula containing more than one occurrence of  var , not all occurring in separate disjuncts of that formula.

2. The possible next values of some variable  var  are specified by a subformula F(..., var', ...) in the next-state relation, for some operator  F  such that expanding the definition of  F  results in a formula containing more than one occurrence of  var' , not all occurring in separate disjuncts of that formula.

An example of the first case is an initial predicate  Init  defined as follows:

   VARIABLES x, ...

   F(var) == \/ var \in 0..99 /\ var % 2 = 0
             \/ var = -1

   Init == /\ F(x)
           /\ ...

The error would not appear if  F  were defined by:

   F(var) == \/ var \in {i \in 0..99 : i % 2 = 0}
             \/ var = -1

or if the definition of  F(x)  were expanded in  Init :

   Init == /\ \/ x \in 0..99 /\ x % 2 = 0
              \/ x = -1
           /\ ...

A similar example holds for case 2 with the same operator  F  and the next-state formula

   Next == /\ F(x')
           /\ ...

## The Workaround

The workaround is to rewrite the initial predicate or next-state relation so it is not in the form that can cause the bug.  The simplest way to do that is to expand (in-line) the definition of  F  in the definition of the initial predicate or next-state relation.
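For case 2, the analogous rewrite expands  F(x')  in the next-state relation.  A sketch of the expanded form, assuming the same operator  F  as above and leaving the elided conjuncts as  ... :

   Next == /\ \/ x' \in 0..99 /\ x' % 2 = 0
              \/ x' = -1
           /\ ...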
2018-01-21 22:38:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9582541584968567, "perplexity": 2590.013999637466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890893.58/warc/CC-MAIN-20180121214857-20180121234857-00471.warc.gz"}
https://chem.libretexts.org/Courses/University_of_Alabama/Chem_101%3A_Rupar/Homework_problems_under_construction/Homework_54
# Homework 54

9.33, 10.60

## 9.33

Using this precipitation reaction: 2KI (aq) + Pb(NO3)2 (aq) ----> PbI2 (s) + 2KNO3 (aq)

*Hint: potassium iodide and lead (II) nitrate reaction

Find the volume of 0.155 M of KI needed to completely react with 0.0754 L of 0.108 M of Pb(NO3)2.

Definitions:

Precipitation Reaction: The mixing of two reacting solutions in which a solid forms as a result.

$Molarity = \dfrac{moles}{Liters}$

Solving Strategy:

Step 1: Establish the ratio of moles of KI to moles of Pb(NO3)2 from the given balanced equation. *Hint: Identify the coefficients of the two reagents

Step 2: We have been given the molarity and volume for Pb(NO3)2. We can now calculate the actual number of moles of Pb(NO3)2 in this reaction using the Molarity equation.

Step 3: Use the number of moles of Pb(NO3)2 calculated in Step 2 to now find moles of KI. Do this using dimensional analysis. Make sure to use the mole-to-mole ratio found in Step 1 for the conversion.

Step 4: Finally, plug the calculated number of moles of KI found in Step 3 into the Molarity equation once again to solve for the volume of KI needed, since the molarity is given and the number of moles has been calculated.

Solution:

Step 1: 2 moles of KI : 1 mole of Pb(NO3)2

Step 2: $0.108M=\dfrac{moles\,Pb(NO_{3})_{2}}{0.0754L}$ $(0.108M)(0.0754L)=0.00814\,moles\,Pb(NO_{3})_{2}$

Step 3: $0.00814\,mols\,Pb(NO_{3})_{2}\times \dfrac{2\,mols\,KI}{1\,mol\,Pb(NO_{3})_{2}}=0.0163\,mols\,KI$

Step 4: $0.155M=\dfrac{0.0163\,mols\,KI}{Liters}$ $\dfrac{0.0163\,mols\,KI}{0.155M}=0.105\,Liters\,KI$

## 10.60

As a lab experiment, the class submerged a 47.3-g aluminum block with an initial temperature of 32.4 degrees C into an unknown mass of water at 73.2 degrees C. The temperature of the final mixture at equilibrium is 53.7 degrees C. Find the unknown mass of the water used in the experiment.

Definitions:

Specific Heat (Cs): the required amount of heat energy to increase the temperature of one gram of a certain substance by 1 degree C.

Heat gained by aluminum = heat lost by water

$Heat(Q)=mass\times C_{s}\times \Delta T$

*Hint: Do not convert temperature to Kelvin
*Hint: Change in Temperature = Tfinal - Tinitial

Specific heat of water = 4.18 J/g·°C
Specific heat of aluminum = 0.903 J/g·°C

Units: Mass = grams; Specific Heat = J/g·°C; Temperature = Celsius

Solving Strategy:

Step 1: By the Law of Conservation of Energy, the heat exchange of aluminum and water can be set equal to each other. We know that aluminum will gain heat while water loses heat until they reach an equilibrium temperature.

Step 2: Set up the equation as: $mass_{water}\times C_{s,water}\times \Delta T_{water}=-mass_{Al}\times C_{s,Al}\times \Delta T_{Al}$

Step 3: Now plug in known values and solve.

Solution:

Solve: $mass_{water}\times 4.18J/g\cdot ^{\circ}C\times (53.7^{\circ}C-73.2^{\circ}C)=-47.3g\times (0.903J/g\cdot ^{\circ}C)\times (53.7^{\circ}C-32.4^{\circ}C)$

$Mass_{water}=\dfrac{-47.3g\times 0.903J/g\cdot ^{\circ}C\times (53.7^{\circ}C-32.4^{\circ}C)}{4.18J/g\cdot ^{\circ}C\times (53.7^{\circ}C-73.2^{\circ}C)}$

$mass_{water}=11.16g$

Homework 54 is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
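A quick consistency check of both results, re-using only the values from the solutions above: for 9.33, $0.105\,L\times 0.155\,M=0.0163\,mols\,KI$, which matches Step 3; for 10.60, the heat gained by the aluminum equals the heat lost by the water, since $47.3g\times 0.903J/g\cdot ^{\circ}C\times 21.3^{\circ}C\approx 9.1\times 10^{2}J$ and $11.16g\times 4.18J/g\cdot ^{\circ}C\times 19.5^{\circ}C\approx 9.1\times 10^{2}J$.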
2022-08-12 00:33:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6697543859481812, "perplexity": 1519.0937236860345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571536.89/warc/CC-MAIN-20220811224716-20220812014716-00062.warc.gz"}