url: stringlengths 14–2.42k
text: stringlengths 100–1.02M
date: stringlengths 19–19
metadata: stringlengths 1.06k–1.1k
https://ezeenotes.in/a-car-is-moving-along-a-straight-road-with-uniform-acceleration-it-passes-through-two-points-%F0%9D%91%83-and-%F0%9D%91%84-separated-by-a-distance-with-velocities-30kmh-%E2%88%921-and-40kmh/
# A car is moving along a straight road with uniform acceleration. It passes through two points 𝑃 and 𝑄 separated by a distance with velocities $30\,kmh^{-1}$ and $40\,kmh^{-1}$ respectively. The velocity of the car midway between 𝑃 and 𝑄 is

Question: A car is moving along a straight road with uniform acceleration. It passes through two points 𝑃 and 𝑄, separated by a distance, with velocities $30\,kmh^{-1}$ and $40\,kmh^{-1}$ respectively. The velocity of the car midway between 𝑃 and 𝑄 is

(A) $33.3kmh^{-1}$
(B) $1kmh^{-1}$
(C) $2\sqrt{2}kmh^{-1}$
(D) $35.35kmh^{-1}$
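A worked check of the kinematics, added here for reference (not part of the original page): with uniform acceleration $a$ over the distance $s$ from 𝑃 to 𝑄, applying $v^2 = u^2 + 2as$ to each half of the interval gives

$$v_M^2 = v_P^2 + 2a\frac{s}{2}, \qquad v_Q^2 = v_M^2 + 2a\frac{s}{2} \;\Rightarrow\; v_M = \sqrt{\frac{v_P^2 + v_Q^2}{2}} = \sqrt{\frac{30^2 + 40^2}{2}} = \sqrt{1250} \approx 35.36\,kmh^{-1},$$

which matches option (D).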
2023-02-08 17:46:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 4, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4738171398639679, "perplexity": 1917.9836039562176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500837.65/warc/CC-MAIN-20230208155417-20230208185417-00223.warc.gz"}
https://dirkmittler.homeip.net/blog/archives/tag/firefox
## Firefox Quantum now available under Debian Linux

There has been an ongoing subject concerning Debian distributions of Linux and Firefox upgrades. Debian users, at least if they were restricting themselves to standard repositories, were being held back to a version of Firefox referred to as Firefox-ESR, which stands for ‘Extended Support Release’. This release was receiving regular security patches, but no major upgrade in the version number, which meant that it was always at some sub-version of Firefox 52… Well, only yesterday, my two Debian / Stretch computers, which I name ‘Plato’ and ‘Klexel’, finally received a much-anticipated upgrade to Firefox Quantum, which is also known as Firefox-ESR, v60… I am happy with Firefox Quantum, but perhaps only because certain earlier, unstable versions of it never made it into the Debian repositories?

There is one observation about this updated browser-version which I need to make. As was announced, Mozilla dropped support for the old ‘Netscape Plugin API’, which I had still been using to custom-compile plug-ins. Instead of using this API, up-to-date developers are being asked to use the ‘firefox-esr-dev’ package. But alas, the last time I checked, this package was not up to version 60… This package was still at version 52…

## Maintaining My Ability to View VRML 2.0 on the Web

What some modern readers may not realize is that even before the Shockwave Flash plug-in allowed it, and before WebGL inherited the responsibility of displaying 3D content in a Web-browser, there existed a more straightforward way to display 3D scenes within our Web-browser, which was referred to as “VRML”. Most browsers today lack the ability to display this format of content, but I usually make sure to custom-compile a version of the plug-in which does this, which is named “FreeWRL”. When doing so, I need to set up the configuration of the source-tree with the following line:

./configure --enable-plugin --with-plugindir=/usr/lib/mozilla/plugins --with-target=motif --with-imageconvert=/usr/bin/convert --with-unzip=/usr/bin/unzip --enable-libeai --enable-docs --with-wget=/usr/bin/wget

And even if I give this command, often the Firefox plug-in will not be built, because an additional dependency which I may not have installed would be ‘npapi-sdk-dev’. This build-dependency gives our computers the header files necessary to compile old-fashioned plug-ins, which ‘Netscape’ and ‘Firefox’ allowed as add-ons, to view additional content-types embedded within the browser. And Mozilla recently gave notice that they would be dropping support for this plug-in API shortly. However, ‘firefox-esr’, available under Linux, still supports this plug-in API.

What I find additionally is that even if I get the most recent versions of ‘FreeWRL’ to compile, the resulting program does not work correctly, and I need to compile an older version instead. Well, on the box which I name ‘Plato’, I just recently compiled and tested v2.3.3 of ‘FreeWRL’ and found that it still works. What I was also reminded of was that support for VRML 1.0 was dropped a long time ago, and that only VRML97 / VRML 2.0 is still supported for on-line viewing. Thus, VRML 2.0 was already defined in 1997. Content can still be found on the Web, even though the examples are sparse. Other examples, not linked to here, such as the NASA examples, were simply hosted on a non-NASA computer and then abandoned, which means that most NASA VRML-links are broken links.
Further, some graphics students will display their VRML-worlds as proof that they’ve achieved some level of competency in graphics in general, but will fail to publish a URL.

## How to Add a Web-browser to GNURoot + XSDL

In this earlier posting – out of several – I had explained that I’ve installed the Android apps “GNURoot Debian” and “XSDL” on my old Samsung Galaxy Tab S (first generation). The purpose is to install Linux software on that tablet, without requiring that I root it. This uses the Android variant of ‘chroot’, which is actually also called ‘proot’, and is quick and painless. However, there are certain things which a ch-rooted Linux system cannot do. One of them is to start services to run in the background. Another is to access hardware, as doing the latter would require access to the host’s ‘/dev’ folder, not the local ch-root’s ‘/dev’ folder. Finally, because XSDL is acting as my X-server, when GNURoot’s guest-software tries to connect to one, there will be no hardware-acceleration, because this X-server is really just an Android app, and does not really correspond to a display device. This last detail can be quite challenging, because in today’s world even many Linux applications require direct rendering, and will not function properly if left just to use X-server protocol, à la legacy-Unix. One such application is any serious Web-browser. This does not result from any malfunction of either Android app, because it just follows from the logic of what the apps are being asked to do. But we’d like to have a Web-browser installed, and will find that “Firefox”, “Arora” etc. all fail over this issue. This initially leaves us in an untenable situation, because even if we were not to use our Linux guest-system for Web-browsing – because there is a ‘real’ Web-browser installed on the (Android) host-system – the happenstance can take place by which a Web-document needs to be viewed anyway – let’s say, because we want to click on an HTML-file that constitutes the online documentation for some Linux-application. What can we do?

## Iceweasel to Firefox Transition

In the past, there had been a split between the Debian / Linux devs and the Mozilla team, where at first Linux was allowed to share “Firefox”. According to that split, Debian / Linux continued to develop its own version of the Web-browser, naming that “Iceweasel”. Recently, this split seems to have been resolved, so that Debian is now offering Firefox again. The packages have been made available. Just now, I did my own transition from the deprecated Iceweasel to Firefox. This process was quick and painless.

Dirk
2019-10-22 20:20:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33867552876472473, "perplexity": 3997.235624528254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987823061.83/warc/CC-MAIN-20191022182744-20191022210244-00200.warc.gz"}
http://crawlingrobotfortress.blogspot.com/2013/
20131205 Thread-safe Java Swing initialization simplified by Jython

Java's Swing API is not thread-safe: initialization and manipulation of Swing components must be done only from the event dispatch thread (EDT). Programmers must use verbose idioms to initialize Swing components on the EDT. Jython is a Python interpreter which runs on top of Java. It provides the functionality of Python, plus a "pythonic" way to interact with existing Java APIs. Consider initializing a Swing application in Jython. According to specifications, even constructing Swing components on the main thread like this can be unsafe:

    from javax.swing import *

    jf = JFrame("Hi")
    # more unsafe initialization here
    # ...
    jf.pack()
    jf.visible = 1

This code might seem OK, or it might crash intermittently and only on some platforms. The only way to conform to specifications is to move this initialization onto the event dispatch thread. The classic Java idiom to achieve this is to wrap the initialization code in a new subclass of Runnable, and pass this Runnable to the event dispatch thread to be executed. In Jython, this looks something like:

    from javax.swing import *
    from java.awt import EventQueue
    from java.lang import Runnable

    class initializeApp(Runnable):
        def run(self):
            global jf
            jf = JFrame("Hi")
            # more unsafe initialization here
            jf.pack()
            jf.visible = 1

    EventQueue.invokeAndWait(initializeApp())

This is a little verbose in Python, and would be significantly more verbose in Java, to the point where programmers would prefer some sort of macro to simplify the process. The Python language feature of "decorators" provides this. In Python, decorators are functions which enhance or modify other functions. They are implemented as a Python function that accepts a function as an argument and returns a function. Here is a Python decorator that will transform any function into a version which is thread-safe for Swing:

    from java.awt import EventQueue
    from java.lang import Runnable

    def EDTsafe(function):
        def safe(*args, **kwargs):
            # call directly if we are already on the event dispatch thread
            if EventQueue.isDispatchThread():
                return function(*args, **kwargs)
            else:
                class foo(Runnable):
                    def run(self):
                        self.result = function(*args, **kwargs)
                f = foo()
                EventQueue.invokeAndWait(f)
                return f.result
        safe.__name__ = function.__name__
        safe.__doc__ = function.__doc__
        safe.__dict__.update(function.__dict__)
        return safe

Using this decorator, we can do thread-safe Swing initialization succinctly:

    from javax.swing import *

    @EDTsafe
    def initializeApp():
        global jf
        jf = JFrame("Hi")
        # more unsafe initialization here
        jf.pack()
        jf.visible = 1

    initializeApp()

There is also the option of using the decorator in one-line lambda functions, for example:

    EDTsafe( lambda: jf.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE) )()

20130722 test

Sorry, your browser does not support canvas. Sure, why not.

20130615 Optimizing a hue rotation operator

Hue rotation is a shift of the hue in an HSV or HSL color model. I am interested in implementing a hue rotation for Java. The colors are input in RGB format, with each component ranging from 0 to 255. The first implementation I tested uses HSV color space.
It combines an RGB to HSV conversion, hue shifting, and HSV to RGB conversion into a single series of operations:

    float min = r<b?g<r?g:r:g<b?g:b;
    float value = r>b?g>r?g:r:g>b?g:b;
    float C = value - min;
    if (C!=0.f) { //with no saturation, hue is undefined
        float hue;
        // CONVERT FROM RGB TO HSV
        if (value==r) hue = 6+(g-b)/C;
        else if (value==g) hue = 2+(b-r)/C;
        else hue = 4+(r-g)/C;
        // ROTATE THE HUE
        hue += fhuerotate;
        hue %= 6.f;
        // CONVERT FROM HSV TO RGB
        r=g=b=min;
        switch((int)hue) {
            case 0: r+=C; g+=C*hue; break;
            case 1: r+=C*(2-hue); g+=C; break;
            case 2: g+=C; b+=C*(hue-2); break;
            case 3: g+=C*(4-hue); b+=C; break;
            case 4: r+=C*(hue-4); b+=C; break;
            case 5: r+=C; b+=C*(6-hue); break;
        }
    }

This approach is decent. The branching associated with the different hues is cheap on the CPU, and there aren't that many floating point multiplications or divisions. The hue space is mapped to [0,6) rather than [0,2π), which simplifies the computation.

It is also possible to implement HSV rotation as a linear transformation of the RGB color components. To start, it is possible to derive a notion of hue and chroma that relates to the RGB components without casing on hue. This is taken directly from Wikipedia, and more information can be found there.

$\alpha=\frac{1}{2}(2R - G - B)$
$\beta =\frac{\sqrt{3}}{2}(G - B)$
$H_2 =\operatorname{atan2}(\beta, \alpha)$
$C_2 =\sqrt{\alpha^2 + \beta^2}$

This gives us a Cartesian representation of hue and chroma. To complete this color space, we need a notion of "brightness". The "value" and "lightness" of HSV and HSL color spaces are defined in terms of the brightest and dimmest color component. Another notion of brightness is simply the average of the three RGB color components, termed "intensity". Intensity facilitates representing hue rotation as a linear transform in RGB space. The intensity value, combined with the Cartesian hue and chroma (or $\alpha$ and $\beta$), forms a complete colorspace. We can do hue rotation by rotating the Cartesian ($\alpha$,$\beta$) plane, without ever explicitly converting to hue and chroma. Once you write hue rotation as a transform in the ($\alpha$,$\beta$) plane, and combine that with converting from RGB to our HCI color-space and back, you are left with a linear transformation on the RGB color components. This transformation can be simplified to reduce the number of floating point multiplications. Expensive constants (Q1 and Q2 below) related to the rotation can be computed before iterating over the image. Compute these as a function of the hue rotation "th" ahead of time, before iterating over the image:

    Q1 = sin(th)/sqrt(3)
    Q2 = (1-cos(th))/3

Then, for each RGB pixel, the hue rotation is:

    rb = r-b;
    gr = g-r;
    bg = b-g;
    r1 = Q2*(gr-rb)-Q1*bg+r;
    Z = Q2*(bg-rb)+Q1*gr;
    g += Z + (r-r1);
    b -= Z;
    r = r1;
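For reference, here is a compact, self-contained Java sketch of the per-pixel transform above; the class and method names are mine, and the final clamp to [0,255] is my addition (the linear transform preserves intensity but can leave the RGB cube):

    public final class HueRotate {
        /** Rotate the hue of a packed 0xAARRGGBB pixel by th radians,
         *  using the linear (alpha, beta)-plane transform described above. */
        public static int rotate(int argb, double th) {
            // Angle-only constants; hoist these out of the per-pixel loop in real code.
            double q1 = Math.sin(th) / Math.sqrt(3.0);
            double q2 = (1.0 - Math.cos(th)) / 3.0;

            double r = (argb >> 16) & 0xFF;
            double g = (argb >> 8) & 0xFF;
            double b = argb & 0xFF;

            double rb = r - b, gr = g - r, bg = b - g;
            double r1 = q2 * (gr - rb) - q1 * bg + r;
            double z = q2 * (bg - rb) + q1 * gr;
            g += z + (r - r1);
            b -= z;
            r = r1;

            // Clamp back into the RGB cube before repacking.
            int ri = (int) Math.max(0, Math.min(255, Math.round(r)));
            int gi = (int) Math.max(0, Math.min(255, Math.round(g)));
            int bi = (int) Math.max(0, Math.min(255, Math.round(b)));
            return (argb & 0xFF000000) | (ri << 16) | (gi << 8) | bi;
        }
    }

Hoisting q1 and q2 out of the image loop is exactly the precomputation the post recommends; only adds, subtracts, and a few multiplies remain per pixel.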
One major difference between the HSV hue rotation and this one is how it treats the chroma component. Subjectively, the brightness of yellow, magenta, and cyan is kept more equal to red, green, and blue using this operator. In cases where you want to preserve subjective brightness, this is a good thing. However, this also makes yellows seem less colorful than one might expect for a hue rotation operator. This is related to how chroma is represented in the HCI color space. To paraphrase Wikipedia: the two definitions of chroma ($C$ and $C_2$) differ substantially: they are equal at the corners of our hexagon, but at points halfway between two corners, such as $H=H_2=30^\circ$, we have $C=1$ but $C_2\approx 0.866$, a difference of about 13.4%.

One patch is to restore the maximum color value after hue rotation. This preserves the "value" of the HSV color model and makes yellows brighter:

    max = r>b?g>r?g:r:g>b?g:b;
    rb = r-b;
    gr = g-r;
    bg = b-g;
    r1 = Q2*(gr-rb)-Q1*bg+r;
    Z = Q2*(bg-rb)+Q1*gr;
    g += Z + (r-r1);
    b -= Z;
    r = r1;
    adjust = max / ( r>b?g>r?g:r:g>b?g:b );
    b *= adjust;

Representing hue rotation as a linear transform is slightly faster on my system, and could be substantially faster in other environments (e.g. where division is very costly). The subjective performance of these algorithms is comparable. The image at the start of this post was generated using the ($\alpha$,$\beta$) rotation approach without correcting for value.

20130320 Generating Vivid Geometric Hallucinations using Flicker Phosphenes with the “Neurolyzer Table”

SaikoLED, a Cambridge, MA based open-source and open-hardware lighting company, modified their "Neurolyzer" display table prototype to induce flicker-phosphene geometric visual hallucinations. I contributed a brief writeup on one of the neurological theories of how such hallucinations arise, which is included in their post (draft mirrored on Dropbox). Shown to the left is their Neurolyzer display table being used with some of Nervous System's Hyphae and Radiolaria designs.

20130319 Charlieplexing with LED dot matrix modules

(hosted for now at Dropbox but copied here for stability)

Pin savings of Charlieplexing, easy assembly of multiplexed LED modules. Charlieplexing (named after Charlie Allen) is a great way to save pins on a microcontroller. Since each pin can be either high, low, or off ('high impedance'), and LEDs conduct in only one direction, one can place two LEDs for each unique pair of microcontroller pins. This also works with other devices, like buttons, but you can only place one for each pair of pins, since they conduct in both directions*. However, a recent foray into designing with Charlieplexing revealed its major drawback to me: soldering a zillion discrete LEDs is very time-consuming and not for everyone. It is easier to use LED modules, which have LEDs already wired up, and are designed to be driven by multiplexing. For an N by M multiplexed grid you need N+M driving pins. However, for an N by M charlieplexed grid you need only K pins, where K(K-1)=NM**. However, there is often a way to Charlieplex LED matrices to save pins without increasing assembly difficulty.

Charlieplexing logical grid

One might ask: how the hell am I supposed to keep track of all the possible combinations used in Charlieplexing? Since each pin can be either high (positive, or anode) or low (negative, or cathode), we can draw a K by K grid for K pins, where the cases where a pin acts as an anode are on one axis, and as a cathode, the other. Along the diagonal you have sites where a pin is supposed to act as both an anode and a cathode -- these are forbidden, and are blacked out. Here is an example grid for 16 pins:

Placing modules

I can now place components on this grid to fill it up. Say I have an 8x8 LED matrix with 8 cathodes and 8 anodes. All I have to do is find an 8x8 space large enough to hold it somewhere on the grid. For example, two 8x8 LED matrices fit into this 16-pin grid. Another common size for LED matrices is 5x7.
We can fit two of them on 12 pins like so:

Now it gets fun. It's OK for components to wrap around the sides. We can fit four 5x7 matrices (for a 10x14 pixel game board, perhaps?) on 16 pins like this: We can fit six 5x7 matrices on 18 pins (for a 10x21 pixel game board, perhaps? Large enough for original Tetris!). Eight 5x7 matrices fit on 20 pins. 8x8 matrices are a little more clunky, but you can still fit 3 of them onto 20 pins, or 4 of them onto 22 pins (22 pins also fits 10 5x7 arrays). We leave these last three as exercises. (solutions 1 2 3 4)

To demonstrate that this approach does in fact work, I rigged up a little game of life on four 8x8 modules running on 22 pins on an AtMega328. After correcting for a problem with the brightness related to the PORTC pins sourcing less current, the display is quite functional -- the scanning is not visible and all lights are of equal brightness. I scan the lights one at a time, but only spend time on those that are on. (The variable frame rate is from the video processing -- the actual device is quite smooth.)

Other packaged LED modules can be laid out similarly. 7-segment displays (8 segments with decimal point) come packaged in "common cathode" and "common anode" configurations, which would be represented as a column of 8 cells, or a row of 8 cells, respectively. Often, four 7-segment displays (8 with decimal) are packaged at once in a multiplexed manner -- these would be represented as a 4x8 or 8x4 block on our grid, depending on whether they were common anode or cathode. RGB LEDs also come packaged in common cathode and anode configurations. For example, here is how one could charlieplex 14 common anode RGB LEDs on 7 pins:

Hardware note: don't blow it up

When driving LEDs with multiplexing or charlieplexing, it is not uncommon to omit current limiting resistors. Since the grid is scanned, only a few LEDs will be on at once, and all LEDs spend most of their time off. If the supply voltage lies between the typical forward voltage and the peak instantaneous voltage, we can figure out the largest acceptable duty cycle and enforce this in software. However, now one must ensure that software glitches do not cause the array scanning to stall, or that LEDs can survive a period of elevated forward voltage. Microcontrollers will have a maximum safe current per IO pin. Sometimes, you can rely on the microcontroller to limit current to this level. Other times, attempting to force more than the maximum rating through a pin will damage the microcontroller. You can ensure that this never happens in software by never turning on more LEDs than a single IO pin can handle. Or, you can use tri-state drivers. If your microcontroller limits over-current, you can probably turn on as many LEDs as you want at once, but they will dim exponentially with the reduction in current per LED.

Combining devices

There is nothing stopping us from combining different types of LED modules, or LEDs and buttons, in our grid. However, buttons conduct in both the forward and backward direction, so they occupy both the anode-cathode and cathode-anode positions for any pair of pins. I represent this as a black and white pair of buttons in the grid drawing. For example, one could get an acceptable calculator with 6 display digits and 21 buttons onto 10 pins if you use a mix of common-cathode and common-anode 7-segment displays like so: You could probably get a pretty decent mini-game using the space left over from Charlie-muxing four 5x7 modules on 16 pins.
There is enough room to fit 17 buttons and 6 7-segment displays (shown as earth-tone strips below):

For the grand finale, we revisit the six 5x7 modules on 18 pins. Apart from giving us a grid large enough to hold classic Tetris, we also have room for 18 buttons, 6 7-segment displays (shown as earth-tone strips below), with 12 single-LED spots left over -- all on 18 pins. On an AtMega, this would leave 5 IO pins free -- enough room to fit a serial interface, piezo speaker, and crystal oscillator. Programming, however, would be a challenge.***

Hardware note: combining different LED colors in one grid

There are problems with combining different LEDs in one grid. If two LEDs with different forward voltages are placed on the same, say, cathode, then the one with the lower forward voltage can hog all the current, and the other LED won't light. I have found that ensuring in software that LEDs with mixed forward voltages are never illuminated simultaneously solves this problem. Also ensure that your largest forward voltage is smaller than twice the lowest forward voltage. For example, if you try to drive a 3.6V white LED in a matrix that contains 1.8V red LEDs, the current may decide to take a shortcut through two red LEDs rather than the white LED. However, it may be possible to ensure that there are no such paths by design. You must ensure that for every 3.6V forward path from pin A to B, there are no two 1.8V forward paths AC and CB for any C.

Driving software

Saving microcontroller pins and soldering time is well and good, but programming for these grids can be a real challenge! Here are some practices (for AVR) that I have found useful.

• Overclock the processor. Most AVRs are configured to 1MHz by default, but can run up to 8MHz even without an external crystal. The AVR fuse calculator is a godsend. Test the program first without overclocking, then raise the clock rate. Ensure that the power supply voltage is high enough for the selected clock rate. If things get dire and you need more speed, you can tweak the OSCCAL register as well.
• Prototype driver code on a device that can be removed and replaced if necessary. Repeated *ucking with the fuses to tweak the clock risks bricking the AVR. It's a shame when you have a bricked AVR soldered in a TQFP package.
• Row-scan the grid. If this places too much current on the IO pins, break each row into smaller pieces that are safe. If too many LEDs are lit on a row and they appear dim, adjust the time the row is displayed to compensate.
• Store the LED state vector in the format that you will use to scan. Write set() and get() methods to access and manipulate this state, mapping the structure of the charlieplexing grid onto the logical structure of your displays. Scanning code is hard enough to get fast and correct without worrying about the abstract logical arrangement of the LED grid.
• Use a single timer interrupt to do all of the scanning. Having multiple recurring timer interrupts along with a main loop can create interesting interference and beat effects in the LED matrix that are hard to debug.
• If there are buttons and LEDs on the same grid, switch to polling the buttons every so often at a fixed interval, and write their state into volatile memory that other threads can query.
• If your display is sparse (e.g. a game of life) you can skip sections that aren't illuminated to get a higher effective refresh rate. If your display is very sparse, and you have a lot of memory to spare, you can even scan LEDs one at a time.
Conclusion

This document outlines how to drive many LED modules from a limited number of microcontroller pins. The savings in part cost and assembly time are offset by increased code complexity. These design practices would be useful for someone who enjoys coding puzzles, or gets a kick out of making microcontrollers do way more than they are supposed to. They could also be useful for reducing part costs and assembly time for mass-produced devices, where the additional time in driver development is offset by the savings in production. I originally worked through these notes when considering how to build easy-to-assemble LED marquee kits, but as I have no means to produce such kits, nor an easy mechanism for selling them, I am leaving the notes up here for general benefit. Also... Charrliee.

_______________________________________________

*If you place a diode in series with a button, you can place two buttons for each unique pair of pins. One can make this diode an LED to create a button-press indicator.

**For those interested, K*(K-1)=N*M solves to K = ceil[ ( 1 + sqrt( 1+4NM ) ) / 2 ]

***I have done this with an 8x16 grid using a maximally overclocked AtMega. It is tricky. To avoid beat effects, sound, display, and button polling are handled with the same timer interrupt. The music is intentionally restricted to notes that can be rendered with low clock resolution. Some day I may even write this up.
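Following the footnote's closed form, here is a tiny Java sketch (function name mine) of the pin-count arithmetic. Note this is the theoretical minimum for discrete LEDs; packaged modules typically need more pins, because they occupy rectangular blocks of the logical grid:

    // Minimum pins K to charlieplex N*M discrete LEDs,
    // from K*(K-1) >= N*M  =>  K = ceil((1 + sqrt(1 + 4*N*M)) / 2).
    static int minCharlieplexPins(int n, int m) {
        return (int) Math.ceil((1.0 + Math.sqrt(1.0 + 4.0 * n * m)) / 2.0);
    }
    // e.g. minCharlieplexPins(8, 8) == 9: 64 discrete LEDs need only 9 pins,
    // whereas the two packaged 8x8 modules placed above required 16.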
2017-10-23 07:56:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34562069177627563, "perplexity": 2590.3510254718003}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825812.89/warc/CC-MAIN-20171023073607-20171023093607-00745.warc.gz"}
http://mpotd.com/226/
Problem of the Day #226: Halloween Madness

October 31, 2011. Posted by Billy in: potd

It’s Halloween, and after Albert finishes his college apps he wants to go trick-or-treating. Albert’s neighborhood consists of 6 houses equally spaced around a circle of radius $1$ kilometer. He starts from his own house, visits the rest of the houses exactly once in a random order, and returns to his house. What is the expected length of Albert’s trick-or-treating path?
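A sketch of one solution, added here (not from the original post): by symmetry, a uniformly random order of the 5 other houses yields a uniformly random Hamiltonian cycle on the 6 vertices, so each of the $\binom{6}{2}=15$ chords is traversed with probability $6/15 = 2/5$. With houses spaced $60^\circ$ apart on a unit circle, the chord lengths are $2\sin(k\pi/6)$ for separations $k=1,2,3$ (six pairs each for $k=1,2$, three for $k=3$), giving by linearity of expectation

$$E[L] = \frac{2}{5}\left(6\cdot 2\sin\tfrac{\pi}{6} + 6\cdot 2\sin\tfrac{\pi}{3} + 3\cdot 2\sin\tfrac{\pi}{2}\right) = \frac{2}{5}\left(6 + 6\sqrt{3} + 6\right) = \frac{12\,(2+\sqrt{3})}{5} \approx 8.96 \text{ km}.$$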
2017-06-26 12:22:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6078522205352783, "perplexity": 2307.1485251750137}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320736.82/warc/CC-MAIN-20170626115614-20170626135614-00634.warc.gz"}
http://www.physicsforums.com/showthread.php?t=215547
# Parametric equation of the intersection between surfaces

Math Emeritus (Sci Advisor, PF Gold): In a situation like that, it is better not to solve for one of the variables. Instead, change $x^2 + y^2 = 1 - y^2$ to $x^2 + 2y^2 = 1$, the equation of an ellipse. Then use the "standard" parameterization of an ellipse: $x = \cos(t)$, $y = \sin(t)/\sqrt{2}$. Then, of course, you can have either $z = \cos^2(t) + (1/2)\sin^2(t)$ or $z = 1 - (1/2)\sin^2(t)$.
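A quick check of this parameterization, added here (assuming, consistently with the equations above, that the intersected surfaces were $z = x^2 + y^2$ and $z = 1 - y^2$):

$$x^2 + 2y^2 = \cos^2(t) + 2\cdot\frac{\sin^2(t)}{2} = 1, \qquad z = \cos^2(t) + \frac{1}{2}\sin^2(t) = 1 - \frac{1}{2}\sin^2(t).$$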
2014-09-03 02:22:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8040303587913513, "perplexity": 1403.9559185880248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535924131.19/warc/CC-MAIN-20140901014524-00350-ip-10-180-136-8.ec2.internal.warc.gz"}
https://rpg.stackexchange.com/questions/166625/can-the-chronurgy-wizards-convergent-future-feature-result-in-a-die-roll-above
# Can the Chronurgy wizard's Convergent Future feature result in a die roll above 20?

In the new Explorer's Guide to Wildemount, we are introduced to a new wizard Arcane Tradition called Chronurgy Magic, whose 14th-level feature Convergent Future says:

> You can peer through possible futures and magically pull one of them into events around you, ensuring a particular outcome. When you or a creature you can see within 60 feet of you makes an attack roll, an ability check, or a saving throw, you can use your reaction to ignore the die roll and decide whether the number rolled is the minimum needed to succeed or one less than that number (your choice).

Say the DC for a Strength ability check is 25, and I have a +2 Strength modifier. Does Convergent Future allow the die roll to be either 23 ("the minimum needed to succeed") or 22 ("one less than that number") even though the maximum possible on a d20 is 20?

• Now I'm wondering whether "the minimum needed to succeed" takes into account modifiers at all, since there are things like bless, temple of the gods, and various other possible modifiers to rolls, even proficiency bonus in some cases – Medix2 Mar 20 at 15:23
• @Medix2: Per a tweet by Matt Mercer, the intent seems to be to include regular modifiers on the check but to allow other modifiers to affect it afterward: "Some creatures/characters have features that can (often as a reaction) adjust a final roll (Shield spell, enemies with Parry, etc). In those rare moments, I imagine, these abilities could still affect the roll after Convergent Future takes place." Seems like it could get confusing. (Obviously that's not an official ruling, and doesn't address this question itself.) – V2Blast Mar 20 at 22:34

## No

If it's not possible to succeed, you don't roll the dice. For attacks, a Natural 20 is always a success; ability checks and saves (excepting Death Saves) do not auto-succeed on a Natural 20. If you want to jump to the Moon, there is no chance you can actually succeed, therefore you don't even make a check. Since no check is rolled, you can't use Convergent Future to modify your roll.

• +1 A good way to think about this is there are no successful futures to pull from, as the task is impossible. – Tal Mar 20 at 16:44
• I am won over by your answer! It might help to cite the rule for using ability checks: "The DM calls for an ability check when a character or monster attempts an action (other than an attack) that has a chance of failure." Since, essentially, you're saying the opposite is true, too (i.e., when there is no chance of success) – Rykara Mar 20 at 17:05
• @Rykara I’m pretty sure there’s another rule for that, but either way, I agree, quote support would be good here. But please don’t use code formatting for quotes, or anything that’s not actually code. Quotation marks work fine, bold or italics if you must, but code formatting can cause problems. – KRyan Mar 20 at 22:30
• This isn't quite true in the questioner's case. There are several ways to add to an ability check, such as using Bardic Inspiration, so it is entirely feasible to succeed at a check that you need to roll above 20 for. – Tektotherriggen Mar 22 at 17:06
• Sorry, let me be clearer: the questioner's situation is DC 25, modifier +2. In normal circumstances the best they can do is 22, so yes, it is unattainable. But suppose they get Bardic Inspiration - they could add a d6 to the roll, potentially getting up to 28 and passing the test.
The DM can't just declare that the DC 25 task was impossible, without a roll, unless they are sure that the party can't rustle up any extra bonuses. – Tektotherriggen Mar 23 at 18:10

## No

> decide … the number rolled

The number rolled can only be a number between 1 and 20. If you need more than 20 to succeed (or less than 1 to fail), then no such number exists on a 20-sided die.
2020-09-26 09:16:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38649114966392517, "perplexity": 1784.7189350584554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400238038.76/warc/CC-MAIN-20200926071311-20200926101311-00198.warc.gz"}
https://socratic.org/questions/is-the-relation-1-1-1-1-2-4-2-4-a-function
Is the relation {(1, 1), (-1, 1), (2, 4), (-2, 4)} a function?

Jun 13, 2015

Answer: Yes. It is a function.

Explanation: A relation fails to be a function if and only if there are 2 pairs with the same $x$ and different $y$'s. That does not happen in your question, so it is a function. (It is not a one-to-one function.)
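A mechanical way to run the same check, sketched here as my own addition (not from the original answer): group the pairs by $x$ and look for a repeated $x$ with a conflicting $y$.

    import java.util.HashMap;
    import java.util.Map;

    public class FunctionCheck {
        public static void main(String[] args) {
            int[][] pairs = {{1, 1}, {-1, 1}, {2, 4}, {-2, 4}};
            Map<Integer, Integer> seen = new HashMap<>();
            boolean isFunction = true;
            for (int[] p : pairs) {
                Integer prevY = seen.putIfAbsent(p[0], p[1]);
                if (prevY != null && prevY != p[1]) isFunction = false; // same x, different y
            }
            System.out.println(isFunction); // prints true: each x maps to exactly one y
        }
    }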
2019-10-21 23:37:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5766944289207458, "perplexity": 427.0102361953432}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987795253.70/warc/CC-MAIN-20191021221245-20191022004745-00209.warc.gz"}
https://gmatclub.com/forum/if-x-and-y-are-both-negative-and-xy-y-2-which-of-the-162011.html
# If x and y are both negative and xy < y^2, which of the following must be true?

Intern (bulletpoint), Joined: 02 Jul 2013, Schools: LBS MIF '15

If x and y are both negative and xy < y^2, which of the following must be true?

a) x < y < x^2 < y^2
b) x < y < y^2 < x^2
c) y < x < x^2 < y^2
d) x^2 < y^2 < y < x
e) y^2 < x^2 < y < x

Math Expert (Bunuel), Joined: 02 Sep 2009

Since $$y$$ is negative, after reducing $$xy < y^2$$ by $$y$$ we get $$x > y$$, which is the same as $$y<x$$. Both x^2 and y^2 will be greater than either x or y. Only C fits.

##### General Discussion

Intern, Joined: 16 Jun 2013

... could please someone explain this? I chose A. Because xy < y^2 and both are negative, I thought x < y. So I crossed off answers c), d), and e). And because x < y, x^2 < y^2, so I crossed off answer b) and picked answer a), which seems to be wrong. Where am I wrong?

Intern (bulletpoint), Joined: 02 Jul 2013

Bunuel wrote: Since $$y$$ is negative, after reducing $$xy < y^2$$ by $$y$$ we get $$x > y$$, which is the same as $$y<x$$. Both x^2 and y^2 will be greater than either x or y. Only C fits.

Is this because -x*-y < y^2, and we are dividing by -y, and that is why we flip the inequality sign after? For me, I got the wrong answer because I solved the following way: since x and y are both negative, xy is positive. Thus we divide a positive y over from xy < y^2 to become x < y^2/y, which becomes x < y. Please help me understand why this is not the case?

Math Expert (Bunuel), Joined: 02 Sep 2009
No. We know that $$y$$ is negative. Divide $$xy < y^2$$ by $$y$$ and flip the sign, because y is negative, to get $$x > y$$. Hope it's clear.

Manager, Joined: 19 Sep 2008, Location: United States (CA)

This is how I understand this (btw, I made the same mistake) but later realized it after plugging in some numbers (when in doubt, plug them in). Say X = -1 and Y = -2. XY => 2 < 4. Check. Now divide both sides by -2. LHS => -1 < -2, but this is not true, so flip the sign: -1 > -2. Check. Now the inequality becomes X > Y. Ans is C.

Manager (aeglorre), Joined: 12 Jan 2013

I used the exact same thought process as you, but after seeing Bunuel's solution I think I understand where my/our intuition is lacking: when it says xy < y^2, you have to remember that both terms on either side of the inequality are POSITIVE. Meaning, change their sign as if they were negative (at least this helps me, because I am used to changing the sign of the inequality when we have negatives). So: (value of x, which is negative)*(value of y, which is negative) = positive, and the same goes for y^2. So simply change their sign so it says (-x)*(-y) < (-y)*(-y) ----> after you've moved the y from left to right you get -x < -y ----> x > y. And from there, you already know that x and y are smaller numbers than their respective squares (since both are negative), thus only (C) works.

Intern, Joined: 19 Jan 2014
I'm not sure if this is a correct method to use, but here is how I solved it. I just replaced x and y with numbers:

a) -2<-1<2<1 ( no )
b) -2<-1<1<2 ( 3 < 1 not sufficient)
c) -2<-1<1<4 ( 3<4 sufficient)

Manager, Joined: 02 Jul 2015, Schools: ISB '18

Hi, how do we get x>y? I am getting y>x:

y^2>xy
y^2-xy>0
y(y-x)>0

So, y>0 or y>x. So as per this, y>x! Can't figure out where it is going wrong!

Intern (shapla), Joined: 29 Aug 2013

aeglorre wrote: ... And from there, you already know that x and y are smaller numbers than their respective squares (since both are negative), thus only (C) works.

Hi aeglorre, I am clear and convinced with y<x. Can you please tell me how x < x^2 < y^2, or x^2 < y^2?

Retired Moderator, Joined: 29 Oct 2013

y^2>xy
y^2-xy>0
y(y-x)>0

Since we are given that y<0, we know that y-x<0 --> y<x. C is the only option that shows y<x, hence it's the answer.

Math Expert (chetan2u), Joined: 02 Aug 2009

shapla wrote: Hi aeglorre, I am clear and convinced with y<x. Can you please tell me how x < x^2 < y^2, or x^2 < y^2?
Hi, we know x>y, and it is given that both are negative. This will mean the numeric value (or distance from 0 on a number line) of y is greater than that of x... for example, if x=-3, then y<-3: -4, -5, etc. Since the numeric value of y is greater than that of x, the square of y will be greater than that of x, as all integers turn positive on squaring. Hope it helped.

Manager (TheMastermind), Joined: 02 Feb 2016

I'm completely dumbfounded here. The approach I tried:

xy<y^2
xy - y^2<0
y(x-y)<0

Either y<0 and x<y, or y>0 and x>y. As y<0, only the first possibility stands and the second one is ruled out. This suggests that x<y!! Where am I making the mistake?

Math Expert (chetan2u), Joined: 02 Aug 2009

Hi.. y(x-y)<0 means ONLY one of y or x-y is NEGATIVE and the other POSITIVE, because (+)*(-) will be negative, i.e. <0. So if y<0, then x-y>0, or x>y..

CEO (GMATPrepNow), Joined: 12 Sep 2015

We have xy < y². If we divide both sides by y, we must REVERSE the inequality sign (because we are dividing by a NEGATIVE value). So we get: x > y. This allows us to ELIMINATE answer choices A and B (since they suggest that x < y). From here, let's PLUG IN values for x and y such that they are both negative AND x > y. Let's try x = -1 and y = -2. This means x² = 1 and y² = 4. When we arrange the four values in ascending order, we get: y < x < x² < y²

Intern, Joined: 30 Nov 2017

Given that xy < y^2, divide both sides by y^2:

xy/y^2 < y^2/y^2
x/y < 1

It is given that x and y are both negative, so x/y < 1 is possible only when y is more negative than x (for example, x = -1, y = -2). So then we know that y < x < x^2 < y^2.

Target Test Prep Representative (Jeffrey Miller), Joined: 04 Mar 2011

bulletpoint wrote: If x and y are both negative and xy < y^2, which of the following must be true?
a) x < y < x^2 < y^2
b) x < y < y^2 < x^2
c) y < x < x^2 < y^2
d) x^2 < y^2 < y < x
e) y^2 < x^2 < y < x

We see that in all the answer choices we have to compare the following four quantities: x, y, x^2 and y^2. We are given that x and y are both negative and xy < y^2. We see that both xy and y^2 are positive, since the product of two negative quantities is positive and the square of a nonzero quantity is always positive. However, when we divide both sides of the inequality by y (a negative quantity), we have: x > y. Since x^2 and y^2 are both positive, we see that y is the smallest of the four quantities. The only answer choice that has y as the smallest quantity is choice C. Thus, it's the correct answer. (Note: We don't have to analyze, in this case, which is the larger quantity between x^2 and y^2, since y has to be the smallest quantity. However, if we had to, it is always true that if y < x < 0, then y^2 > x^2 > 0. For example, -3 < -2, but (-3)^2 > (-2)^2 since 9 > 4.)

Intern (Tanisha_shasha), Joined: 05 Jul 2017

Hello, I had learnt on GMAT Club itself that one shouldn't reduce a quadratic inequality, because then we end up assuming the variables to be non-zero. Is the same rule applicable to linear inequalities too?

Math Expert (Bunuel), Joined: 02 Sep 2009

MULTIPLYING/DIVIDING AN INEQUALITY BY A NUMBER

1. Whenever you multiply or divide an inequality by a positive number, you must keep the inequality sign.
2. Whenever you multiply or divide an inequality by a negative number, you must flip the inequality sign.
3. Never multiply (or reduce) an inequality by a variable (or an expression with a variable) if you don't know its sign, or are not certain that the variable (or expression) doesn't equal zero.

Here we know that $$y$$ is negative.
Divide $$xy < y^2$$ by $$y$$ and flip the sign, because y is negative, to get $$x > y$$. Hope it's clear.

Senior Manager (Probus), Joined: 10 Apr 2018, Location: United States (NC)

Hi Bunuel & chetan2u, I need your help on this. For most of my questions I have used the property that we can move a term from one side of an inequality to the other without changing the sign of the inequality. Here is what I mean: say if x<y, then we can say x-y<0. However, if I use the same property here, I don't get the result. Where am I going wrong?

We are given that x<0, y<0, and also xy<$$y^2$$, so we can write:

xy - $$y^2$$<0
y(x-y)<0

So we have y<0 and/or x-y<0, or we can say y<0 and/or x<y, or we can say x<y<0. Then as per this we get option B. I understand the solution given by Bunuel, but please help me correct the error in my understanding of how I solved. Thanks, Probus
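For readers who like brute-force verification, here is a small Java sketch (my own, not from the thread) that samples negative pairs satisfying the given condition and checks that option C holds for each:

    import java.util.Random;

    public class InequalityCheck {
        public static void main(String[] args) {
            Random rnd = new Random(42);
            for (int i = 0; i < 1_000_000; i++) {
                double x = -10 * rnd.nextDouble() - 1e-9; // strictly negative
                double y = -10 * rnd.nextDouble() - 1e-9;
                if (x * y < y * y) { // the given condition
                    if (!(y < x && x < x * x && x * x < y * y))
                        throw new AssertionError("counterexample: x=" + x + ", y=" + y);
                }
            }
            System.out.println("y < x < x^2 < y^2 held in every sampled case");
        }
    }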
2019-06-20 19:25:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6572527289390564, "perplexity": 1440.7209871795742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999273.24/warc/CC-MAIN-20190620190041-20190620212041-00144.warc.gz"}
http://15462.courses.cs.cmu.edu/fall2021/lecture/vectorcalc/slide_018
Previous | Next --- Slide 18 of 52

Back to Lecture Thumbnails

manchas: Are approximations ever used to approximate the derivative at certain points?

minhsual: How do we deal with functions that are not differentiable in CG?

goose_r_s: How do we compute derivatives on computers in practice? Do we use a very small epsilon?

jefftan: How do we compute the derivative for discretized images where we cannot use the limit definition? Do we weight closer points more than farther-away points when computing the slope using nearby points?
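These questions come up in practice; as a rough illustration (not taken from the course materials), derivatives are often approximated numerically with a central difference and a small but not tiny step h. The function and step size below are arbitrary choices:

    import java.util.function.DoubleUnaryOperator;

    public class CentralDifference {
        // Approximates f'(x) as (f(x+h) - f(x-h)) / (2h).
        // Truncation error shrinks as O(h^2), but h cannot be made arbitrarily
        // small: floating-point cancellation grows as h shrinks, so a moderate
        // h (around 1e-5 for doubles) is a common compromise.
        static double derivative(DoubleUnaryOperator f, double x, double h) {
            return (f.applyAsDouble(x + h) - f.applyAsDouble(x - h)) / (2 * h);
        }

        public static void main(String[] args) {
            double approx = derivative(Math::sin, 1.0, 1e-5);
            System.out.println(approx);          // close to cos(1.0) = 0.5403...
            System.out.println(Math.cos(1.0));
        }
    }

For discretized images, where only pixel samples are available, the same idea is applied with h equal to one pixel spacing, which is one common answer to the last question above.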
2022-09-24 21:54:48
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9010917544364929, "perplexity": 947.9891289203943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00649.warc.gz"}
https://codereview.stackexchange.com/questions/86894/java-a-algorithm-implementation-performance
# Java A* Algorithm Implementation Performance

I've coded a working implementation of the A* algorithm; however, it's not meeting my performance expectations - it explores way too many nodes and takes a lot longer to find a route than I expect.

    private Vertex dijkstra(Vertex startLocation, Vertex endLocation, int routetype)
    {
        Long startTime = System.nanoTime();
        Vertex vertexNeighbour;
        startLocation.setTentativeDistance(0);
        startLocation.setH(heuristic(endLocation, startLocation));
        startLocation.setF(startLocation.getH() + startLocation.getTentativeDistance());
        startLocation.from = startLocation;
        pqOpen.AddItem(startLocation);
        while (!(pqOpen.IsEmpty()))
        {
            tempVertex = pqOpen.GetNextItem();
            if (tempVertex == null || tempVertex.getTentativeDistance() == Double.POSITIVE_INFINITY)
            {
                //System.out.println("Route calculation time: " + ((System.nanoTime() - startTime)/1000000) + " ms");
                return null;
            }
            else if (tempVertex.city == endLocation.city)
            {
                for (int i = 0; i < pqClosed.queueSize; i++)
                {
                    for (int z = 0; z < pqClosed.QueueArray[i].neighbors.GetNoOfItems(); z++)
                    {
                        pqClosed.QueueArray[i].neighbors.GetItem(z).visited = false;
                    }
                }
                //System.out.println("Route calculation time: " + ((System.nanoTime() - startTime)/1000000) + " ms");
                return tempVertex;
            }
            else
            {
                for (int i = 0; i < tempVertex.neighbors.GetNoOfItems() && tempVertex.neighbors.GetItem(i).visited == false; i++) //for each neighbor of tempVertex
                {
                    tempEdge = tempVertex.neighbors.GetItem(i);
                    tempVertex.neighbors.GetItem(i).visited = true;
                    vertexNeighbour = allVertices.GetItem(binarySearch(allVertices, 0, allVertices.GetNoOfItems(), tempEdge.toid));
                    nodesVisited++;
                    boolean boolClosed = false;
                    //if the neighbor is in closed set, move to next neighbor
                    for (int z = 0; z < pqClosed.GetQueueSize(); z++)
                    {
                        if (pqClosed.QueueArray[z].city == vertexNeighbour.city)
                        {
                            boolClosed = true;
                        }
                    }
                    if (boolClosed)
                    {
                        continue;
                    }
                    double temp_g_score = (tempVertex.getTentativeDistance() + tempEdge.distance);
                    //checks if neighbor is in open set
                    boolean foundNeighbor = false;
                    for (int z = 0; z < pqOpen.GetQueueSize() && foundNeighbor == false; z++)
                    {
                        if (pqOpen.QueueArray[z].city == vertexNeighbour.city)
                        {
                            foundNeighbor = true;
                        }
                    }
                    if (!(foundNeighbor) || temp_g_score < vertexNeighbour.getTentativeDistance())
                    {
                        vertexNeighbour.from = tempVertex;
                        vertexNeighbour.setTentativeDistance(temp_g_score);
                        //calculate H once, store it and then do an if statement to see if it's been used before - if true, grab from memory, else calculate.
                        if (vertexNeighbour.getH() == 0)
                            vertexNeighbour.setH(heuristic(endLocation, vertexNeighbour));
                        if (routetype == 2)
                            vertexNeighbour.setF(((vertexNeighbour.getH() + vertexNeighbour.getTentativeDistance())*(0.000621371192) / tempEdge.speedlimit));
                        else
                            vertexNeighbour.setF(vertexNeighbour.getH() + vertexNeighbour.getTentativeDistance());
                        if (!(foundNeighbor)) // if neighbor isn't in open set, add it to open set
                        {
                            pqOpen.AddItem(vertexNeighbour);
                        }
                    }
                }
            }
        }
        return null;
    }

    private double heuristic(Vertex goal, Vertex next)
    {
        return (Math.sqrt(Math.pow((goal.x - next.x), 2) + Math.pow((goal.y - next.y), 2)))*1.10;
    }

Here is my Vertex class:

    public class Vertex
    {
        public int city, x, y;
        public boolean visited = false;
        public EdgeVector neighbors;
        private double f; // f = tentativeDistance + h
        private double tentativeDistance; // tentativeDistance is distance from the source
        private double h; // h is the heuristic of destination.
        public Vertex from;

        public Vertex(int city, int x, int y)
        {
            this.city = city;
            this.x = x;
            this.y = y;
            this.neighbors = new EdgeVector();
            this.tentativeDistance = Double.POSITIVE_INFINITY;
            this.f = Double.POSITIVE_INFINITY;
        }

        { }

        public double getTentativeDistance() { return tentativeDistance; }
        public void setTentativeDistance(double g) { this.tentativeDistance = g; }
        public double getF() { return f; }
        public void setF(double f) { this.f = f; }
        public double getH() { return h; }
        public void setH(double h) { this.h = h; }
    }

and my Edge class:

    public class Edge
    {
        public String label;
        public int fromid, toid, speedlimit;
        public double distance;
        public boolean visited;

        public Edge(String label, int fromid, int toid, double distance, int speedlimit)
        {
            this.label = label;
            this.fromid = fromid;
            this.toid = toid;
            this.distance = distance;
            this.speedlimit = speedlimit;
            this.visited = false;
        }
    }

Lastly, I use my own Vector class:

    public class Vector
    {
        private static final int MAXDISPLAY = 20;
        private int growby;
        private int noofitems;
        private Object[] data;
        private static final int MINGROW = 10;

        public Vector() { Init(10); }
        public Vector(int initsize) { Init(initsize); }

        private void Init(int initsize)
        {
            growby = initsize;
            if (growby < MINGROW) growby = MINGROW;
            noofitems = 0;
            data = new Object[initsize];
        }

        public int GetNoOfItems() { return noofitems; }

        public Object GetItem(int index)
        {
            return (Object) (index >= 0 && index < noofitems ? data[index] : null);
        }

        public void AddItem(Object item)
        {
            if (noofitems == data.length) GrowDataStore();
            data[noofitems++] = item;
        }

        public boolean InsertItem(int index, Object item)
        {
            if (index >= 0 && index <= noofitems)
            {
                if (noofitems == data.length) GrowDataStore();
                for (int i = noofitems; i > index; i--) data[i] = data[i-1];
                data[index] = item;
                ++noofitems;
                return true;
            }
            else return false;
        }

        public boolean DeleteItem(int index)
        {
            if (index >= 0 && index < noofitems)
            {
                --noofitems;
                for (int i = index; i < noofitems; i++) data[i] = data[i+1];
                return true;
            }
            else return false;
        }

        public void ResetData(int[] items)
        {
            if (items != null && items.length == noofitems)
                System.arraycopy(items, 0, data, 0, noofitems);
        }

        public void Swap(int index1, int index2)
        {
            if (index1 >= 0 && index1 < noofitems && index2 >= 0 && index2 < noofitems)
            {
                Object tmp = data[index1];
                data[index1] = data[index2];
                data[index2] = tmp;
            }
        }

        private void GrowDataStore()
        {
            Object[] tmp = new Object[noofitems + growby];
            System.arraycopy(data, 0, tmp, 0, noofitems);
            data = tmp;
        }

        public void Randomise()
        {
            for (int i = 0; i < noofitems; i++)
            {
                int pos = (int) (Math.random() * noofitems);
                Swap(i, pos);
            }
        }

        public String toString()
        {
            StringBuilder str = new StringBuilder();
            str.append('[');
            if (noofitems > 0) str.append(data[0]);
            int max = (noofitems < MAXDISPLAY ? noofitems : MAXDISPLAY);
            for (int i = 1; i < max; i++)
            {
                str.append(", ");
                str.append(data[i]);
            }
            if (noofitems > MAXDISPLAY)
            {
                str.append(", ...(");
                str.append(noofitems - MAXDISPLAY);
                str.append(')');
            }
            str.append(']');
            return str.toString();
        }
    }

Please note that I have EdgeVector and VertexVector classes that are simply alterations of the above Vector class (replace Object with Edge or Vertex) - I didn't post those two as they would take up a lot of the post space. I do this as it is more efficient than casting an Object to either Vertex or Edge in the algorithm.

I'm hoping that somebody could take a quick look at my code and see where I may be going wrong - the code does work and produces correct results, however it explores significantly more nodes than I expect and want, which therefore increases run-time substantially.
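(For context on the container duplication just described: Java generics would give one container for both element types without per-call casting in user code. A minimal sketch with an illustrative class name, not the poster's code:)

    public class GenericVector<T> {
        private Object[] data = new Object[10];
        private int count = 0;

        // Appends an item, growing the backing array when full.
        public void AddItem(T item) {
            if (count == data.length) {
                Object[] tmp = new Object[count * 2];
                System.arraycopy(data, 0, tmp, 0, count);
                data = tmp;
            }
            data[count++] = item;
        }

        // Returns the item at index, or null when out of range. The unchecked
        // cast compiles to the same bytecode a manual cast would; generics
        // mainly buy type safety and one shared implementation.
        @SuppressWarnings("unchecked")
        public T GetItem(int index) {
            return (index >= 0 && index < count) ? (T) data[index] : null;
        }

        public int GetNoOfItems() { return count; }
    }

    // Usage: GenericVector<Edge> edges = new GenericVector<>(); edges.AddItem(tempEdge);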
It took me a long time to code this as I'm a novice, and I've spent the best part of a full day trying to see what I've done incorrectly that is causing the problem, but I can't spot anything.

My own implementation of PQ:

    public class PriorityQueue
    {
        Vertex[] QueueArray = new Vertex[10];
        int queueSize = 0;

        // Default Constructor
        public PriorityQueue()
        {
        }

        // Returns true if the priority queue is empty, else false
        public boolean IsEmpty()
        {
            if (queueSize == 0)
            {
                return true;
            }
            else
            {
                return false;
            }
        }

        // Returns the number of items in the queue
        public int GetQueueSize()
        {
            return queueSize;
        }

        public Vertex removeVertex(int info)
        {
            int i = 0;
            boolean found = false;
            Vertex data = null;
            for (i = 0; i < QueueArray.length && found == false; i++)
            {
                if (QueueArray[i].city == info)
                {
                    found = true;
                    data = QueueArray[i];
                }
            }
            if (found == true)
            {
                for (i = i; i < QueueArray.length - 1; i++)
                {
                    QueueArray[i] = QueueArray[i+1];
                }
                QueueArray[queueSize] = null;
                queueSize--;
            }
            return data;
        }

        public void AddItem(Vertex item)
        {
            if (queueSize == QueueArray.length)
            {
                Vertex[] QueueArray2 = new Vertex[queueSize*2];
                System.arraycopy(QueueArray, 0, QueueArray2, 0, queueSize);
                QueueArray = QueueArray2;
            }
            if (queueSize == 0)
            {
                QueueArray[queueSize] = item; // insert at 0
                queueSize++;
            }
            else
            {
                int index = queueSize;
                //Vertex newNode = new Vertex(item, priority);
                QueueArray[index] = item;
                queueSize++;
                int parent = (index-1)/2;
                while (index != 0 && QueueArray[index].getF() < QueueArray[parent].getF())
                {
                    // swap parent and index items
                    Vertex temp = QueueArray[parent];
                    QueueArray[parent] = QueueArray[index];
                    QueueArray[index] = temp;
                    index = parent;
                    parent = (index-1)/2;
                }
            }
        }

        public Vertex GetNextItem()
        {
            if (queueSize == 0)
            {
                return null;
            }
            Vertex temp = QueueArray[0];
            --queueSize;
            if (queueSize > 0)
            {
                QueueArray[0] = QueueArray[queueSize];
                swapNodes(0);
            }
            QueueArray[queueSize] = null;
            return temp;
        }

        public void swapNodes(int root)
        {
            int child;
            if ((2*root+1) >= queueSize)
            {
                child = root; //no children
            }
            else if ((2*root)+2 == queueSize)
            {
                child = (2*root)+1;
            }
            else if (QueueArray[(2*root)+1].getF() < QueueArray[(2*root)+2].getF())
            {
                child = (2*root)+1; //left child
            }
            else
            {
                child = (2*root)+2; //right child
            }
            //swap the nodes around
            if (QueueArray[root].getF() > QueueArray[child].getF())
            {
                Vertex temp = QueueArray[root];
                QueueArray[root] = QueueArray[child];
                QueueArray[child] = temp;
                swapNodes(child);
            }
        }

        public void siftUp(int root)
        {
            while (root > 0 && root < queueSize && (QueueArray[root].getF() < QueueArray[(root-1)/2].getF()))
            {
                Vertex temp = QueueArray[root];
                QueueArray[root] = QueueArray[(root-1)/2];
                QueueArray[(root-1)/2] = temp;
                root = (root-1)/2;
            }
        }

        public String toString()
        {
            return super.toString();
        }
    }

The program is a sat-nav: there are many vertices, which can have edges connecting them. Each vertex's neighbors are stored in an EdgeVector within the Vertex class itself. It would probably be easier if I could upload a zip of my program, because I test it by reading Edge and Vertex data from two .bat files (they contain 250,000 pieces of data each); however, I presume there are rules against that for the safety of users?

• and pqOpen is a home brew priority queue? – ratchet freak Apr 14 '15 at 18:02
• Oops, yeah it is, I forgot to include that - I'll edit my post with the implementation of that – Craig Apr 14 '15 at 18:03
• What kind of problems are you trying to solve with this code? Can you give us some example data (or a program that computes some example data) that illustrates the performance problem?
– Gareth Rees Apr 14 '15 at 18:03
• @GarethRees I've updated my post with a bit of information about it - the test data can be very simple, such as creating vertices with incrementing data; however, my program's performance lacks when there are, say, a few thousand vertices and edges. The performance becomes extremely slow (compared to what I need) when I enter the hundreds of thousands. – Craig Apr 14 '15 at 18:12

When you update the vertex's estimated distance, you should signal the openQueue to update its position in the backing array.

    if (!(foundNeighbor)) // if neighbor isn't in open set, add it to open set
    {
        pqOpen.AddItem(vertexNeighbour);
    }
    else
    {
        pqOpen.UpdateItem(vertexNeighbour);
    }

This means that you'd need to find it again in the priority queue and shift it up if needed. So you may as well remove the for loop and change the name to AddOrUpdateItem.

This can be made more efficient by first collecting all changed neighbours outside the loop over all neighbours, and then submitting the collection of changed vertices. This reduces the loop over all vertices in the open set to once per pqOpen.GetNextItem(). Even that loop can be avoided by using a secondary index (like a HashMap) that will let you find the index of each changed vertex in constant time.

• Ahh thank you, I shall change this when I get back home. You said "So you may as well remove the for loop and change the name to AddOrUpdateItem" - which for loop do you mean? Also, change the name of what to AddOrUpdateItem? Thank you – Craig Apr 15 '15 at 10:32
• @Craig where you set foundNeighbor to true. And I meant AddItem, if you add functionality to update when it is already contained. – ratchet freak Apr 15 '15 at 10:36

I don't know if this is the main source of your problem, but this part definitely jumps out at me:

    //if the neighbor is in closed set, move to next neighbor
    for (int z = 0; z < pqClosed.GetQueueSize(); z++)
    {
        if (pqClosed.QueueArray[z].city == vertexNeighbour.city)
        {
            boolClosed = true;
        }
    }

Since pqClosed grows as you traverse the graph, this loop will become more and more expensive. Consider using a Set or a Map instead of whatever pqClosed is.

• Thank you for pointing that out. I have updated my code so that it now adds the visited vertices to a Vector and then orders the vector using quicksort. I then call a simple binary search on the vector, and if the result is greater than -1 (it's been found) I move on to the next neighbour. However, the initial problem still remains of it exploring too many vertices. – Craig Apr 15 '15 at 8:07
• Sorting the list every time you add something? That's not any better. Why are you trying to reinvent the wheel? Use HashSet<Vertex> or HashMap<Integer, Vertex> (a map of city to vertex). – Misha Apr 15 '15 at 10:37
• I'm not familiar with hash maps; however, I will learn them once I get back home and implement that - I guess my attempt earlier was due to my lack of knowledge. Thank you for the advice :) – Craig Apr 15 '15 at 10:44
• Just to keep you updated: I have been able to implement a hash map, which means my code runs much more efficiently! I've also been able to use a hash map for my collection of Edges, again meaning my start-up time is more efficient, so thank you! – Craig Apr 16 '15 at 19:32

There are several issues with this code (beyond being hard to read for me):
1. When SetF updates the estimate, the priority queue does not always update (particularly if the node was already there); this could cause more nodes to be traversed than needed.
2. I can guarantee that the code does not always produce a correct result, because the heuristic can yield a larger value than the actual path length.
3. The F calculation for routetype == 2 just makes no sense.

The issues above can cause more nodes to be traversed or an incorrect result to be yielded. But there are more issues you should consider fixing:
1. Writing your own containers is a waste of time (unless you have a data structures class, which is obviously not the case here). Use the standard Java classes for this; they are generic, so one can have List<Vertex> and List<Edge> instead of copies of the same code for different elements.
2. Some issues with naming. Particularly, dijkstra not actually being Dijkstra, and pqOpen having a weird prefix disclosing an implementation detail.
3. Consider splitting the dijkstra function (after renaming it) into several more concise ones. Right now it is a mess.
4. It looks like you do much more searching than needed, though I do not have complete knowledge of your domain. Consider putting Vertex references into edges. And consider adding boolean fields (like visited) to Vertex, so you can ask a Vertex whether it has already been put into the closed or open set.

• Thanks for the feedback. For 1) are you referring to what Ratchet Freak said in their answer? 2) I don't quite understand what you mean by this; my function heuristic calculates that - and I was told to always overestimate the distance, so I added a small adjustment - even if I remove this adjustment I get the same result. Could you possibly expand that sentence so I can understand it better? 3) I have two route calculation types - if routetype == 1 it's based on shortest distance, 2 == shortest time, hence the calculation involving speed and distance – Craig Apr 15 '15 at 10:35
• For the other issues you mentioned: 1) unfortunately I do have to write all my containers myself - this is a piece of work and I'm not allowed to use the Java standard library. 2) I understand the confusion; I'll try to stick to a standard naming convention from now on - I use the name dijkstra because I modified my implementation of that and forgot to edit the name. 3) That's a good idea, I'll do that later on! 4) I'll definitely do that - I can see I do loop through my data structures a lot. – Craig Apr 15 '15 at 10:37
• 1) Yes, this is the same as Ratchet Freak said. I hadn't seen his answer when I began to write my own. – GeniusIsme Apr 15 '15 at 11:29
• 2) The A* algorithm is only guaranteed to find a shortest path if the heuristic used is admissible - i.e., it provides values less than or equal to the actual distance. This small adjustment you've made may not cause trouble on particular data, but that is not guaranteed. – GeniusIsme Apr 15 '15 at 11:39
• 3) This still doesn't make sense. The magic number does not change anything at all, and speedLimit will ruin everything if it differs along edges. Consider: all edges have speedLimit == 1 and one very short edge has speedLimit == 100. The path found will contain this edge while the time spent walking it may be quite large. – GeniusIsme Apr 15 '15 at 11:39
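To make the closed-set suggestion above concrete, here is a minimal sketch (illustrative only; the class and variable names are assumptions, not the accepted code). Membership tests on a HashSet are O(1) on average, versus the O(n) scan over pqClosed.QueueArray:

    import java.util.HashSet;
    import java.util.Set;

    public class ClosedSetSketch {
        public static void main(String[] args) {
            Set<Integer> closed = new HashSet<>(); // city ids already finalized

            int cityId = 42; // would come from vertexNeighbour.city
            if (!closed.contains(cityId)) {
                // expand the neighbor here, then mark it as finalized
                closed.add(cityId);
            }
            System.out.println(closed.contains(cityId)); // true
        }
    }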
2020-05-31 00:34:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18340037763118744, "perplexity": 4613.536378926391}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347410535.45/warc/CC-MAIN-20200530231809-20200531021809-00220.warc.gz"}
https://math.stackexchange.com/questions/1902537/is-this-the-basis-for-the-topology-of-the-complex-projective-plane
# Is this the basis for the topology of the complex projective plane?

I am doing this as a thought exercise to test my understanding of the quotient topology. Anyway, given an open set $U \subset \mathbb{CP}^2$, if $\pi: \mathbb{C}^3\setminus\{(0,0,0)\} \to \mathbb{CP}^2$ is the defining quotient map, then $\pi^{-1}(U)$ is open. Moreover, since each point of $\mathbb{CP}^2$ is an equivalence class equal to a line through the origin $(0,0,0)$ minus the origin, the fibers of $\pi$ must be lines through the origin minus the origin. Thus the preimage of any set in $\mathbb{CP}^2$ must be a union of lines through the origin minus the origin, including the preimage of any open set in $\mathbb{CP}^2$, which thus also must be open in $\mathbb{C}^3$. The only open sets of this form which I could think of would be "double cones without boundary", for example the union of the first and third quadrants minus the $x$ and $y$ axes and the origin.

Question: Do these "double cones" form a basis for the topology of $\mathbb{CP}^2$? This would be a nice way to visualize the topology if it were true. I cannot think of a simple way to verify it, however. Also, if I wanted to write this basis for the topology in homogeneous coordinates, would it consist of sets of the form (for $(x,y) \in \mathbb{C^2}$ and $(a:b:0)$ in the line at infinity): $$( B_{\varepsilon}(x) :B_{\varepsilon}(y) : 1 ) \quad and \quad (a:b:B_{\varepsilon}(0)) ?$$ $B_{\varepsilon}(z)$ is the ball of radius $\varepsilon$ centered at the point $z \in \mathbb{C}$.

## Open sets

I have never worked with topological bases before, so I'm working with what Wikipedia tells me. The double cones you describe are indeed reasonable preimages of open sets in $\mathbb{CP}^2$. They are of course not the only ones, since you can build unions of these, but that's probably what you meant anyway. Now to show that what you'd write down is a basis, according to Wikipedia you have to show two things.

1. These sets cover $\mathbb{CP}^2$. Sure: for any finite point you have a neighborhood of the first form, and for every infinite point you have one of the second form.
2. For two such sets $B_1,B_2$, any point $x$ in the intersection $I=B_1\cap B_2$ must belong to a basis set $B_3\subseteq I$. You'd choose your basis element $B_3$ to be a neighborhood of $x$. To show that there exists a suitable $\varepsilon$, observe that $B_1$ and $B_2$ are open sets, so any point outside these sets has to be a positive "distance" away from $x$.

## My suggestion for a basis

But this definition of "distance" might be a bit hard to make rigorous in this context. What metric would you use? In order to address that problem, I'd suggest a different basis, namely one which does not depend on a distinguished line at infinity or a specific coordinate system for $\mathbb{C}^3$. Since you want to describe neighborhoods, I'd start with a "neighborhood" of radius zero and ask you this: when do two vectors $v$ and $w$ describe the same point in $\mathbb{CP}^2$? That is the case if they point in the same (or opposite) direction, which depends on the angle between them. In $\mathbb{RP}^2$ you can check this using the scalar product (dot product): the two vectors describe the same point if and only if $$\langle v,w\rangle^2 = \langle v,v\rangle\langle w,w\rangle$$ since in that case both sides of the equation equal the product of the squared lengths of the vectors. Otherwise the left hand side falls short by a factor of $\cos^2\alpha$, with $\alpha$ the angle between the vectors. So when are two vectors almost equal?
If $$\langle v,w\rangle^2 > \langle v,v\rangle\langle w,w\rangle \cdot(1-\varepsilon)$$ So given a vector $v$ and a "radius" $\varepsilon$, this will define the set of vectors $w$ in the open neighborhood of $v$.

How well does this translate to $\mathbb C$? Well, for one thing you will have to take the standard sesquilinear product (i.e. $\langle v,w\rangle=\bar v_1w_1+\bar v_2w_2+\bar v_3w_3$) to ensure that the right hand side really represents something like real lengths. And for the left hand side you should use a sesquilinear product and its conjugate, which is equal to the argument-swapped version: $$\langle v,w\rangle\langle w,v\rangle > \langle v,v\rangle\langle w,w\rangle \cdot(1-\varepsilon)$$ Then you can show that you get equality only if $w=zv$ for some $z\in\mathbb C$, and almost-equality for small $\varepsilon$. Now you can again check coverage and the intersection property, as above.

1. If you choose $\varepsilon>1$ then any $v$ will give you a basis element which contains all of $\mathbb{CP}^2$. Or you observe that around any point $v$ you have a whole non-empty bunch of neighborhoods in your basis.
2. This boils down to the same "distance" argument as above, only this time you may have a better notion of what a "distance" actually is.
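As a quick check of the suggested condition (my own verification, not part of the original answer): for $w = zv$ with $z \in \mathbb{C}$, sesquilinearity gives

$$\langle v, zv\rangle\langle zv, v\rangle = \left(z\langle v,v\rangle\right)\left(\bar z\langle v,v\rangle\right) = |z|^2\langle v,v\rangle^2 = \langle v,v\rangle\langle zv, zv\rangle,$$

so equality holds and the strict inequality is satisfied for every $\varepsilon > 0$. Conversely, by the Cauchy-Schwarz inequality the left hand side never exceeds the right hand side, so for small $\varepsilon$ only vectors nearly proportional to $v$ lie in the neighborhood.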
2020-03-31 20:26:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9277161955833435, "perplexity": 115.96150805547894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370503664.38/warc/CC-MAIN-20200331181930-20200331211930-00175.warc.gz"}
https://www.groundai.com/project/luck-matters-understanding-training-dynamics-of-deep-relu-networks2937/3
# Luck Matters: Understanding Training Dynamics of Deep ReLU Networks

Yuandong Tian, Tina Jiang, Qucheng Gong, Ari Morcos
Facebook AI Research
{yuandong, tinayujiang, qucheng, arimorcos}@fb.com

###### Abstract

We analyze the dynamics of training deep ReLU networks and their implications on generalization capability. Using a teacher-student setting, we discovered a novel relationship between the gradient received by hidden student nodes and the activations of teacher nodes for deep ReLU networks. With this relationship and the assumption of small overlapping teacher node activations, we prove that (1) student nodes whose weights are initialized to be close to teacher nodes converge to them at a faster rate, and (2) in over-parameterized regimes and the 2-layer case, while a small set of lucky nodes do converge to the teacher nodes, the fan-out weights of other nodes converge to zero. This framework provides insight into multiple puzzling phenomena in deep learning like over-parameterization, implicit regularization, lottery tickets, etc. We verify our assumption by showing that the majority of BatchNorm biases of pre-trained VGG11/13/16/19 models are negative. Experiments on (1) random deep teacher networks with Gaussian inputs, (2) a teacher network pre-trained on CIFAR-10, and (3) extensive ablation studies validate our multiple theoretical predictions. Code is available at https://github.com/facebookresearch/luckmatters.

Preprint. Work in progress.

## 1 Introduction

Although neural networks have made strong empirical progress in a diverse set of domains (e.g., computer vision [17, 33, 11], speech recognition [12, 1], natural language processing [23, 4], and games [31, 32, 36, 24]), a number of fundamental questions still remain unsolved. How can Stochastic Gradient Descent (SGD) find good solutions to a complicated non-convex optimization problem? Why do neural networks generalize? How can networks trained with SGD fit both random noise and structured data [39, 18, 25], but prioritize structured models, even in the presence of massive noise [28]? Why are flat minima related to good generalization? Why does over-parameterization lead to better generalization [26, 40, 34, 27, 20]? Why do lottery tickets exist [7, 8]?

In this paper, we propose a theoretical framework for multilayered ReLU networks. Based on this framework, we try to explain these puzzling empirical phenomena with a unified view. We adopt a teacher-student setting where the label provided to an over-parameterized deep student ReLU network is the output of a fixed teacher ReLU network of the same depth and unknown weights (Fig. 1(a)). Here over-parameterization means that at each layer, the number of nodes in the student network is larger than the number of nodes in the teacher network. In this perspective, hidden student nodes are randomly initialized with different activation regions (Fig. 2(a)). During optimization, student nodes compete with each other to explain teacher nodes. From this setting, Theorem 4 shows that lucky student nodes which have greater overlap with teacher nodes converge to those teacher nodes at a fast rate, resulting in winner-take-all behavior.
Furthermore, Theorem 5 shows that in the 2-layer case, if a subset of student nodes are close to the teachers’, they converge to them and the fan-out weights of other irrelevant nodes of the same layer vanishes. With this framework, we try to intuitively explain various neural network behaviors as follows: Fitting both structured and random data. Under gradient descent dynamics, some student nodes, which happen to overlap substantially with teacher nodes, will move into the teacher node and cover them. This is true for both structured data that corresponds to small teacher networks with few intermediate nodes, or noisy/random data that correspond to large teachers with many intermediate nodes. This explains why the same network can fit both structured and random data (Fig. 2(a-b)). Over-parameterization. In over-parameterization, lots of student nodes are initialized randomly at each layer. Any teacher node is more likely to have a substantial overlap with some student nodes, which leads to fast convergence (Fig. 2(a) and (c), Thm. 4), consistent with [7, 8]. This also explains that training models whose capacity just fit the data (or teacher) yields worse performance [20]. Flat minima. Deep networks often converge to “flat minima” whose Hessian has a lot of small eigenvalues [29, 30, 22, 3]. Furthermore, while controversial [5], flat minima seem to be associated with good generalization, while sharp minima often lead to poor generalization [13, 15, 37, 21]. In our theory, when fitting with structured data, only a few lucky student nodes converge to the teacher, while for other nodes, their fan-out weights shrink towards zero, making them (and their fan-in weights) irrelevant to the final outcome (Thm. 5), yielding flat minima in which movement along most dimensions (“unlucky nodes”) results in minimal change in output. On the other hand, sharp minima is related to noisy data (Fig. 2(d)), in which more student nodes match with the teacher. Implicit regularization. On the other hand, the snapping behavior enforces winner-take-all: after optimization, a teacher node is fully covered (explained) by a few student nodes, rather than splitting amongst student nodes due to over-parameterization. This explains why the same network, once trained with structured data, can generalize to the test set. Lottery Tickets. Lottery Tickets [7, 8, 41] is an interesting phenomenon: if we reset “salient weights” (trained weights with large magnitude) back to the values before optimization but after initialization, prune other weights (often of total weights) and retrain the model, the test performance is the same or better; if we reinitialize salient weights, the test performance is much worse. In our theory, the salient weights are those lucky regions ( and in Fig. 3) that happen to overlap with some teacher nodes after initialization and converge to them in optimization. Therefore, if we reset their weights and prune others away, they can still converge to the same set of teacher nodes, and potentially achieve better performance due to less interference with other irrelevant nodes. However, if we reinitialize them, they are likely to fall into unfavorable regions which cannot cover teacher nodes, and therefore lead to poor performance (Fig. 3(c)), just like in the case of under-parameterization. Recently, Supermask [41] shows that a supermask can be found from winning tickets. If it is applied to initialized weights, the network without training gives much better test performance than chance. 
This is also consistent with the intuitive picture in Fig. 3(b).

## 2 Mathematical Framework

Notation. Consider a student network and its associated teacher network (Fig. 1(a)). Denote the input as . For each node , denote as the activation, as the ReLU gating (for the top layer, and are always ), and as the backpropagated gradient, all as functions of . We use the superscript to represent a teacher node (e.g., ). Therefore, never appears, as teacher nodes are not updated. We use to represent the weight between node and in the student network. Similarly, represents the weight between node and in the teacher network. We focus on multi-layered ReLU networks. We use the following equality extensively: . For ReLU node , we use as the activation region of node .

Teacher network versus dataset. The reason why we formulate the problem using a teacher network rather than a dataset is the following: (1) It leads to a nice and symmetric formulation for multi-layered ReLU networks (Thm. 1). (2) A teacher network corresponds to an infinite-size dataset, which separates finite-sample issues from the inductive bias in the dataset, which corresponds to the structure of the teacher network. (3) If student weights can be shown to converge to teacher ones, a generalization bound for the student naturally follows. (4) The label complexity of data generated from a teacher is automatically reduced, which could lead to a better generalization bound. On the other hand, a bound for an arbitrary function class can be hard.

Objective. We assume that both the teacher and the student output probabilities over classes. We use the output of the teacher as the input of the student. At the top layer, each node in the student corresponds to each node in the teacher. Therefore, the objective is:

$$\min_w J(w) = \frac{1}{2}\mathbb{E}_x\left[\|f_c(x) - f_{c^\circ}(x)\|^2\right] \qquad (1)$$

By the backpropagation rule, we know that for each sample , the (negative) gradient . The gradient gets backpropagated until the first layer is reached. Note that here, the gradient sent to node is correlated with the activation of the corresponding teacher node and other student nodes at the same layer. Intuitively, this means that the gradient "pushes" the student node to align with class of the teacher. If so, then the student learns the corresponding class well.

A natural question arises: Are student nodes at intermediate layers correlated with teacher nodes at the same layers? One might wonder whether this is hard, since the student's intermediate layer receives no direct supervision from the corresponding teacher layer, but relies only on the backpropagated gradient. Surprisingly, the following theorem shows that it is possible for every intermediate layer:

###### Theorem 1 (Recursive Gradient Rule).

If all nodes at layer satisfy Eqn. 2,

$$g_j(x) = f'_j(x)\left[\sum_{j^\circ}\beta^*_{jj^\circ}(x)\, f_{j^\circ}(x) - \sum_{j'}\beta_{jj'}(x)\, f_{j'}(x)\right] \qquad (2)$$

then all nodes at layer also satisfy Eqn. 2, with and defined recursively from the top to the bottom layer as follows:

$$\beta^*_{kk^\circ}(x) \equiv \sum_{jj^\circ} w_{jk}\, f'_j(x)\,\beta^*_{jj^\circ}(x)\, f'_{j^\circ}(x)\, w^*_{j^\circ k^\circ}, \qquad \beta_{kk'}(x) \equiv \sum_{jj'} w_{jk}\, f'_j(x)\,\beta_{jj'}(x)\, f'_{j'}(x)\, w_{j'k'} \qquad (3)$$

And for the base case where node and are at the top-most layer, (i.e., when corresponds to , and otherwise).

Note that Theorem 1 applies to arbitrarily deep ReLU networks and allows different numbers of nodes for the teacher and student. The role played by the ReLU activation is to make the expression of concise; otherwise and can take a very complicated (and asymmetric) form.
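Concretely, at the top-most layer the negative gradient of the objective in Eqn. 1 with respect to the student output is

$$g_j(x) = -\frac{\partial J}{\partial f_j(x)} = f_{j^\circ}(x) - f_j(x),$$

which matches Eqn. 2 with the top-layer gating equal to 1, $\beta^*_{jj^\circ}(x) = 1$ exactly when $j$ corresponds to $j^\circ$ (and $0$ otherwise), and $\beta_{jj'}(x) = 1$ exactly when $j = j'$ - a short sketch of the base case, spelled out here for clarity.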
In particular, we consider the over-parameterization setting: the number of nodes on the student side is much larger (e.g., 5-10x) than the number of nodes on the teacher side. Using Theorem 1, we discover a novel and concise form of the gradient update rule:

###### Assumption 1 (Separation of Expectations).

$$\mathbb{E}_x\left[\beta^*_{jj^\circ}(x)\, f'_j(x)\, f'_{j^\circ}(x)\, f_k(x)\, f_{k^\circ}(x)\right] = \mathbb{E}_x\left[\beta^*_{jj^\circ}(x)\right]\,\mathbb{E}_x\left[f'_j(x)\, f'_{j^\circ}(x)\right]\,\mathbb{E}_x\left[f_k(x)\, f_{k^\circ}(x)\right] \qquad (4)$$

$$\mathbb{E}_x\left[\beta_{jj'}(x)\, f'_j(x)\, f'_{j'}(x)\, f_k(x)\, f_{k'}(x)\right] = \mathbb{E}_x\left[\beta_{jj'}(x)\right]\,\mathbb{E}_x\left[f'_j(x)\, f'_{j'}(x)\right]\,\mathbb{E}_x\left[f_k(x)\, f_{k'}(x)\right] \qquad (5)$$

###### Theorem 2.

If Assumption 1 holds, the gradient dynamics of deep ReLU networks with objective (Eqn. 1) is:

$$\dot{W}_l = L^*_l W^*_l H^*_{l+1} - L_l W_l H_{l+1} \qquad (6)$$

Here we explain the notations. is teacher weights, , and , , and . We can define similar notations for (which has columns/filters), , , and (Fig. 4(c)). At the lowest layer , , at the highest layer where there is no ReLU, we have due to Eqn. 1. According to the network structure, and only depend on weights , while and only depend on .

## 3 Analysis on the Dynamics

In the following, we will use Eqn. 6 to analyze the dynamics of multi-layer ReLU networks. For convenience, we first define the two functions and ( is the ReLU function):

$$\psi_l(w, w') = \mathbb{E}_x\left[\sigma(w^T x)\,\sigma(w'^T x)\right], \qquad \psi_d(w, w') = \mathbb{E}_x\left[\mathbb{I}(w^T x)\,\mathbb{I}(w'^T x)\right]. \qquad (7)$$

We assume these two functions have the following property.

###### Assumption 2 (Lipschitz condition).

There exist and so that:

$$\|\psi_i(w, w_1) - \psi_i(w, w_2)\| \le \psi_i(w, w_1)\left(1 + K_i\|w_1 - w_2\|\right), \quad i \in \{d, l\} \qquad (8)$$

Using this, we know that , , and so on. For brevity, denote (when notation is heavy) and so on. We impose the following assumption:

###### Assumption 3 (Small Overlap between teacher nodes).

There exist and so that:

$$d^{**}_{j_1 j_2} \le \epsilon_d\, d^{**}_{j_1 j_1}\ \left(\text{or } \epsilon_d\, d^{**}_{j_2 j_2}\right), \qquad l^{**}_{j_1 j_2} \le \epsilon_l\, l^{**}_{j_1 j_1}\ \left(\text{or } \epsilon_l\, l^{**}_{j_2 j_2}\right), \qquad \text{for } j_1 \ne j_2 \qquad (9)$$

Intuitively, this means that the probability of the simultaneous activation of two teacher nodes and is small. If we have sufficient training data to cover the input space, then a sufficient condition for Assumption 3 to hold is that the teacher has negative bias, which means that the teacher nodes cut corners in the space spanned by the node activations of the lower layer (Fig. 4a). We have empirically verified that the majority of biases in BatchNorm layers (after the data are whitened) are negative in VGG11/16 trained on ImageNet (Sec. 4.1).

### 3.1 Effects of BatchNorm

Batch Normalization [14] has been extensively used to speed up training, reduce tuning efforts and improve the test performance of neural networks. Here we use an interesting property of BatchNorm: the total "energy" of the incoming weights of each node is conserved over training iterations:

###### Theorem 3 (Conserved Quantity in Batch Normalization).

For the Linear-ReLU-BN or Linear-BN-ReLU configuration, of a filter before BN remains constant in training (Fig. 15).

See Appendix for the proof. A similar lemma is also in [2]. This may partially explain why BN has a stabilization effect: energy will not leak from one layer to nearby ones. Due to this property, in the following, for convenience we assume , and the gradient is always orthogonal to the current weight . Note that on the teacher side we can always push the magnitude component to the upper layer; on the student side, random initialization naturally leads to constant magnitude of weights.

### 3.2 Same number of student nodes as teacher

We start with a simple case first. Consider that we only analyze layer without over-parameterization, i.e., . We also assume that , i.e., the input of that layer is whitened, and the top-layer signal is uniform, i.e., (all entries are 1).
Then the following theorem shows that weight recovery could follow (we use as ). ###### Theorem 4. For dynamics , where is a projection matrix into the orthogonal complement of . , are corresponding -th column in and . Denote and assume . If , then with the rate ( is learning rate). Here and . See Appendix for the proof. Here we list a few remarks: Faster convergence near . we can see that due to the fact that in general becomes larger when (since can be close to ), we expect a super-linear convergence near . This brings about an interesting winner-take-all mechanism: if the initial overlap between a student node and a particular teacher node is large, then the student node will snap to it (Fig. 1(c)). Importance of projection operator . Intuitively, the projection is needed to remove any ambiguity related to weight scaling, in which the output remains constant if the top-layer weights are multiplied by a constant , while the low-layer weights are divided by . Previous works [6] also uses similar techniques while we justify it with BN. Without , convergence can be harder. Top-down modulation. Note that here we assume the top-layer signal is uniform, which means that according to , there is no preference on which student node corresponds to which teacher node . If there is a preference (e.g., ), then from the proof, the cross-term will be suppressed due to , making convergence easier. As we will see next, such a top-down modulation plays an important role for 2-layer and over-parameterization case. We believe that it also plays a similar role for deep networks. ### 3.3 Over-Parameterization and Top-down Modulation in 2-layer Network In the over-parameterization case (, e.g., 5-10x), we arrange the variables into two parts: , where contains columns (same size as ), while contains columns. We use (or -set) to specify nodes , and (or -set) for the remaining part. In this case, if we want to show “the main component” converges to , we will meet with one core question: to where will converge, or whether will even converge at all? We need to consider not only the dynamics of the current layer, but also the dynamics of the upper layer. Using a 1-hidden layer over-parameterized ReLU network as an example, Theorem 5 shows that the upper-layer dynamics automatically apply top-down modulation to suppress the influence of , regardless of their convergence. Here , where are the weight components of -set. See Fig. 5. ###### Theorem 5 (Over-Parameterization and Top-down Modulation). Consider with over-parameterization () and its upper-layer dynamics . Assume that initial value is close to : for . If (1) Assumption 3 holds for all pairwise combination of columns of and , and (2) there exists and so that Eqn. 41 and Eqn. 42 holds, then , and with rate . See Appendix for the proof (and definition of in Eqn. 45). The intuition is: if is close to and are far away from them due to Assumption 3, the off-diagonal elements of and are smaller than diagonal ones. This causes to move towards and to move towards zero. When becomes small, so does for or . This in turn suppresses the effect of and accelerates the convergence of . exponentially so that stays close to its initial locations, and Assumption 3 holds for all iterations. A few remarks: Flat minima. Since , can be changed arbitrarily without affecting the outputs of the neural network. This could explain why there are many flat directions in trained networks, and why many eigenvalues of the Hessian are close to zero [29]. Understanding of pruning methods. 
Theorem 5 naturally relates two different unstructured network pruning approaches: pruning small weights in magnitude [9, 7] and pruning weights suggested by the Hessian [19, 10]. It also suggests a principled structured pruning method: instead of pruning a filter by checking its weight norm, prune according to its top-down modulation.

Accelerated convergence and learning rate schedule. For simplicity, the theorem uses a uniform (and conservative) throughout the iterations. In practice, is initially small (due to noise introduced by the -set) but will be large after a few iterations when vanishes. Given the same learning rate, this leads to accelerated convergence. At some point, the learning rate becomes too large, leading to fluctuation. In this case, needs to be reduced.

Many-to-one mapping. Theorem 5 shows that under strict conditions, there is a one-to-one correspondence between teacher and student nodes. In general this is not the case. Two student nodes can both be in the vicinity of a teacher node and converge towards it, until that node is fully explained. We leave a rigorous mathematical analysis of many-to-one mappings to future work.

Random initialization. One nice thing about Theorem 5 is that it only requires the initial to be small. In contrast, there is no requirement for small . Therefore, we could expect that with more over-parameterization and random initialization, in each layer , it is more likely to find the -set (of fixed size ), or the lucky weights, so that is quite close to . At the same time, we don't need to worry about , which grows with more over-parameterization. Moreover, random initialization often gives orthogonal weight vectors, which naturally leads to Assumption 3.

### 3.4 Extension to Multi-layer ReLU networks

Using a similar approach, we could extend this analysis to multi-layer cases. We conjecture that similar behaviors happen: for each layer, due to over-parameterization, the weights of some lucky student nodes are close to the teacher ones. While these converge to the teacher, the final values of other irrelevant weights are initialization-dependent. If the irrelevant nodes connect to lucky nodes at the upper layer, then similar to Thm. 5, the corresponding fan-out weights converge to zero. On the other hand, if they connect to nodes that are also irrelevant, then these fan-out weights are not determined and their final values depend on initialization. However, this doesn't matter, since these upper-layer irrelevant nodes eventually meet with zero weights going up recursively, because the top-most output layer has no over-parameterization. We leave a formal analysis to future work.

## 4 Simulations

### 4.1 Checking Assumption 3

To make Theorem 4 and Theorem 5 work, we make Assumption 3 that the activation fields of different teacher nodes should be well-separated. To justify this, we analyze the bias of BatchNorm layers after the convolutional layers in pre-trained VGG11/13/16/19. We check the BatchNorm bias because these models use the Linear-BatchNorm-ReLU architecture. After BatchNorm first normalizes the input data into a zero-mean distribution, the BatchNorm bias determines how much data pass the ReLU threshold. If the bias is negative, then only a small portion of the data pass the ReLU gating, and Assumption 3 is likely to hold. From Fig. 6, it is quite clear that the majority of BatchNorm bias parameters are negative, in particular for the top layers.

### 4.2 Numerical Experiments of Thm. 5

We verify Thm. 5 by checking whether moves close to under different initialization. We use a network with one hidden layer. The teacher network is 10-20-30, while the student network has more nodes in the hidden layers. Input data are Gaussian noise. We initialize the student networks so that the first nodes are close to the teacher. Specifically, we first create matrices and by filling them with i.i.d. Gaussian noise and then normalizing their columns to . Then the initial value of the student is , where is a factor controlling how close is to . For we initialize with noise. Similarly, we initialize with a factor . The larger and , the closer the initialization and are to the ground-truth values.

Fig. 7 shows the behavior over different iterations. All experiments are repeated 32 times with different random seeds, and mean ± std are reported. We can see that a close initialization leads to faster (and lower-variance) convergence of to small values. In particular, it is important to have close to (large ), which leads to a clear separation between the row norms of and , even if they are close to each other at the beginning of training. Having close to makes the initial gap larger and also helps convergence. On the other hand, if is small, then even if is large, the gap between the row norms of and only shifts but doesn't expand over iterations.

## 5 Experiments

### 5.1 Experiment Setup

We evaluate both the fully connected (FC) and ConvNet settings. For FC, we use a ReLU teacher network of size 50-75-100-125. For ConvNet, we use a teacher with channel sizes 64-64-64-64. The student networks have the same depth but with more nodes/channels at each layer, such that they are substantially over-parameterized. When BatchNorm is added, it is added after ReLU. We use random i.i.d. Gaussian inputs with mean 0 and std (abbreviated as GAUS) and CIFAR-10 as our datasets in the experiments. GAUS generates an infinite number of samples while CIFAR-10 is a finite dataset. For GAUS, we use a random teacher network as the label provider (with classes). To make sure the weights of the teacher are weakly overlapped, we sample each entry of from , making sure they are non-zero and mutually different within the same layer, and sample biases from . In the FC case, the data dimension is 20, while in the ConvNet case it is . For CIFAR-10 we use a pre-trained teacher network with BatchNorm. In the FC case, it has an accuracy of ; for ConvNet, the accuracy is . We repeat all experiments 5 times with different random seeds and report min/max values.

Two metrics are used to check our prediction that some lucky student nodes converge to the teacher:

Normalized correlation . We compute the normalized correlation (or cosine similarity) between teacher and student activations, evaluated on a validation set. At each layer, we average the best correlation over teacher nodes: , where is computed for each teacher and student pair . means that most teacher nodes are covered by at least one student.

Mean Rank . After training, each teacher node has a most-correlated student node . We check the correlation rank of , normalized to (= rank first), back at initialization and at different epochs, and average them over teacher nodes to yield the mean rank . A small means that student nodes that initially correlate well with the teacher keep the lead toward the end of training.

### 5.2 Results

Experiments are summarized in Fig. 8 and Fig. 9. indeed grows during training, in particular for low layers that are closer to the input, where moves towards .
Furthermore, the final winning student nodes also have a good rank at the early stage of training, in particular after the first epoch, which is consistent with the late-resetting used in [8]. BatchNorm helps a lot, in particular for the CNN case with the GAUS dataset. For CIFAR-10, the final evaluation accuracy (see Appendix) learned by the student is often higher than the teacher's. Using BatchNorm accelerates the growth of accuracy and improves , but seems not to accelerate the growth of .

The theory also predicts that top-down modulation helps convergence. To examine this, we plot at different layers during optimization on GAUS. For better visualization, we align each student node index with a teacher node according to the highest . Despite the fact that the correlations are computed from the low-layer weights, they match well with the top-layer modulation (the identity-matrix structure in Fig. 11). Besides, we also perform ablation studies on GAUS.

Size of teacher network. As shown in Fig. 10(a), for small teacher networks (FC 10-15-20-25), the convergence is much faster, and training without BatchNorm is faster than training with BatchNorm. For large teacher networks, BatchNorm definitely increases convergence speed and the growth of .

Degree of over-parameterization. Fig. 12 shows the effects of different degrees of over-parameterization (, , , , and ). We initialize 32 different teacher networks (10-15-20-25) with different random seeds, and plot the standard deviation with a shaded area. We can clearly see that grows more stably and converges to higher values with over-parameterization. On the other hand, and are slower in convergence due to excessive parameters.

Finite versus Infinite Dataset. We also repeat the experiments with a pre-generated finite dataset of GAUS in the CNN case (Fig. 10(b)), and find that the convergence of node similarity stalls after a few iterations. This is because some nodes receive very few data points in their activated regions, which is not a problem for an infinite dataset. We suspect that this is probably the reason why CIFAR-10, as a finite dataset, does not show similar behavior to GAUS.

## 6 Conclusion and Future Work

In this paper we propose a new theoretical framework that uses a teacher-student setting to understand the training dynamics of multi-layered ReLU networks. With this framework, we are able to conceptually explain many puzzling phenomena in deep networks, such as why over-parameterization helps generalization, why the same network can fit both random and structured data, and why lottery tickets [7, 8] exist. We back up these intuitive explanations with Theorem 4 and Theorem 5, which collectively show that student nodes that are initialized to be close to the teacher nodes converge to them at a faster rate, and the fan-out weights of irrelevant nodes converge to zero. As next steps, we aim to extend Theorem 5 to the general multi-layer setting (when both and are present), relax Assumption 3, and study more BatchNorm effects than what Theorem 3 suggests.

## 7 Acknowledgement

The first author thanks Simon Du, Jason Lee, Chiyuan Zhang, Rong Ge, Greg Yang, Jonathan Frankle and many others for the informal discussions.

## References

• [1] Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guoliang Chen, et al. Deep speech 2: End-to-end speech recognition in english and mandarin. In International conference on machine learning, pages 173–182, 2016.
• [2] Sanjeev Arora, Zhiyuan Li, and Kaifeng Lyu. Theoretical analysis of auto rate-tuning by batch normalization. arXiv preprint arXiv:1812.03981, 2018. • [3] Marco Baity-Jesi, Levent Sagun, Mario Geiger, Stefano Spigler, G Ben Arous, Chiara Cammarota, Yann LeCun, Matthieu Wyart, and Giulio Biroli. Comparing dynamics: Deep neural networks versus glassy systems. ICML, 2018. • [4] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. • [5] Laurent Dinh, Razvan Pascanu, Samy Bengio, and Yoshua Bengio. Sharp minima can generalize for deep nets. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1019–1028. JMLR. org, 2017. • [6] Simon S Du, Jason D Lee, Yuandong Tian, Barnabas Poczos, and Aarti Singh. Gradient descent learns one-hidden-layer cnn: Don’t be afraid of spurious local minima. ICML, 2018. • [7] Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Training pruned neural networks. ICLR, 2019. • [8] Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M Roy, and Michael Carbin. The lottery ticket hypothesis at scale. arXiv preprint arXiv:1903.01611, 2019. • [9] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pages 1135–1143, 2015. • [10] Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In IEEE international conference on neural networks, pages 293–299. IEEE, 1993. • [11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. • [12] Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Brian Kingsbury, et al. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal processing magazine, 29, 2012. • [13] Sepp Hochreiter and Jürgen Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997. • [14] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML, 2015. • [15] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. ICLR, 2017. • [16] Jonas Kohler, Hadi Daneshmand, Aurelien Lucchi, Ming Zhou, Klaus Neymeyr, and Thomas Hofmann. Towards a theoretical understanding of batch normalization. arXiv preprint arXiv:1805.10694, 2018. • [17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. • [18] David Krueger, Nicolas Ballas, Stanislaw Jastrzebski, Devansh Arpit, Maxinder S Kanwal, Tegan Maharaj, Emmanuel Bengio, Asja Fischer, and Aaron Courville. Deep nets don’t learn via memorization. ICLR Workshop, 2017. • [19] Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in neural information processing systems, pages 598–605, 1990. • [20] Chunyuan Li, Heerad Farkhoor, Rosanne Liu, and Jason Yosinski. Measuring the intrinsic dimension of objective landscapes. ICLR, 2018. 
• [21] Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. In Advances in Neural Information Processing Systems, pages 6389–6399, 2018.
• [22] Zachary C Lipton. Stuck in a what? adventures in weight space. arXiv preprint arXiv:1602.07320, 2016.
• [23] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, 2013.
• [24] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
• [25] Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. Exploring generalization in deep learning. In Advances in Neural Information Processing Systems, pages 5947–5956, 2017.
• [26] Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. Towards understanding the role of over-parametrization in generalization of neural networks. arXiv preprint arXiv:1805.12076, 2018.
• [27] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. ICLR Workshop, 2015.
• [28] David Rolnick, Andreas Veit, Serge Belongie, and Nir Shavit. Deep learning is robust to massive label noise. arXiv preprint arXiv:1705.10694, 2017.
• [29] Levent Sagun, Leon Bottou, and Yann LeCun. Eigenvalues of the hessian in deep learning: Singularity and beyond. ICLR, 2017.
• [30] Levent Sagun, Utku Evci, V Ugur Guney, Yann Dauphin, and Leon Bottou. Empirical analysis of the hessian of over-parametrized neural networks. ICLR 2018 Workshop Contribution, arXiv preprint arXiv:1706.04454, 2018.
• [31] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484, 2016.
• [32] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017.
• [33] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
• [34] Stefano Spigler, Mario Geiger, Stéphane d’Ascoli, Levent Sagun, Giulio Biroli, and Matthieu Wyart. A jamming transition from under- to over-parametrization affects loss landscape and generalization. arXiv preprint arXiv:1810.09665, 2018.
• [35] Yuandong Tian. A theoretical framework for deep locally connected relu network. arXiv preprint arXiv:1809.10829, 2018.
• [36] Yuandong Tian and Yan Zhu. Better computer go player with neural network and long-term prediction. ICLR, 2016.
• [37] Lei Wu, Zhanxing Zhu, et al. Towards understanding generalization of deep learning: Perspective of loss landscapes. arXiv preprint arXiv:1706.10239, 2017.
• [38] Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, and Samuel S Schoenholz. A mean field theory of batch normalization. ICLR, 2019.
• [39] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. ICLR, 2017.
• [40] Chiyuan Zhang, Samy Bengio, Moritz Hardt, and Yoram Singer. Identity crisis: Memorization and generalization under extreme overparameterization. arXiv preprint arXiv:1902.04698, 2019.
• [41] Hattie Zhou, Janice Lan, Rosanne Liu, and Jason Yosinski. Deconstructing lottery tickets: Zeros, signs, and the supermask. arXiv preprint arXiv:1905.01067, 2019.

## Appendix A: Proofs

### A.1 Theorem 1

###### Proof.

The first part of the gradient backpropagated to node $j$ is:

$$g_{1j}(x) = f'_j(x)\sum_{j^\circ}\beta^*_{jj^\circ}(x)\,f_{j^\circ}(x) \tag{10}$$

$$= f'_j(x)\sum_{j^\circ}\beta^*_{jj^\circ}(x)\,f'_{j^\circ}(x)\sum_{k^\circ} w^*_{j^\circ k^\circ}\,f_{k^\circ}(x) \tag{11}$$

$$= \sum_{k^\circ}\Big[f'_j(x)\sum_{j^\circ}\beta^*_{jj^\circ}(x)\,f'_{j^\circ}(x)\,w^*_{j^\circ k^\circ}\Big]\,f_{k^\circ}(x) \tag{12}$$

Therefore, for the gradient to node $k$, we have:

$$g_{1k}(x) = f'_k(x)\sum_j w_{jk}\,g_{1j}(x) \tag{13}$$

$$= f'_k(x)\sum_{k^\circ}\underbrace{\Big[\sum_{j,\,j^\circ} w_{jk}\,f'_j(x)\,\beta^*_{jj^\circ}(x)\,f'_{j^\circ}(x)\,w^*_{j^\circ k^\circ}\Big]}_{\beta^*_{kk^\circ}(x)}\,f_{k^\circ}(x) \tag{14}$$

The same derivation applies to the remaining gradient terms. Therefore, by mathematical induction, we know that the gradients at nodes in different layers all follow the same form. ∎

### A.2 Theorem 2

###### Proof.

Using Thm. 1, we can write down the update for the weight $w_{jk}$ that connects node $j$ and node $k$:

$$\dot{w}_{jk} = \sum_{j^\circ,k^\circ} w^*_{j^\circ k^\circ}\,\underbrace{\mathbb{E}_x\!\left[f'_j(x)\,\beta^*_{jj^\circ}(x)\,f'_{j^\circ}(x)\,f_k(x)\,f_{k^\circ}(x)\right]}_{\beta^*_{jj^\circ kk^\circ}} \;-\; \sum_{j',k'} w_{j'k'}\,\underbrace{\mathbb{E}_x\!\left[f'_j(x)\,\beta_{jj'}(x)\,f'_{j'}(x)\,f_k(x)\,f_{k'}(x)\right]}_{\beta_{jj'kk'}} \tag{15}$$

Note that $j^\circ$ and $k^\circ$ run over all parent and children nodes on the teacher side. This formulation works for over-parameterization (e.g., the student indices $j'$ and $k'$ can run over a different number of nodes). Applying Assumption 1 and rearranging the terms in matrix form yields Eqn. 6. ∎

### A.3 Theorem 3

###### Proof.

Given a batch, denote the pre-batchnorm activations and their gradients as in Fig. 14(a); the whitened activations and the final output of BN are computed from them using the learnable scale and bias parameters. With vector notation, the gradient update in BN has a compact form with a clear geometric meaning:

###### Lemma 1 (Backpropagation of Batch Norm [35]).

For a top-down gradient $g$, the BN layer gives the following gradient update ($P^\perp_{f,1}$ is the orthogonal complementary projection of the subspace spanned by $f$ and $1$):

$$g_f = J^{BN}(f)\,g = \frac{c_0}{\sigma}\,P^\perp_{f,1}\,g, \qquad g_c = S(f)^T g \tag{16}$$

Intuitively, the back-propagated gradient is zero-mean and perpendicular to the input activation of the BN layer, as illustrated in Fig. 14. Unlike [16, 38], which analyze BN in an approximate manner, here we do not impose any assumptions.

Given Lemma 1, we can prove Thm. 3. For Fig. 15(a), using the property from Lemma 1 that the back-propagated gradient is perpendicular to the input activation (the expectation is taken over the batch) and the weight update rule (over the same batch), we have:

$$\frac{1}{2}\frac{d\|w_j\|^2}{dt} = \sum_{k\in\mathrm{ch}(j)} w_{jk}\dot{w}_{jk} = \mathbb{E}_x\!\left[\sum_{k\in\mathrm{ch}(j)} w_{jk} f_k(x)\,g^{\mathrm{lin}}_j(x)\right] = \mathbb{E}_x\!\left[f^{\mathrm{lin}}_j(x)\,g^{\mathrm{lin}}_j(x)\right] = 0 \tag{17}$$

For Fig. 15(b), an analogous identity holds and the conclusion follows. ∎

### A.4 Lemmas

For simplicity, in the following we write $\delta w_j \equiv w^*_j - w_j$.

###### Lemma 2 (Bottom Bounds).

Assume all weight vectors are normalized ($\|w_{j'}\| = 1$). Denote

$$p^*_{jj'} \equiv w^*_{j'} d^*_{jj'}, \qquad p_{jj'} \equiv w_{j'} d_{jj'}, \qquad \Delta p_{jj'} \equiv p^*_{jj'} - p_{jj'} \tag{18}$$

If Assumption 2 holds, we have:

$$\|\Delta p_{jj'}\| \le (1+K_d)\,d^*_{jj'}\,\|\delta w_{j'}\| \tag{19}$$

If Assumption 3 also holds, then:

$$d^*_{jj'} \le \epsilon_d\,(1+K_d\|\delta w_{j'}\|)(1+K_d\|\delta w_j\|)\,d^*_{jj} \tag{20}$$

###### Proof.

We have:

$$\|\Delta p_{jj'}\| = \|w^*_{j'} d^*_{jj'} - w_{j'} d_{jj'}\| \tag{21}$$

$$= \|w_{j'}(d^*_{jj'} - d_{jj'}) + (w^*_{j'} - w_{j'})\,d^*_{jj'}\| \tag{22}$$

$$\le \|w_{j'}\|\,\|d^*_{jj'} - d_{jj'}\| + \|w^*_{j'} - w_{j'}\|\,d^*_{jj'} \tag{23}$$

$$\le d^*_{jj'} K_d \|\delta w_{j'}\| + d^*_{jj'} \|\delta w_{j'}\| \tag{24}$$

$$\le (1+K_d)\,d^*_{jj'}\,\|\delta w_{j'}\| \tag{25}$$

If Assumption 3 also holds, we have:

$$d^*_{jj'} \le d^{**}_{jj'}(1+K_d\|\delta w_{j'}\|) \tag{26}$$

$$\le \epsilon_d\,d^{**}_{jj}(1+K_d\|\delta w_{j'}\|) \tag{27}$$

$$\le \epsilon_d\,d^*_{jj}(1+K_d\|\delta w_j\|)(1+K_d\|\delta w_{j'}\|) \tag{28}$$

∎

###### Lemma 3 (Top Bounds).

Denote

$$q^*_{jj'} \equiv v^*_{j'} l^*_{jj'}, \qquad q_{jj'} \equiv v_{j'} l_{jj'}, \qquad \Delta q_{jj'} \equiv q^*_{jj'} - q_{jj'} \tag{29}$$

If Assumption 2 holds, we have:

$$\|\Delta q_{jj'}\| \le (1+K_l)\,l^*_{jj'}\,\|\delta w_{j'}\| \tag{30}$$

If Assumption 3 also holds, then:

$$l^*_{jj'} \le \epsilon_l\,(1+K_l\|\delta w_{j'}\|)(1+K_l\|\delta w_j\|)\,l^*_{jj} \tag{31}$$

###### Proof.

The proof is similar to Lemma 2.
∎

###### Lemma 4 (Quadratic fall-off for diagonal elements of L).

For node $j$, we have:

$$\|l^*_{jj} - l_{jj}\| \le C_0\,l^*_{jj}\,\|\delta w_j\|^2 \tag{32}$$

###### Proof.

The intuition here is that both the volume of the affected area and the weight difference are proportional to $\|\delta w_j\|$; the discrepancy is their product and is thus proportional to $\|\delta w_j\|^2$. See Fig. 16. ∎

### A.5 Theorem 4

###### Proof.

First of all, note that for unit weight vectors $\|\delta w_j\| = 2\sin(\theta_j/2)$, where $\theta_j$ is the angle between $w_j$ and $w^*_j$; so given a bound on $\theta_j$, we also have a bound for $\|\delta w_j\|$. In this case, the matrix form can be written as the following:

$$\dot{w}_j = P^\perp_{w_j} w^*_j d^*_{jj} + \sum_{j'\neq j} P^\perp_{w_j}\big(w^*_{j'} d^*_{jj'} - w_{j'} d_{jj'}\big) = P^\perp_{w_j} p^*_{jj} + \sum_{j'\neq j} P^\perp_{w_j}\,\Delta p_{jj'} \tag{33}$$

by using $P^\perp_{w_j} w_j = 0$ (and thus the $w_j d_{jj}$ term doesn't matter). Since $\|w_j\|$ is conserved, it suffices to check whether the projection of the weight vector $w_j$ onto the complementary space of the ground-truth direction $w^*_j$ goes to zero:

$$P^\perp_{w^*_j}\dot{w}_j = P^\perp_{w^*_j} P^\perp_{w_j} p^*_{jj} + \sum_{j'\neq j} P^\perp_{w^*_j} P^\perp_{w_j}\,\Delta p_{jj'} \tag{34}$$

Denote $\cos\theta_j \equiv w_j^\top w^*_j$. First we have:

$$P^\perp_{w^*_j} P^\perp_{w_j} w^*_j = P^\perp_{w^*_j}\big(I - w_j w_j^\top\big) w^*_j = -P^\perp_{w^*_j} w_j w_j^\top w^*_j = -\cos\theta_j\,P^\perp_{w^*_j} w_j \tag{35}$$

From Lemma 2, we know that

$$\|\Delta p_{jj'}\| \le (1+K_d)\,d^*_{jj'}\,\|\delta w_{j'}\| \le \epsilon_d\,(1+K_d)\big[1+2K_d\sin(\theta_0/2)\big]^2\,d^*_{jj}\,\|\delta w_{j'}\| \tag{36}$$

Note that here we have
https://testbook.com/question-answer/taylors-stability-number-curves-are-used-fo--5f544fad9d7a5abae1ca7ace
# Taylor’s stability number curves are used for the analysis of stability of slopes. The angle of shearing resistance used in the chart is the

1. Effective angle
2. Apparent angle
3. Mobilized angle
4. Weighed angle

Option 3 : Mobilized angle

## Detailed Solution

Concept

Taylor proposed a dimensionless parameter called the Taylor stability number.

Stability number method

It is the method used to evaluate slope stability for homogeneous soils having cohesion. It is based on the resistance of the soil mass against sliding, which arises from cohesion and internal friction acting over the failure plane. This failure surface is assumed to be a circular arc. The factors affecting the stability of a soil slope are expressed with the parameter called the stability number.

The stability number (Sn) is given by

$${S_n} = \frac{c}{{\gamma {H_c}}} = \frac{{{c_m}}}{{\gamma H}} = \frac{c}{{\gamma FH}}$$

where:

- c is the unit cohesion
- cm is the unit mobilized cohesion
- H is the vertical height of the slope
- Hc is the critical height of the slope
- F is the factor of safety with respect to cohesion

In this method, the Taylor stability number is read from the Taylor stability chart. The Taylor method is suitable for c-ϕ soil and for ϕ = 0 (pure clay).

From the chart, we can see that the angle of shearing resistance used is the mobilized angle (ϕm).

∴ The angle of shearing resistance used is the mobilized angle (ϕm).
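As a small illustration of how the stability number is used in practice (my own sketch, not part of the original solution; all numbers are made up, and Sn would normally be read off Taylor's chart for the given slope angle and mobilized friction angle):

```python
# Factor of safety with respect to cohesion from a chart-read
# stability number, rearranging S_n = c / (gamma * F * H).
c = 25.0      # unit cohesion, kPa (assumed)
gamma = 18.0  # unit weight of soil, kN/m^3 (assumed)
H = 10.0      # vertical height of the slope, m (assumed)
Sn = 0.06     # stability number read from Taylor's chart (assumed)

F = c / (gamma * H * Sn)
print(f"Factor of safety w.r.t. cohesion: {F:.2f}")  # about 2.31 here
```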
https://rdrr.io/cran/hht/man/PrecisionTester.html
# PrecisionTester: Test numerically determined instantaneous frequency against...

In hht: The Hilbert-Huang Transform: Tools and Methods

## Description

This function compares the performance of InstantaneousFrequency against signals of known instantaneous frequency. The known signal is of the form

a * sin(omega.1 * tt + phi.1) + b * sin(omega.2 * tt + phi.2) + c

One can create quite complicated signals by choosing the various amplitude, frequency, and phase constants.

## Usage

PrecisionTester(tt = seq(0, 10, by = 0.01), method = "arctan", lag = 1,
    a = 1, b = 1, c = 1, omega.1 = 2 * pi, omega.2 = 4 * pi,
    phi.1 = 0, phi.2 = pi/6, plot.signal = TRUE, plot.instfreq = TRUE,
    plot.error = TRUE, new.device = TRUE, ...)

## Arguments

- tt: Sample times.
- method: How the numeric instantaneous frequency is calculated; see InstantaneousFrequency.
- lag: Differentiation lag; see the diff function in the base package.
- a: Amplitude coefficient for the first sinusoid.
- b: Amplitude coefficient for the second sinusoid.
- c: DC shift.
- omega.1: Frequency of the first sinusoid.
- omega.2: Frequency of the second sinusoid.
- phi.1: Phase shift of the first sinusoid.
- phi.2: Phase shift of the second sinusoid.
- plot.signal: Whether to show the time series.
- plot.instfreq: Whether to show the instantaneous frequencies, comparing the numerical and analytical result.
- plot.error: Whether to show the difference between the numerical and analytical result.
- new.device: Whether to open each plot as a new plot window (defaults to TRUE). However, Sweave doesn't like dev.new(). If you want to use PrecisionTester in Sweave, be sure that new.device = FALSE.
- ...: Plotting parameters.

## Details

For a description of how the exact analytical frequency is derived, see http://www.unc.edu/%7Ehaksaeng/hht/analytic_instantaneous_freq.pdf

## Value

- instfreq$sig: The time series
- instfreq$analytic: The exact instantaneous frequency
- instfreq$numeric: The numerically-derived instantaneous frequency from InstantaneousFrequency

## Author(s)

Daniel C. Bowman [email protected]

## See Also

InstantaneousFrequency

## Examples

#Simple signal
tt <- seq(0, 10, by = 0.01)
a <- 1
b <- 0
c <- 0
omega.1 <- 30 * pi
omega.2 <- 0
phi.1 <- 0
phi.2 <- 0

PrecisionTester(tt, method = "arctan", lag = 1, a, b, c,
    omega.1, omega.2, phi.1, phi.2)

#That was nice - what happens if we use the "chain" method...?
PrecisionTester(tt, method = "chain", lag = 1, a, b, c,
    omega.1, omega.2, phi.1, phi.2)

#Big problems! Let's increase the sample rate
tt <- seq(0, 10, by = 0.0005)
PrecisionTester(tt, method = "chain", lag = 1, a, b, c,
    omega.1, omega.2, phi.1, phi.2)
#That's better

#Frequency modulations caused by signal that is not symmetric about 0
tt <- seq(0, 10, by = 0.01)
a <- 1
b <- 0
c <- 0.25
omega.1 <- 2 * pi
omega.2 <- 0
phi.1 <- 0
phi.2 <- 0
PrecisionTester(tt, method = "arctan", lag = 1, a, b, c,
    omega.1, omega.2, phi.1, phi.2)

#Non-uniform sample rate
set.seed(628)
tt <- sort(runif(500, 0, 10))
a <- 1
b <- 0
c <- 0
omega.1 <- 2 * pi
omega.2 <- 0
phi.1 <- 0
phi.2 <- 0
PrecisionTester(tt, method = "arctan", lag = 1, a, b, c,
    omega.1, omega.2, phi.1, phi.2)
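For readers working in Python rather than R, here is a rough analogue of the same check (my own sketch, not part of the hht package): estimate the instantaneous frequency of a pure sinusoid from the unwrapped phase of its analytic signal, then compare against the known frequency.

```python
import numpy as np
from scipy.signal import hilbert

fs = 200.0                          # sample rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
f0 = 1.0                            # true instantaneous frequency, Hz
x = np.sin(2 * np.pi * f0 * t)

# Analytic signal -> phase -> numerical instantaneous frequency.
phase = np.unwrap(np.angle(hilbert(x)))
inst_freq = np.gradient(phase, t) / (2 * np.pi)

# Ignore edge effects of the Hilbert transform when measuring error.
err = np.abs(inst_freq[50:-50] - f0).max()
print(f"max interior error: {err:.2e} Hz")
```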
https://tavianator.com/a-quick-trick-for-faster-naive-matrix-multiplication/
# A quick trick for faster naïve matrix multiplication

If you need to multiply some matrices together very quickly, usually it's best to use a highly optimized library like ATLAS. But sometimes adding such a dependency isn't worth it, if you're worried about portability, code size, etc. If you just need good performance, rather than the best possible performance, it can make sense to hand-roll your own matrix multiplication function.

Unfortunately, the way that matrix multiplication is usually taught,

$$C_{i,j} = \sum_k A_{i,k} \, B_{k,j},$$

translates directly into the following code:

void matmul(double *dest, const double *lhs, const double *rhs,
            size_t rows, size_t mid, size_t cols) {
    for (size_t i = 0; i < rows; ++i) {
        for (size_t j = 0; j < cols; ++j) {
            const double *rhs_row = rhs;
            double sum = 0.0;
            for (size_t k = 0; k < mid; ++k) {
                sum += lhs[k] * rhs_row[j];
                rhs_row += cols;
            }
            *dest++ = sum;
        }
        lhs += mid;
    }
}

This function multiplies a rows×mid matrix with a mid×cols matrix using the "linear algebra 101" algorithm. Unfortunately, it has a bad memory access pattern: we loop over dest and lhs pretty much in order, but jump all over the place in rhs, since it's stored row-major but we need its columns.

Luckily there's a simple fix that's dramatically faster: instead of computing each cell of the destination separately, we can update whole rows of it at a time. Effectively, we do this:

$$C_{i} = \sum_j A_{i,j} \, B_j,$$

where $B_j$ denotes the $j$-th row of $B$. In code, it looks like this:

void matmul(double *dest, const double *lhs, const double *rhs,
            size_t rows, size_t mid, size_t cols) {
    memset(dest, 0, rows * cols * sizeof(double));
    for (size_t i = 0; i < rows; ++i) {
        const double *rhs_row = rhs;
        for (size_t j = 0; j < mid; ++j) {
            for (size_t k = 0; k < cols; ++k) {
                dest[k] += lhs[j] * rhs_row[k];
            }
            rhs_row += cols;
        }
        dest += cols;
        lhs += mid;
    }
}

On my computer, that drops the time to multiply two 256×256 matrices from 37ms to 13ms (with gcc -O3). ATLAS does it in 5ms, though, so always use something like it if it's available.
http://touchnerds.com/standard-deviation/standard-error-anova-formula.html
# Standard Error Anova Formula

You will also learn how to obtain these results using Minitab Express and Minitab. For each pairwise comparison, $$H_0: \mu_1 - \mu_2=0$$ and $$H_a: \mu_1 - \mu_2 \neq 0$$. Individual observations (X's) and means (red dots) for random samples from a population with a parametric mean of 5 (horizontal line). This web page calculates the standard error of the mean and other descriptive statistics for up to 10000 observations. Sometimes "standard error" is used by itself; this almost certainly indicates the standard error of the mean, but because there are also statistics for the standard error of the variance, the standard error

NOTE: The X'X matrix has been found to be singular, and a generalized inverse was used to solve the normal equations.

We wish to ask whether mean pig weights are the same for all 4 diets: $$H_0: \mu_1 = \mu_2 = \mu_3 = \mu_4$$; $$H_a:$$ not all $$\mu$$ are equal (data from a study of pigs). State a "real world" conclusion: based on your decision in step 4, write a conclusion in terms of the original research question.

The degrees of freedom for the model is equal to one less than the number of categories. The Mean Squares are the Sums of Squares divided by the corresponding degrees of freedom. An example of a "real world" conclusion: there is evidence that the mean online learning self-efficacy scores of students from World Campus, University Park, and the Commonwealth Campuses are not all equal. We can analyze this data set using ANOVA to determine if a linear relationship exists between the independent variable, temperature, and the dependent variable, yield.

The residual standard deviation is the square root of the residual SS divided by (n-2). The amount of uncertainty that remains is the sum of the squared differences between each observation and its group's mean. The ANOVA table partitions this variability into two parts. Happiness was measured on a scale of 1 to 3.

Example: ACT Scores by Program. The dataset ACT_Program.MTW is used to compare the ACT scores of students who have completed three different test prep programs.

People almost always say "standard error of the mean" to avoid confusion with the standard deviation of observations. The one-way ANOVA source table is:

| Source | SS | df | MS | F | p |
| --- | --- | --- | --- | --- | --- |
| Between Groups (Factor) | $$\sum_{k}n_k(\overline{x}_k-\overline{x}_\cdot)^2$$ | $$k-1$$ | $$\frac{SS_{Between}}{df_{Between}}$$ | $$\frac{MS_{Between}}{MS_{Within}}$$ | See Minitab Express or F table |
| Within Groups (Error) | $$\sum_k \sum_i(x_{ik}-\overline{x}_k)^2$$ | $$n-k$$ | $$\frac{SS_{Within}}{df_{Within}}$$ | | |
| Total | $$\sum_k \sum_i(x_{ik}-\overline{x}_\cdot)^2$$ | $$n-1$$ | | | |

You can compare these p-values to the standard alpha level of .05.
Error bars in experimental biology. This web page calculates the standard error of the mean, along with other descriptive statistics. My understanding is that group SEs should be based on the experimental error mean square. Determine a p-value associated with the test statistic: when performing a one-way ANOVA using statistical software, you will be given the p-value in the ANOVA source table. The model is considered to be statistically significant if it can account for a large amount of variability in the response. When I see a graph with a bunch of points and error bars representing means and confidence intervals, I know that most (95%) of the error bars include the parametric means. The sample variance is also referred to as a mean square because it is obtained by dividing the sum of squares by the respective degrees of freedom. The heights of students with brown and blue eyes are significantly different from one another. Online learning self-efficacy is approximately normally distributed. The * indicates the sample mean value (e.g. 3.13). This means that the average happiness differs for at least one marital status group. As you increase your sample size, the standard error of the mean will become smaller. It is usually safer to test hypotheses directly, using whatever facilities the software provides, than to take a chance on the proper interpretation of the software's model parametrization. Because the estimate of the standard error is based on only three observations, it varies a lot from sample to sample.
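A minimal sketch of the one-way ANOVA these notes describe, using SciPy (my own example; the three groups below are made-up illustration data, not from the page):

```python
from scipy import stats

# Three groups of observations (fabricated for illustration only).
group_a = [21, 24, 19, 23, 22]
group_b = [26, 27, 25, 28, 24]
group_c = [20, 22, 21, 19, 23]

# One-way ANOVA: F = MS_between / MS_within, with the p-value taken
# from the F distribution on (k-1, n-k) degrees of freedom.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# Reject H0 (all group means equal) if p < alpha = 0.05.
```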
https://openjij.github.io/OpenJij/tutorial/en/optimization/knapsack.html
# Knapsack Problem

Here we show how to solve the knapsack problem using OpenJij, JijModeling, and JijModeling Transpiler. This problem is also mentioned in 5.2. Knapsack with Integer Weights in Lucas, 2014, "Ising formulations of many NP problems".

## Overview of the Knapsack Problem

The knapsack problem is the problem of finding the optimal solution in the following situations. It is known as one of the most famous NP-hard integer programming problems. First, let us consider an example.

### Example

As an example of this problem, consider the following story. In a cave, an explorer unexpectedly discovered several treasures.

| | Treasure A | Treasure B | Treasure C | Treasure D | Treasure E | Treasure F |
| --- | --- | --- | --- | --- | --- | --- |
| Price | $5000 | $7000 | $2000 | $1000 | $4000 | $3000 |
| Weight | 800g | 1000g | 600g | 400g | 500g | 300g |

Unfortunately, the explorer only has a small knapsack. This knapsack can only hold up to 2 kg. The explorer wants to get as much value as possible for the treasure in this knapsack, so which treasures should he bring back?

### Generalizing the Problem

To generalize this problem, assume that there is a set $\{ 0, 1, \dots, i, \dots, N-1\}$ of $N$ items to put in the knapsack and that each item has $i$ as its index. We can represent the problem by making a list of costs $\boldsymbol{v}$ and a list of weights $\boldsymbol{w}$ for each item $i$ to be put in the knapsack.

$$\boldsymbol{v} = \{v_0, v_1, \dots, v_i, \dots, v_{N-1}\}$$

$$\boldsymbol{w} = \{w_0, w_1, \dots, w_i, \dots, w_{N-1}\}$$

Let $x_i$ further denote the binary variable that represents the $i$th item being selected: $x_i = 1$ when item $i$ is placed in the knapsack and $x_i = 0$ when it is not. Finally, let $W$ be the maximum capacity of the knapsack. We want to maximize the total value of the items we can put in the knapsack, and we express this as an objective function. Given the further constraint that the knapsack must be below the capacity limit, the knapsack problem can be expressed as the following:

$$\max \ \sum_{i=0}^{N-1} v_i x_i \tag{1}$$

$$\mathrm{s.t.} \quad \sum_{i=0}^{N-1} w_i x_i \leq W \tag{2}$$

$$x_i \in \{0, 1\} \quad (\forall i \in \{0, 1, \dots, N-1\}) \tag{3}$$

## Modeling by JijModeling

### Variables

Let us define the variables $\boldsymbol{v}, \boldsymbol{w}, N, W, x_i, i$ used in expressions (1), (2) and (3) as follows:

import jijmodeling as jm

# define variables
v = jm.Placeholder('v', dim=1)
N = v.shape[0].set_latex('N')
w = jm.Placeholder('w', shape=(N,))
W = jm.Placeholder('W')
x = jm.Binary('x', shape=(N,))
i = jm.Element('i', (0, N))

v = jm.Placeholder('v', dim=1) declares a one-dimensional list of values of things to put in the knapsack, and the number of elements is N. N has a set_latex() expression so that its rendered representation changes. Using that N, we can guarantee that v and w have the same length by defining a one-dimensional list representing the weight of the items to put in the knapsack as w = jm.Placeholder('w', shape=(N,)). W = jm.Placeholder('W') defines $W$ to represent the knapsack capacity limit. x = jm.Binary('x', shape=(N,)) defines a binary variable list x of the same length as v and w. Finally, i = jm.Element('i', (0, N)) defines the indices of $v_i, w_i, x_i$, which are integers in the range $0\leq i < N$.

### Objective Function

Expression (1) is implemented as the objective function. Note that we add a negative sign to make this a minimization problem. Let us create a problem and add the objective function to it. With Sum(i, formula), we can sum the expression over the subscript i.
# set problem
problem = jm.Problem('Knapsack')
# set objective function
obj = - jm.Sum(i, v[i]*x[i])
problem += obj

### Constraint

Let us implement the constraint in expression (2) by using Constraint(constraint name, constraint expression). This gives the appropriate constraint name to the constraint expression.

# set total weight constraint
total_weight = jm.Sum(i, w[i]*x[i])
problem += jm.Constraint('weight', total_weight<=W)
problem

\begin{alignat*}{4}\text{Problem} & \text{: Knapsack} \\\min & \quad - \sum_{ i = 0 }^{ N - 1 } v_{i} \cdot x_{i} \\\text{s.t.} & \\& \text{weight} :\\ &\quad \quad \sum_{ i = 0 }^{ N - 1 } w_{i} \cdot x_{i} \leq W,\\[8pt]& x_{i_{0}} \in \{0, 1\}\end{alignat*}

### Instance

Let us set up an instance of the explorer story from earlier. The value of the treasure is normalized to $1000, and the weight of the treasure is also normalized to 100 g.

# set a list of values & weights
inst_v = [5, 7, 2, 1, 4, 3]
inst_w = [8, 10, 6, 4, 5, 3]
# set maximum weight
inst_W = 20
instance_data = {'v': inst_v, 'w': inst_w, 'W': inst_W}

### Undetermined Multiplier

This knapsack problem has one constraint, and we need to set the weight of that constraint. We set it to match the name we gave in the Constraint part earlier, using a dictionary.

# set multipliers
lam1 = 1.0
multipliers = {'weight': lam1}

### Conversion to PyQUBO by JijModeling Transpiler

JijModeling has executed all the implementations so far. By converting this to PyQUBO, it is possible to perform combinatorial optimization calculations using OpenJij and other solvers.

from jijmodeling.transpiler.pyqubo import to_pyqubo

# convert to pyqubo
pyq_model, pyq_cache = to_pyqubo(problem, instance_data, {})
qubo, bias = pyq_model.compile().to_qubo(feed_dict=multipliers)

The PyQUBO model is created by to_pyqubo, taking the problem created with JijModeling and the instance_data we set as arguments. Next, we compile it into a QUBO model that can be computed by OpenJij or another solver.

### Optimization by OpenJij

This time, we will use OpenJij's simulated annealing to solve the optimization problem. We create the SASampler and input the QUBO into that sampler to get the result of the calculation.

import openjij as oj

# set sampler
sampler = oj.SASampler()
# solve problem
response = sampler.sample_qubo(qubo)

### Decoding and Displaying the Solution

Decode the returned results to facilitate analysis.

# decode solution
result = pyq_cache.decode(response)

From the result thus obtained, let us see which treasures we decide to put in the knapsack.

indices, _, _ = result.lowest().record.solution['x'][0]
inst_w = instance_data['w']
sum_w = 0
for i in indices[0]:
    sum_w += inst_w[i]
print('Indices of x = 1: ', indices[0])
print('Value of objective function: ', result.lowest()[0].evaluation.objective)
print('Value of constraint term: ', result.lowest()[0].evaluation.constraint_violations['weight'])
print('Total weight: ', sum_w)

Indices of x = 1: [1, 4, 5]
Value of objective function: [-14.0]
Value of constraint term: [0.0]
Total weight: 18

The value of the objective function, multiplied by -1, is the total value of the treasure in the knapsack. result.lowest()[0].evaluation.constraint_violations[constraint name] shows by how much the named constraint is violated.
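As a quick sanity check (my own addition, not part of the tutorial), the instance is small enough to enumerate all $2^6$ selections and confirm that the annealer found the true optimum:

```python
from itertools import product

inst_v = [5, 7, 2, 1, 4, 3]
inst_w = [8, 10, 6, 4, 5, 3]
inst_W = 20

# Exhaustively try every 0/1 assignment of the six items.
best_value, best_choice = -1, None
for choice in product([0, 1], repeat=len(inst_v)):
    weight = sum(w * x for w, x in zip(inst_w, choice))
    value = sum(v * x for v, x in zip(inst_v, choice))
    if weight <= inst_W and value > best_value:
        best_value, best_choice = value, choice

print(best_value, [i for i, x in enumerate(best_choice) if x])
# Expected: value 14 with items [1, 4, 5], matching the SA result above.
```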
https://stats.stackexchange.com/questions/288686/confusion-in-hypothesis-testing-for-the-linear-model/288693
# Confusion in hypothesis testing for the linear model

Consider the usual normal linear model $y=X\beta + \epsilon$, where $X$ is $N\times (p+1)$ and $\epsilon \sim N(0,\sigma^2I_N)$.

I'm reading up on hypothesis testing in this model: $H_0:$ $\beta_j=0$, $H_1:$ $\beta_j\neq0$. A statistic of interest for this test is $\frac{\hat{\beta}_j}{\hat{\sigma}\sqrt{v_j}}$, where $v_j=(X^TX)^{-1}_{jj}$.

What I fail to understand is the link between this kind of hypothesis testing and the usual definition. The framework for hypothesis testing is usually the following: we have a sample $X_1,\ldots, X_n$ where the $X_i$ are i.i.d. and follow the probability distribution $P_{\theta}$, where $\theta$ is unknown. We consider a partition of the parameter space and some $R\subset \mathbb R^n$ (the rejection region). Here, what is the sample? What are the probability distributions $P_{\theta}$? What is $R$?

## 1 Answer

A sample is $\{y_i, x_{i1}, x_{i2}, \ldots, x_{ip} \}$ for $i \in \{ 1, 2, \ldots , N\}$, or simply the matrix $X$ and the vector $y$. We assume that $\varepsilon \sim N(0, \sigma^2 I_N)$, which translates to $y \sim N(X\beta, \sigma^2 I_N)$. So the family of $N$-dimensional normal distributions with mean vector $X\beta$ and covariance matrix $\sigma^2 I_N$ plays the role of $P_\theta$, and $\beta$ plays the role of $\theta$.

The statistic from your question follows a $t$ distribution with $N-p-1$ degrees of freedom, so the rejection region is $(-\infty, -t^\star) \cup (t^\star, +\infty)$, where $t^\star$ is the $(1-\alpha/2)$-quantile of the $t$ distribution with $N-p-1$ degrees of freedom and $\alpha$ is your significance level (typically $\alpha=0.05$).

• Thanks for your input. Please have a look at how Casella and Berger define the rejection region in Statistical Inference: imgur.com/a/HxGHc It's a subset of the sample space. What definition of the rejection region are you using? Jul 4 '17 at 8:17
• In my answer the rejection region is a subset of the values that the test statistic can possibly take. So, to be consistent with Casella and Berger, I should write that the rejection region is the set of $X$'s and $y$'s such that $\hat\beta_j / \hat \sigma \sqrt{v_j}$ lies in $(-\infty, -t^\star) \cup (t^\star, +\infty)$. Jul 4 '17 at 8:25
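To make the correspondence concrete, here is a small numerical illustration of the statistic $\hat{\beta}_j / (\hat{\sigma}\sqrt{v_j})$ (my own sketch with simulated data; the design matrix and coefficients are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 50, 2
X = np.column_stack([np.ones(N), rng.normal(size=(N, p))])  # N x (p+1)
beta = np.array([1.0, 2.0, 0.0])          # true beta, with beta_2 = 0
y = X @ beta + rng.normal(scale=0.5, size=N)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y              # least-squares estimate
resid = y - X @ beta_hat
sigma_hat = np.sqrt(resid @ resid / (N - p - 1))  # unbiased sigma estimate

j = 2                                     # test H0: beta_j = 0
t_stat = beta_hat[j] / (sigma_hat * np.sqrt(XtX_inv[j, j]))
print(t_stat)  # compare with +/- t*, the (1 - alpha/2) quantile of t(N-p-1)
```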
https://socratic.org/questions/what-are-the-important-points-needed-to-graph-f-x-x-2-2x-1
# What are the important points needed to graph f(x)= -x^2+2x+1?

Nov 1, 2015

You need the x and y intercepts and the vertex of the graph

#### Explanation:

To find the x-intercepts, set y = 0, so $- {x}^{2} + 2 x + 1 = 0$, which is equivalent to ${x}^{2} - 2 x - 1 = 0$

This does not factorise over the integers, so use the quadratic formula: $x = \frac{2 \pm \sqrt{4 + 4}}{2} = 1 \pm \sqrt{2}$

So the graph crosses the x-axis at $x = 1 - \sqrt{2} \approx - 0.41$ and $x = 1 + \sqrt{2} \approx 2.41$

To find the y intercept, set x = 0

So y = 1

This means that the graph crosses the y-axis at y = 1

The vertex lies at $x = - \frac{b}{2 a} = - \frac{2}{2 \times \left(- 1\right)} = 1$, and $y = f \left(1\right) = - 1 + 2 + 1 = 2$, so the vertex is at $\left(1 , 2\right)$

Because the coefficient of ${x}^{2}$ is negative, the parabola opens downward, and it looks like this

graph{-x^2+2x+1 [-5, 5, -5, 5]}
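A quick numerical check of these points (my own addition, not part of the original answer):

```python
import numpy as np

coeffs = [-1, 2, 1]             # f(x) = -x^2 + 2x + 1
print(np.roots(coeffs))         # x-intercepts: 1 +/- sqrt(2)

a, b, c = coeffs
x_v = -b / (2 * a)              # vertex x-coordinate
y_v = a * x_v**2 + b * x_v + c
print(x_v, y_v)                 # vertex (1, 2); the y-intercept is c = 1
```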
https://pythoninchemistry.org/ch40208/working_with_data/monte_carlo.html
# Exercise: Monte Carlo

Monte Carlo methods may be familiar from earlier years, where stochastic (random) processes are used to find, for example, the minimum energy position of some atom in space. Here we will look at using Monte Carlo to improve our model-fitting methods, in particular the quantification of the uncertainties on the model parameters. In the model fitting section, you were shown how the curve_fit function can get estimations of these inverse uncertainties (a fancy name for the parameter uncertainties in the model). However, these uncertainties are only estimates and are based on assumptions about the fitting process (the particular assumptions made are beyond the scope of this course). Therefore, we will use a stochastic method to more completely probe these uncertainties.

For this problem, we are going to look at the investigation of a mixture of organic species by IR spectroscopy. We have been given this data set, and are told it contains a mixture of toluene and benzyl alcohol.

Use the !head command to investigate the experimental data file and plot the data using the plt.errorbar function.

## Our model

This exercise aims to determine the relative concentration of toluene and benzyl alcohol that make up the mixture data that you have just plotted. To achieve this, we must define a model to describe this data. The transmittance data from the mixture ($$T_{\text{mix}}$$) is made up of transmittance data from toluene ($$T_{\text{t}}$$) and benzyl alcohol ($$T_{\text{b}}$$), and nothing else. Since we are interested in the relative concentration, we can describe this as a fraction of the composition, and since the transmittances are additive, we can define the following model:

$T_{\text{mix}} = cT_{\text{t}} + (1 - c)T_{\text{b}},$

where $$c$$ describes the composition of the mixture and is the single parameter in our model.

Write a function for the above model; c should be the first argument, as this is the variable parameter we are investigating. Other arguments should include the transmittance from the pure samples.

We don't yet have IR spectra for the pure samples that we can use in the above model. Files for these spectra are available here for toluene and here for benzyl alcohol.

Read in and plot each of the pure datasets.

## Interpolation

You will have noticed that the x-axis on each of the three plots created so far does not cover the same range. However, to accurately use the model outlined above, the transmittance values for all three IR spectra should be at the same wavenumbers. Therefore, we must interpolate the two pure datasets such that the wavenumber values are the same as for the mixture.

Interpolation is where we determine new data points within the range of a discrete set of known points. Essentially, we use what we know about the x- and y-values to determine the y-values for a different set of x-values. It is important that the new range of x-values is from within the existing range, or else we are extrapolating (which is often unscientific). In the three datasets that you have, the two pure samples both have broader ranges than the mixture. Therefore, we will use the x-values of the mixture and interpolate new y-values for each of the pure samples.

For interpolation, we can use the np.interp method. This method takes three arguments: x, the new x-axis; xp, the old x-axis; and fp, the old y-axis. It will return a new set of y-values.

Interpolate transmittance values for the two model IR spectra and plot these over the original data to check that they agree.
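A minimal sketch of the model and interpolation steps (the spectra below are synthetic stand-ins for the data files, and all variable names are my own choices):

```python
import numpy as np

# Hypothetical stand-ins for the three spectra; in the exercise these
# are read from the data files as (wavenumber, transmittance) pairs.
nu_t = np.linspace(400, 4000, 2000)
T_t = 1 - 0.5 * np.exp(-((nu_t - 3000) / 50) ** 2)
nu_b = np.linspace(350, 4050, 2200)
T_b = 1 - 0.4 * np.exp(-((nu_b - 3400) / 80) ** 2)
nu_mix = np.linspace(500, 3900, 1500)   # mixture axis (narrowest range)

def mixture_model(c, T_toluene, T_benzyl):
    """T_mix = c * T_t + (1 - c) * T_b, with c the toluene fraction."""
    return c * T_toluene + (1 - c) * T_benzyl

# np.interp(x, xp, fp): the old y-values fp re-evaluated on the new axis x.
T_t_i = np.interp(nu_mix, nu_t, T_t)
T_b_i = np.interp(nu_mix, nu_b, T_b)
T_50_50 = mixture_model(0.5, T_t_i, T_b_i)  # a 50:50 trial mixture
```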
## Optimisation

Now that we have the pure data read in and on the correct x-axis, we can test the model that we created earlier.

Generate the model transmittance data that would arise from a 50:50 mixture of the two components. Plot this data on top of the mixture data and see if it agrees visually.

Now that our model is working, we can use the methodology introduced previously to minimise the difference between our model and the data.

Write a chi-squared function and minimise this using the scipy.optimize.minimize function. This will give an optimised value for c.

## Sampling

Having found the optimised value for the concentration, we can now use a modification of a Monte Carlo process to sample the uncertainty in this concentration parameter. This methodology is called Markov chain Monte Carlo (MCMC); it involves starting from some value and then changing the value by some small random amount with each iteration. The fact that the next parameter value depends on the previous one makes the sampling a Markov chain, while the use of random perturbations is Monte Carlo.

In MCMC we start from an initial value, usually found by obtaining an optimised solution. This initial value is changed by some random amount ($$\delta$$), which is obtained from some defined step size ($$s$$) relative to the variable ($$v$$),

$\delta = Nsv,$

where $$N$$ is some random number, obtained from a normal distribution centred on 0 with a standard deviation of 1. In Python, $$N$$ can be obtained with the function np.random.randn().

We then determine if this random perturbation has improved the agreement with the data or not. If it has, we accept this new value for our variable ($$v + \delta$$) and perform another perturbation. If this perturbation does not improve agreement with the data, the new value is not immediately rejected; rather, it is only rejected if the probability of this transition ($$p$$) is less than some random number from 0 to 1 (this time we use np.random.random() to obtain such a number). The probability is found by

$p = \exp\Bigg(\frac{-\chi^2_{\text{new}} + \chi^2}{2}\Bigg),$

where $$\chi^2$$ is the original goodness of fit and $$\chi^2_{\text{new}}$$ is the goodness of fit after the perturbation. This means that it is possible for the agreement to get worse over time. However, the amount by which it can get worse is controlled by the probability. The result of this is that the values of our Markov chain that are accepted will describe the statistically feasible values for our parameter, given the uncertainty in the experimental measurements.

The algorithm for a typical MCMC sampling process is as follows (a sketch of one possible implementation is given after this list):

1. Create an empty list for the accepted values
2. Evaluate $$\chi^2$$ for the initial guess; typically this initial guess will be the optimised solution.
3. Perturb the parameter value ($$v + \delta$$)
4. Calculate $$\chi^2_{\text{new}}$$ for the perturbed value
5. Determine the probability of this transition
6. Check that $$p\geq R$$, where $$R$$ is a random number from 0 to 1; if the new $$\chi^2$$ is less than the old one, then $$p>1$$ and the move is always accepted
7. If true, update the values of $$v$$ and $$\chi^2$$, and append $$v$$ to the list of accepted values
8. Go to step 3 and repeat until the desired number of iterations has been achieved.

Write a function to perform the MCMC algorithm outlined above; this should take a number of iterations (this should be no more than 2000) and a step size as arguments, and return the list of accepted values.
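One possible implementation of the algorithm above (a sketch only; the chi_squared callable and the starting value v_init are assumed to come from the earlier optimisation step):

```python
import numpy as np

def mcmc(chi_squared, v_init, n_iterations, step_size):
    """Sample a parameter with the MCMC scheme described above.

    Returns the list of accepted parameter values.
    """
    accepted = []                                       # step 1
    v = v_init
    chi2 = chi_squared(v)                               # step 2
    for _ in range(n_iterations):
        v_new = v + np.random.randn() * step_size * v   # step 3
        chi2_new = chi_squared(v_new)                   # step 4
        p = np.exp((-chi2_new + chi2) / 2)              # step 5
        if p >= np.random.random():                     # step 6
            v, chi2 = v_new, chi2_new                   # step 7
            accepted.append(v)
    return accepted                                     # step 8: loop done
```

The accepted values can then be summarised with np.mean and np.std and visualised with plt.hist, as the next part of the exercise asks.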
Plot a histogram of the accepted values (using plt.hist); these should be normally distributed (you may need to google to see what this looks like). Vary the step size between 1 and 0.001 to get the most normally distributed data you can.

Using the statistical functions in NumPy, calculate the mean and standard deviation of this distribution.
https://undergroundmathematics.org/polynomials/r8701
Review question

# What if the remainder when we divide by $x-k$ is $k$?

Ref: R8701

## Question

1. Given that $f(x)=x^3+kx^2-2x+1$ and that when $f(x)$ is divided by $(x-k)$ the remainder is $k$, find the possible values of $k$.

2. When the polynomial $p(x)$ is divided by $(x-1)$ the remainder is $5$ and when $p(x)$ is divided by $(x-2)$ the remainder is $7$. Given that $p(x)$ may be written in the form $(x-1)(x-2)q(x)+Ax+B,$ where $q(x)$ is a polynomial and $A$ and $B$ are numbers, find the remainder when $p(x)$ is divided by $(x-1)(x-2)$.
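For part 1, a quick symbolic check is possible (my own sketch, assuming SymPy is available); by the remainder theorem, the remainder on division by $(x-k)$ is $f(k)$:

```python
import sympy as sp

x, k = sp.symbols('x k')
f = x**3 + k*x**2 - 2*x + 1

# Remainder theorem: dividing by (x - k) leaves remainder f(k).
remainder = sp.expand(f.subs(x, k))     # 2*k**3 - 2*k + 1
print(sp.solve(sp.Eq(remainder, k), k))
# Roots of 2k^3 - 3k + 1 = 0: k = 1 and k = (-1 +/- sqrt(3))/2.
```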
http://www.research.ibm.com/haifa/Workshops/PADTAD2005/abstracts.html
# A Kernel-based Communication Fault Injector for Dependability Testing of Distributed Systems

Roberto Jung Drebes, Gabriela Jacques-Silva, Joana Matos Fonseca da Trindade, and Taisy Silva Weber, Instituto de Informatica - Universidade Federal do Rio Grande do Sul

Software-implemented fault injection is a powerful strategy to test fault-tolerant protocols in distributed environments. In this paper, we present ComFIRM, a communication fault injection tool we developed that minimizes the probe effect on the tested protocols. ComFIRM explores the possibility of inserting code directly inside the Linux kernel at the lowest level of the protocol stack through the loading of modules. The tool injects faults directly into the message exchange subsystem, allowing the definition of test scenarios from a wide fault model that can affect messages being sent and/or received. Additionally, the tool is demonstrated in an experiment that applies the fault injector to evaluate the behavior of a group membership service under communication faults.

# Invited Talk: Testing of MPI Software

Dan Quinlan, Lawrence Livermore National Laboratory

Large-scale scientific applications using the Message Passing Interface (MPI) standard are often difficult to test. Systems at DOE laboratories commonly use thousands of processors, and emerging systems have tens of thousands of processors. The development environment for these systems is typically a small fraction of the whole machine, and tools are often difficult to use at larger sizes. The problem is that applications often exhibit timing issues, such as deadlocks and races, on the real execution environment that they did not show in the small development environment. Thus, a method that causes applications to exhibit the bugs in the simpler development environment is crucial. We present initial work on techniques that automatically or semi-automatically cause the manifestation of timing-related bugs in MPI applications. These techniques work with small processor counts and alleviate the need to test on the entire system. Specifically, we will present a technique to evaluate the execution of MPI programs under perturbations of their scheduling of message passing calls. We will demonstrate that this technique aids in testing problems that are often not easily reproduced when testing on small fractions of the machine. The technique builds upon $P^N$MPI, an extension of the MPI profiling interface that supports multiple layers of profiling libraries. We will also explore how tools that directly transform the source code could support testing and debugging MPI applications at reduced scale.

# Keynote: Checking Atomicity in Concurrent Java Programs

Scott D. Stoller, State University of New York at Stony Brook

Atomicity is a common correctness requirement for concurrent programs. It requires that concurrent invocations of a set of methods be equivalent to performing the invocations serially in some order. This is like serializability in transaction processing. Analysis of atomicity in concurrent programs is challenging, because synchronization commands may be scattered differently throughout each program, instead of being handled by a fixed strategy, such as two-phase locking. We are developing techniques for checking atomicity in concurrent programs, using static analysis, dynamic analysis (run-time monitoring), and novel combinations of them.
This talk surveys our research in this area, discusses trade-offs between different techniques, and describes our experience applying them to about 40 KLOC of Java programs. This is joint work with Rahul Agarwal, Amit Sasturkar, and Liqiang Wang.

# Detecting Potential Deadlocks with Static Analysis and Runtime Monitoring

Rahul Agarwal, Liqiang Wang, and Scott D. Stoller, State University of New York at Stony Brook

Concurrent programs are notorious for containing errors that are difficult to reproduce and diagnose. A common kind of concurrency error is deadlock, which occurs when a set of threads is blocked, each trying to acquire a lock held by another thread in that set. Static and dynamic (run-time) analysis techniques exist to detect deadlocks. Havelund's GoodLock algorithm detects potential deadlocks at run-time. However, it detects only potential deadlocks involving exactly two threads. This paper presents a generalized version of the GoodLock algorithm that detects potential deadlocks involving any number of threads. Run-time checking may miss errors in unexecuted code. On the positive side, run-time checking generally produces fewer false alarms than static analysis. This paper explores the use of static analysis to automatically reduce the overhead of run-time checking. We extend our type system, Extended Parameterized Atomic Java (EPAJ), which ensures absence of races and atomicity violations, with Boyapati et al.'s deadlock types. We give an algorithm that infers deadlock types for a given program and an algorithm that determines, based on the result of type inference, which run-time checks can safely be omitted. The new type system, called Deadlock-Free EPAJ (DEPAJ), has the added benefit of giving stronger atomicity guarantees than previous atomicity type systems.

Saddek Bensalem, Universite Joseph Fourier/Verimag, Grenoble, France, and Klaus Havelund, Kestrel Technology, Palo Alto, California, USA

This paper presents a dynamic program analysis algorithm that can detect deadlock potentials in a multi-threaded program by examining a single deadlock-free execution trace, obtained by running an instrumented version of the program. The algorithm is interesting because it can identify deadlock potentials even though no deadlocks occur in the examined execution, and therefore it scales very well, in contrast to more formal approaches to deadlock detection. It is an improvement of an existing algorithm in that it reduces the number of false positives (false warnings). The presented work complements a collection of algorithms for detecting potentials for various forms of data races in multi-threaded programs from error-free program runs. The paper describes an implementation and two case studies.

# Cross-Run Lock Discipline Checker for Java

Eitan Farchi, Yarden Nir-Buchbinder, and Shmuel Ur, IBM Haifa Research Lab

Avoiding deadlock is a major challenge in concurrent/parallel programming. The classical deadlock situation happens when a thread T1 has taken lock L1 and attempts to take (nestedly) L2, while another thread T2 has taken L2 and attempts to take L1. A well-known methodology to avoid deadlocks is lock discipline: when several locks need to be taken together (nestedly), they must be taken in a predefined order, and this order is to be shared among all threads in the system. Several tools, such as Java PathFinder and Microsoft's Driver Verifier, identify violations of lock discipline during test runtime. They only look within the scope of one process run.
If a cycle in the graph is caused by lock sequences from two different runs, then these tools will not reveal it. A test suite is often composed of many small tests, each in a different process, each activating only a few, short paths, so these tools are much less likely to reveal cycles than a tool that has a view of the entire history of runs. Overcoming this limitation is not trivial, since it is not obvious how to identify the same lock across different runs. We present a cross-run lock discipline checker. We associate a lock with a set of locations. At runtime, we trace lock operations (entrance to synchronized blocks). When the test suite has finished running, we have a set T of traces. An analyzer is run on T to create a partition of the code locations under a "same lock" relation, as follows:

1. Two locations l1 and l2 are "same-lock" if there exists a trace t in T and a lock object ID x such that both l1 and l2 appeared in t associated with x at least once.
2. "same-lock" is the transitive closure of (1).

The nested locking graph is created as in existing algorithms, except that the nodes are "same lock" classes. Alerts are given for cycles found in this graph. It has been shown that this algorithm can detect deadlocks in testing scenarios where existing tools cannot. (A code sketch of this procedure appears after the abstracts below.)

# Verification of the Java Causality Feature

Sergey Polyakov and Assaf Schuster, Department of Computer Science, Technion - Israel Institute of Technology

The Java Memory Model (JMM) formalizes the behavior of shared memory accesses in a multi-threaded Java program. Dependencies between memory accesses are acyclic, as defined by the JMM causality requirements. We study the problem of post-mortem verification of these requirements and prove that the task is NP-complete. We then argue that in some cases, the task may be simplified by tracing the actual execution order of read actions in each thread. Our verification algorithm has two versions: a polynomial version, to be used when the aforementioned simplification is possible, and a nonpolynomial version - for short test sequences only - to be used in all other cases. Finally, we argue that the JMM causality requirements could benefit from some fine-tuning. Our examination of causality test case 6 (presented in the public discussion of the JMM) clearly shows that some useful compiler optimizations - which one would expect to be permissible - are in fact prohibited by the formal model.

# Choosing Among Alternative Futures

Steve MacDonald and Jun Chen, School of Computer Science, University of Waterloo, and Diego Novillo, Red Hat, Inc.

Non-determinism is a serious impediment to testing and debugging concurrent programs. Such programs do not execute the same way each time they are run, which can hide the presence of errors. Existing techniques use a variety of mechanisms that attempt to increase the probability of uncovering error conditions by altering the execution sequence of a concurrent program, but do not test for specific errors. This paper presents some preliminary work on deterministically executing a multi-threaded program using a combination of an intermediate compiler form that identifies concurrent reaching definitions and aspect-oriented programming to control program execution. Specifically, the aspects allow a read of a shared variable to return any of the reaching definitions, where the desired definition can be selected before the program is run. As a result, we can deterministically run test cases.
This work is preliminary and many issues have yet to be resolved, but we believe this idea shows some promise.
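Returning to the Cross-Run Lock Discipline Checker abstract above: the two-pass procedure it describes can be sketched compactly. The following Python sketch is illustrative only (the actual tool is a Java analyzer, and the trace format here is a hypothetical simplification): each trace event records a lock object ID, the code location of the acquisition, and the locations of locks already held.

```python
# Minimal sketch of the cross-run lock-discipline check described above.
# Assumed trace format (hypothetical): each trace is a list of
# (lock_object_id, code_location, currently_held_locations) events.

class SameLock:
    """Union-find over code locations; locations seen with the same lock
    object ID in some trace are merged, giving the transitive closure."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def lock_discipline_alert(traces):
    sl = SameLock()
    for trace in traces:                      # pass 1: build "same lock" classes
        by_id = {}
        for lock_id, loc, _held in trace:
            by_id.setdefault(lock_id, []).append(loc)
        for locs in by_id.values():
            for other in locs[1:]:
                sl.union(locs[0], other)

    edges = set()                             # pass 2: nested-locking graph
    for trace in traces:
        for _lock_id, loc, held in trace:
            for h in held:                    # edge: held class -> acquired class
                edges.add((sl.find(h), sl.find(loc)))

    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)

    WHITE, GRAY, BLACK = 0, 1, 2              # cycle check over the classes
    color = {}
    def has_cycle(u):
        color[u] = GRAY
        for v in graph.get(u, ()):
            if color.get(v, WHITE) == GRAY or \
               (color.get(v, WHITE) == WHITE and has_cycle(v)):
                return True
        color[u] = BLACK
        return False
    return any(color.get(u, WHITE) == WHITE and has_cycle(u) for u in graph)
```

The union-find pass realizes the transitive closure defining "same-lock"; the cycle check then runs on classes rather than raw lock objects, which is what lets lock sequences from different runs combine into a single alert.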
2017-10-18 13:12:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.365179181098938, "perplexity": 1926.1741726081636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822966.64/warc/CC-MAIN-20171018123747-20171018143747-00846.warc.gz"}
https://docs.scikit-nano.org/generated/sknano.structures.compute_symmetry_operation.html
# sknano.structures.compute_symmetry_operation

sknano.structures.compute_symmetry_operation(*Ch, bond=None)[source]

Compute symmetry operation $$(\psi|\tau)$$.

The symmetry vector R represents a symmetry operation of the nanotube which arises as a screw translation: a combination of a rotation $$\psi$$ and a translation $$\tau$$. The symmetry operation of the nanotube can be written as:

$R = (\psi|\tau)$

Parameters:

*Ch : {tuple or ints}
Either a 2-tuple of ints or 2 integers giving the chiral indices of the nanotube chiral vector $$\mathbf{C}_h = n\mathbf{a}_1 + m\mathbf{a}_2 = (n, m)$$.

bond : float, optional
Distance between nearest-neighbor atoms (i.e., bond length). Must be in units of Å. Default value is the carbon-carbon bond length in graphite, approximately $$\mathrm{a}_{\mathrm{CC}} = 1.42$$ Å.

Returns:

(psi, tau) : tuple
2-tuple of floats: $$\psi$$ in radians and $$\tau$$ in Å.
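A short usage sketch (my own, not from the documentation; the chiral indices and the overridden bond length are arbitrary illustrative values):

```python
from sknano.structures import compute_symmetry_operation

# Chiral indices (n, m) = (10, 5), chosen arbitrarily for illustration.
psi, tau = compute_symmetry_operation(10, 5)
print(psi)   # rotation angle psi, in radians
print(tau)   # translation tau, in Angstroms

# The doc states bond defaults to the graphite C-C length (~1.42 Angstroms);
# presumably it can be overridden, e.g. for a strained lattice:
psi, tau = compute_symmetry_operation((10, 5), bond=1.44)
```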
2020-11-24 12:26:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7025386095046997, "perplexity": 3639.0738178792662}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141176256.21/warc/CC-MAIN-20201124111924-20201124141924-00537.warc.gz"}
http://curtis.ml.cmu.edu/w/courses/index.php/Controversial_events_detection
# Controversial events detection

## Comments

This is a neat idea. The main difficulty I see here is formalizing the task precisely. What does it mean for an event to be controversial, exactly? Part of the problem is that it's not perfectly clear what an "event" is. One suggestion would be to look at a topic-modeling approach, e.g. topics over time, to find topics with a short temporal span in social-media data. You might be able to combine this with sentiment around those topics in two different communities - e.g. using something like my MCR-LDA model. So one way to flesh out this idea would be to start with two topic models:

• MCR-LDA, to measure 'controversy' - you might be able to get predictions from Ramnath on his blog data, if the code's not ready to distribute yet. I would not completely commit to using twitter data exclusively, btw.
• TOT, to detect short-lived 'events' vs long-term topics.

Then write some inference code to combine the predictions and pick out "controversial events". The next stage would be working out a joint model (which you might not choose to do for the project). It's not obvious how you'd evaluate all this, however... maybe do some user labeling of final predictions like "this topic corresponds to a controversial event." These are just ideas - you might try and flesh out some other concrete idea instead. Good luck! --Wcohen 14:33, 10 October 2012 (UTC)

PS. There is also a one-person team working on a similar topic, you all should talk - it's User:Yuchen Tian --Wcohen 18:40, 10 October 2012 (UTC)

## Project idea

In our project, we propose to jointly detect events and the controversy surrounding them in the context of social media. For example, Christmas day is an event that receives the most attention around December 25th, while the Presidential debates occur once every four years. Controversy-wise, Christmas day is relatively one-sided, with most of the text mentioning it being relatively homogeneous. In contrast, the Presidential debates event will have obvious sides (supporting the different candidates). Our goal is not only to detect controversial events, but also to discover what the different sides are - both grouping the individuals associated with each faction and describing how each faction talks about the event differently. We propose to use a probabilistic graphical model to achieve our goals of learning these latent structures from the data without labeled training data.

## Formalizing the task

Event - In the context of social media, an event is a period of time where there is a "surge" in the amount of interest (i.e. blog posts, tweets, comments, etc.) surrounding the occurrence. We call this event controversial if, given the text surrounding the event, the nature of the discussions is highly non-homogeneous (or exhibits high entropy). Each side of this event can be grouped together into a small number of distinct factions. Thus, in our task, given a collection of social media documents over time, we seek to jointly infer the events that have occurred, as well as the controversy associated with them.

## A probabilistic model

Here's a sketch of the topic model that we are considering for our task. It is a variant of a topic model where each word is assumed to be jointly generated by an event and a faction. It is also similar to the topic-over-time model, in that we generate the time stamps for each document. A graphical plate diagram of our model will be up soon.
### Notation

$E$ - fixed number of events
$\theta_d$ - multinomial distribution of events specific to document $d$
$\phi_{e_{di}}$ - multinomial distribution of factions specific to event $e_{di}$
$\psi_{e_{di}}$ - the Beta distribution of time specific to event $e_{di}$
$w_{di}$ - the $i$th token in document $d$
$t_{di}$ - timestamp associated with the $i$th token in document $d$
$\eta^e, \eta^{e,f}, \eta^m$ - SAGE vectors, which are log-additive weights for each word in the vocabulary. We have one for each event, one for each combination of event and faction, and a background word distribution.

### Generative story

1. Draw $E$ multinomials, $\phi_e$, from a Dirichlet prior, one for each event $e$. This is the distribution over factions for each event that we have.
2. For each document $d$, draw a multinomial $\theta_d$ from a prior $\alpha$ (this prior could be Dirichlet or logistic normal); then for each word $w_{di}$ in document $d$:
   1. Draw an event $e_{di}$ from multinomial $\theta_d$;
   2. Draw a faction $f_{di}$ from multinomial $\phi_{e_{di}}$;
   3. Draw a word $w_{di}$ from a SAGE language model $p(w_{di} \mid e_{di}, f_{di}, \boldsymbol{\eta}) \propto \exp(\eta_w^{e_{di}} + \eta_w^{e_{di},f_{di}} + \eta_w^m)$;
   4. Draw a timestamp $t_{di}$ from Beta $\psi_{e_{di}}$.

### SAGE language model

To model the different effects of events and factions, we use a sparse additive generative (SAGE) model. In contrast to the popular Dirichlet-multinomial for topic modeling, which directly models lexical probabilities associated with each (latent) topic, SAGE models the deviation in log frequencies from a background lexical distribution. Applying a sparsity-inducing prior on the topic term vectors limits the number of terms whose frequencies diverge from the background lexical frequencies, thereby increasing robustness to limited training data. Also, in the case of our model, it eliminates the need for a switching variable to choose between event words and faction words.

### Logistic normal prior for events

Using a logistic normal prior for events will allow us to incorporate features (such as Twitter hashtags, blog post titles, comment counts, etc.) in a principled manner. Logistic normal priors have been used in prior work.

## Data and evaluation

We intend to experiment with two different sets of data:

1. A set of tweets collected over 12 weekends (Sep-Dec 2011)
2. Posts and comments from political blogs (relating to the presidential elections) in the year 2012

Over the 12 weekends from Sep-Dec, there are football games played every Sunday evening. Football games present an obvious way for us to evaluate the performance of our model. Each of these games qualifies as an event with a known time of occurrence. Additionally, we also know that there are at least two factions associated with each game (one set of fans for each team). One way of identifying factions would be to manually inspect the word vectors associated with the factions, identifying the teams that they are supporting.
Another option is to leverage the location metadata associated with each tweet. To identify factions with fan bases, we will compute the mean location (expressed as latitude and longitude) for each faction as the weighted average of words that draw from that faction, and then associate it with the geographically closest NFL market (in terms of great-circle distance). Also, significant events that occurred during this period are the 9/11 anniversary, Halloween, Thanksgiving, and Christmas. These events should have low entropy in the faction distribution of words within a document, which will serve as a reference for evaluating our model in terms of its ability to identify factions. Blog posts provide substantially more content per document. Since this is an election year, we hope to use data scraped from political blogs to qualitatively evaluate our model in its ability to pick up key election-year events (like debates, primaries, conventions, Todd Akin-like controversial remarks, etc.). Also, politics is one of the most contentious subjects, with much discussion and debate, from which we hope our model will be able to learn the factions.
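To make the generative story above concrete, here is a minimal forward-sampling sketch. It is an illustrative, assumption-laden toy rather than our inference code: the vocabulary size, the numbers of events and factions, the fixed Beta parameters, and the randomly drawn SAGE vectors are all placeholders, and a symmetric Dirichlet stands in for the proposed logistic normal prior.

```python
import numpy as np

rng = np.random.default_rng(0)
V, E, F = 1000, 5, 3            # vocabulary, events, factions (assumed sizes)

eta_m = rng.normal(0, 1, V)                 # background log-weights
eta_e = rng.normal(0, 0.5, (E, V))          # per-event SAGE deviations
eta_ef = rng.normal(0, 0.5, (E, F, V))      # per-(event, faction) deviations
phi = rng.dirichlet(np.ones(F), size=E)     # faction distribution per event
psi = [(2.0, 5.0)] * E                      # Beta(a, b) timestamp params per event

def sample_document(n_words, alpha=0.1):
    theta = rng.dirichlet(alpha * np.ones(E))   # stand-in for logistic normal
    words, times = [], []
    for _ in range(n_words):
        e = rng.choice(E, p=theta)              # draw event for this token
        f = rng.choice(F, p=phi[e])             # draw faction given the event
        logits = eta_m + eta_e[e] + eta_ef[e, f]  # SAGE: additive log-weights
        p = np.exp(logits - logits.max())
        words.append(rng.choice(V, p=p / p.sum()))
        times.append(rng.beta(*psi[e]))         # timestamp from the event's Beta
    return words, times

w, t = sample_document(50)
```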
2022-11-29 08:42:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 30, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3322610855102539, "perplexity": 1479.1819257710627}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710690.85/warc/CC-MAIN-20221129064123-20221129094123-00103.warc.gz"}
https://wiki2.org/en/Fibered_knot
# Fibered knot

Figure-eight knot is fibered.

In knot theory, a branch of mathematics, a knot or link $K$ in the 3-dimensional sphere $S^3$ is called fibered or fibred (sometimes Neuwirth knot in older texts, after Lee Neuwirth) if there is a 1-parameter family $F_t$ of Seifert surfaces for $K$, where the parameter $t$ runs through the points of the unit circle $S^1$, such that if $s$ is not equal to $t$ then the intersection of $F_s$ and $F_t$ is exactly $K$. For example: the unknot, trefoil knot, and figure-eight knot are fibered knots; the Hopf link is a fibered link.

Fibered knots and links arise naturally, but not exclusively, in complex algebraic geometry. For instance, each singular point of a complex plane curve can be described topologically as the cone on a fibered knot or link, called the link of the singularity. The trefoil knot is the link of the cusp singularity $z^2 + w^3$; the Hopf link (oriented correctly) is the link of the node singularity $z^2 + w^2$. In these cases, the family of Seifert surfaces is an aspect of the Milnor fibration of the singularity.

A knot is fibered if and only if it is the binding of some open book decomposition of $S^3$.

## Knots that are not fibered

Stevedore's knot is not fibered.

The Alexander polynomial of a fibered knot is monic, i.e. the coefficients of the highest and lowest powers of t are plus or minus 1. Examples of knots with nonmonic Alexander polynomials abound; for example the twist knots have Alexander polynomials $qt - (2q + 1) + qt^{-1}$, where q is the number of half-twists.[1] In particular the Stevedore's knot is not fibered.
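As a quick check of this criterion against the twist-knot formula (polynomials stated up to units, as usual for the Alexander polynomial): $q = 1$ gives

$\Delta(t) = t - 3 + t^{-1},$

the Alexander polynomial of the figure-eight knot, which is monic and therefore consistent with the figure-eight knot being fibered; $q = 2$ gives

$\Delta(t) = 2t - 5 + 2t^{-1},$

the Alexander polynomial of the Stevedore's knot, whose leading coefficient $2$ already shows that the Stevedore's knot cannot be fibered.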
2017-09-22 02:40:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 14, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7806459069252014, "perplexity": 1205.3643256025546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688158.27/warc/CC-MAIN-20170922022225-20170922042225-00427.warc.gz"}
https://moodle.org/mod/forum/discuss.php?d=358292&parent=1445387
## General help

### updating the number of sections of course page in Bulk

Re: updating the number of sections of course page in Bulk

There was a small error in the SQL, in WHERE name='numsections'. The correct one is as follows, to update the number of course sections for all courses to a specific value (15 in my case):

UPDATE mdl_course_format_options SET value=15 WHERE name='numsections' AND courseid<>1 AND format='weeks'

I agree with Howard; I don't recommend updating the database directly either. However, I needed a fast solution.
2017-09-20 12:00:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4395950138568878, "perplexity": 4261.43417698413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687255.13/warc/CC-MAIN-20170920104615-20170920124615-00106.warc.gz"}
https://par.nsf.gov/biblio/10159346
Absorbing–active transition in a multi-cellular system regulated by a dynamic force network

Collective cell migration in 3D extracellular matrix (ECM) is crucial to many physiological and pathological processes. Migrating cells can generate active pulling forces via actin filament contraction, which are transmitted to the ECM fibers and lead to a dynamically evolving force network in the system. Here, we elucidate the role of this force network in regulating collective cell behaviors using a minimal active-particle-on-network (APN) model, in which active particles can pull the fibers and hop between neighboring nodes of the network following local durotaxis. Our model reveals a dynamic transition as the particle number density approaches a critical value, from an "absorbing" state containing isolated stationary small particle clusters, to an "active" state containing a single large cluster undergoing constant dynamic reorganization. This reorganization is dominated by a subset of highly dynamic "radical" particles in the cluster, whose number also exhibits a transition at the same critical density. The transition is underlain by the percolation of "influence spheres" due to the particle pulling forces. Our results suggest a robust mechanism based on ECM-mediated mechanical coupling for collective cell behaviors in 3D ECM.

NSF-PAR ID: 10159346
Journal Name: Soft Matter
Volume: 15
Issue: 35
Page Range or eLocation-ID: 6938 to 6945
ISSN: 1744-683X
National Science Foundation

##### More Like this

1. Cells interacting over an extracellular matrix (ECM) exhibit emergent behaviors, which are often observably different from single-cell dynamics. Fibroblasts embedded in a 3-D ECM, for example, compact the surrounding gel and generate an anisotropic strain field, which cannot be observed in single-cell-induced gel compaction. This emergent matrix behavior results from collective intracellular mechanical interaction and is crucial to explain the large deformations and mechanical tensions that occur during embryogenesis, tissue development and wound healing. Prediction of multi-cellular interactions entails nonlinear dynamic simulation, which is prohibitively complex to compute using first principles, especially as the number of cells increases. Here, we introduce a new methodology for predicting nonlinear behaviors of multiple cells interacting mechanically through a 3D ECM. In the proposed method, we first apply Dual-Faceted Linearization to nonlinear dynamic systems describing cell/matrix behavior. Using this unique linearization method, the original nonlinear state equations can be expressed with a pair of linear dynamic equations by augmenting the independent state variables with auxiliary variables which are nonlinearly dependent on the original states. Furthermore, we can find a reduced order latent space representation of the dynamic equations by orthogonal projection onto the basis of a lower dimensional linear manifold within the […]

2. Many-body interactions in systems of active matter can cause particles to move collectively and self-organize into dynamic structures with long-range order. In cells, the self-assembly of cytoskeletal filaments is critical for cellular motility, structure, intracellular transport, and division.
Semiflexible cytoskeletal filaments driven by polymerization or motor-protein interactions on a two-dimensional substrate, such as the cell cortex, can induce filament bending and curvature leading to interesting collective behavior. For example, the bacterial cell-division filament FtsZ is known to have intrinsic curvature that causes it to self-organize into rings and vortices, and recent experiments reconstituting the collective motion of microtubules driven by motor proteins on a surface have observed chiral symmetry breaking of the collective behavior due to motor-induced curvature of the filaments. Previous work on the self-organization of driven filament systems has not studied the effects of curvature and filament structure on collective behavior. In this work, we present Brownian dynamics simulation results of driven semiflexible filaments with intrinsic curvature and investigate how the interplay between filament rigidity and radius of curvature can tune the self-organization behavior in homochiral systems and heterochiral mixtures. We find a curvature-induced reorganization from polar flocks to self-sorted chiral clusters, which is modified by filament flexibility. […]

3. Abstract The transport of particles and fluids through multichannel microfluidic networks is influenced by details of the channels. Because channels have micro-scale textures and macro-scale geometries, this transport can differ from the case of ideally smooth channels. Surfaces of real channels have irregular boundary conditions to which streamlines adapt and with which particles interact. In low-Reynolds number flows, particles may experience inertial forces that result in trans-streamline movement and the reorganization of particle distributions. Such transport is intrinsically 3D and an accurate measurement must capture movement in all directions. To measure the effects of non-ideal surface textures on particle transport through complex networks, we developed an extended field-of-view 3D macroscope for high-resolution tracking across large volumes ($25\,\text{mm} \times 25\,\text{mm} \times 2\,\text{mm}$) and investigated a model multichannel microfluidic network. A topographical profile of the microfluidic surfaces provided lattice Boltzmann simulations with a detailed feature map to precisely reconstruct the experimental environment. Particle distributions from simulations closely reproduced those observed experimentally and both measurements were sensitive to the effects of surface roughness. Under the conditions studied, inertial focusing organized large particles into an annular distribution that limited their transport throughout the network while small particles were transported uniformly to […]

4. Abstract Active particle systems can vary greatly from one-component systems of spheres to mixtures of particle shapes at different composition ratios. We investigate computationally the combined effect of anisotropy and stoichiometry on the collective behavior of two-dimensional active colloidal mixtures of polygons. We uncover three emergent phenomena not yet reported in active Brownian particle systems. First, we find that mixtures containing hexagons exhibit micro-phase separation with large grains of hexagonal symmetry. We quantify a measurable, implicit 'steric attraction' between the active particles as a result of shape anisotropy and activity.
This calculation provides further evidence that implicit interactions in active systems, even without explicit attraction, can lead to an effective preferential attraction between particles. Next, we report stable fluid clusters in mixtures containing one triangle or square component. We attribute the fluidization of the dense cluster to the interplay of cluster-destabilizing particles, which introduce grain boundaries and slip planes into the system, causing solid-like clusters to break up into fluid clusters. Third, we show that fluid clusters can coexist with solid clusters within a sparse gas of particles in a steady state of three coexisting phases. Our results highlight the potential for a wide variety of behavior to be […]

5. Abstract Vasculogenesis is the de novo formation of a vascular network from individual endothelial progenitor cells, occurring during embryonic development, organogenesis, and adult neovascularization. Vasculogenesis can be mimicked and studied in vitro using network formation assays, in which endothelial cells (ECs) spontaneously form capillary-like structures when seeded in the appropriate microenvironment. While the biochemical regulators of network formation have been well studied using these assays, the role of mechanical and topographical properties of the extracellular matrix (ECM) is less understood. Here, we utilized both natural and synthetic fibrous materials to better understand how physical attributes of the ECM influence the assembly of EC networks. Our results reveal that active cell-mediated matrix recruitment through actomyosin force generation occurs concurrently with network formation on Matrigel, a reconstituted basement membrane matrix regularly used to promote EC networks, and on synthetic matrices composed of electrospun dextran methacrylate (DexMA) fibers. Furthermore, modulating physical attributes of DexMA matrices that impair matrix recruitment consequently inhibited the formation of cellular networks. These results suggest an iterative process in which dynamic cell-induced changes to the physical microenvironment reciprocally modulate cell behavior to guide the formation and stabilization of multicellular networks.
2023-02-08 23:55:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38101741671562195, "perplexity": 3162.3579247289626}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500983.76/warc/CC-MAIN-20230208222635-20230209012635-00348.warc.gz"}
https://www.spiderflock.com/posts/sscanf-fun.html
Sscanf Fun

23 January 2023

It is always surprising how well the C library holds together when you want to do certain tasks. It is ancient, unsafe (well, let's call that sharp) and has evolved over a period of time, giving it plenty of opportunity to pick up the many quirks it has. However, it is small-ish and mostly fits in your head. One set of functions I never used for a long time were the scanf functions. This week I learnt a new trick.

I was parsing a fairly simple CSV file that contains lines like this:

tree.png,38,48,2,2,0,0,38,48

It is a description of a texture region on a sprite sheet for a small game engine I am working on. Now I could split this up based on the commas. Heck, I could use strtok for that :) OK, I mean strtok_r, but really it looks like a job for sscanf.

The later parts are easy, but the first element is the name of the original file, and really I wanted to extract that without the .png to avoid the extra step of having to remove it. I wanted to read into a string the first part of the line until I hit a .. This is actually doable using sscanf, and here it is.

char name[1024];
const int num_read = sscanf(line.begin, "%1023[^.].png, %f,%f,%f,%f,%f,%f,%f,%f", name, &x_size, &y_size, &x_pos, &y_pos, &x_off, &y_off, &x_orig, &y_orig);

The part that was new to me was the [^.] part of %1023[^.], where the 1023 says read up to 1023 characters and the [...] says what characters I want to read. If you are interested in only digits and dashes you would write [1234567890-]. The ^ is a modifier that, if used straight after the [, changes the meaning to read anything not in the []. Our expression can now be read as: read up to 1023 characters, stopping when we hit a ..

This was all new to me, but it appears to have been in C, and hence C++, for quite a long time. While the 1023 limit does worry me a bit, if you are willing to go off standard then you can supply an a modifier that will allocate the string, making it possible to read an unknown number of characters. Of course, you have to remember to deallocate the char array when you are done. That was another new discovery.

I am not suggesting this is the best way to parse CSV in C or C++, as plenty of libraries exist. Actually, you will probably have more fun using a different language to make a game - use Godot, Unity or Unreal. I just find I kind of enjoy writing in a subset of C++ that is very close to C. I quite like C really and may just port it to that. I also enjoy Rust and many other programming languages, so really just do what you want. After all, if you are making small games it probably doesn't matter anyway.
2023-03-30 23:29:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2800922095775604, "perplexity": 974.964602627648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949506.62/warc/CC-MAIN-20230330225648-20230331015648-00158.warc.gz"}
http://scholarpedia.org/article/User:Eugene_M._Izhikevich/Proposed/Smale-Williams_attractor
# User:Eugene M. Izhikevich/Proposed/Smale-Williams attractor

Dr. Robert F. Williams accepted the invitation on 29 January 2007 (self-imposed deadline: 29 February 2007).

This article will briefly cover: attractors with a hyperbolic structure in the sense of Smale (possible working name: SW attractors). Originated with the 1967 paper of Smale and an article by me, also in 1967, entitled 'One dimensional non-wandering sets', in which 1-dimensional SW attractors were characterized. This was extended to higher dimensions with the paper 'Expanding Attractors', where this concept was introduced in 1974. It has recently (1998, Anderson-Putnam) been extended to include 'tiling spaces.'

\usepackage{amsmath,amssymb}
\usepackage{epsf}
%\usepackage{dvips}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newtheorem{cor}{Corollary}
\newtheorem{con}{Conjecture}
\newtheorem{defn}{Definition}
\newtheorem{lem}{Lemma}
\newtheorem{prop}{Proposition}
\newtheorem{rem}{Remark}
\newtheorem{thm}{Theorem}
%\newtheorem{stand}[stand]{Assumption}
\def \ted{{\mathbb T}}
\def \ded{{\mathbb D}}
\def \red{{\mathbb R}}
\def \zed{{\mathbb Z}}
\def \ned{{\mathbb N}}
\def \l{\ell}
\def \led{\mathcal{L}}
\def \ced{\mathcal{C}}
\def \bed{\mathcal{B}}
\begin{document}
\title{Smale-Williams attractors}
\author{R. F. Williams}
\maketitle

\section{Introduction}

\section{Contents}
\begin{enumerate}
\item definition of attractors, and SW attractors
\item example: dyadic solenoid
\item example: DA
\item show how they collapse to branched manifolds
\item branched manifolds
\item Theorem: 1-dimensional SWA's arise in this manner, where $g$ satisfies 1, 2, 3
\item commutative diagram (big)
\item 1-D tiling spaces
\item Expanding attractors
\item Anderson-Putnam theorem
\item an unsolved conjecture
\end{enumerate}

A Smale-Williams attractor (SWA) for a dynamical system is 1) an attractor for the system which 2) has a hyperbolic structure on its chain recurrent set, and 3) is neither a sink nor a periodic orbit. The dyadic solenoid, though long known in many branches of mathematics as a space (even a group), was described by Smale as an attractor in lectures in the mid 60's \cite{Sm}. (The Lorenz attractor, seemingly an attractor by computer experiments, attracted little attention before J. Guckenheimer published ``A strange strange attractor'' in 1975.) Williams, published 1967, found a classification of all SWA which, like the solenoid, are 1-dimensional. Though 'fractal' (a word introduced several years later), SWA are determined by a simple map: for the dyadic solenoid, the simple map is the 2-to-1 map $g(z) = z^2$, which wraps the circle around itself twice. The role of the circle, a 1-manifold, is enlarged to branched 1-manifolds, or 'train tracks'. Branched 1-manifolds have a (smooth) structure; endomorphisms may be smooth; as usual, an immersion $g \colon K \to K$ is a differentiable map with derivative $dg$ nonsingular at each point; and a map $g$ is expanding if $|dg| > 1$ at each point.
Some examples and figures.

For the classification, one has first a map $g \colon K \to K$, where $K$ is a branched 1-manifold and $g$ is an immersion satisfying three conditions: 1) $g$ expands (local) distances; 2) the chain recurrent set of $g$ is all of $K$; and 3) each point of $K$ has a neighborhood $U$ such that $g^N(U)$ has no branches, for some $N$. To pass from $g$ to the fractal attractor, one uses a standard trick in mathematics, sometimes called 'the universal extension', $\hat{g} \colon \hat{K} \to \hat{K}$. Here the space $\hat{K}$ is the inverse limit of the sequence $K \xleftarrow{\;g\;} K \xleftarrow{\;g\;} K \xleftarrow{\;g\;} \cdots$, and $\hat{g}$ is the induced shift map.

\begin{thm} An attractor $\Lambda$ of a diffeomorphism $f$ is a S-W attractor if and only if there exist a branched 1-manifold $K$ and an immersion $g \colon K \to K$ satisfying the conditions 1-3 above, such that $f|_\Lambda$ is topologically conjugate to $\hat{g}$. \end{thm}

All there is at this date: 3-5-2010.

\begin{thebibliography}{99}
\bibitem[AP]{AP} Anderson, J.E. and Putnam, I.F., Topological invariants for substitution tilings and their associated $C^*$-algebras, Ergodic Theory Dynam. Systems 18 (1998), 509-537.
\bibitem[FJ]{FJ} Farrell, F.T. and Jones, L.E., New attractors in hyperbolic dynamics, Journal of Differential Geometry 15 (1980), no. 1, 107-133.
\bibitem[G]{G} Guckenheimer, J., A strange strange attractor.
\bibitem[R]{R} Robinson, C., Dynamical Systems: Stability, Symbolic Dynamics, and Chaos, CRC Press, 1995.
\bibitem[Sm]{Sm} Smale, S., Differentiable dynamical systems, Bull. Amer. Math. Soc. 73 (1967), 747-817.
\bibitem[Wen]{Wen} Wen, L.
\bibitem[W1]{W1} Williams, R.F., One dimensional non-wandering sets, Topology 6 (1967), 473-487.
\bibitem[W3]{W3} Williams, R.F., Expanding attractors, Institut des Hautes \'Etudes Scientifiques Publ. Math. no. 43 (1973), 169-203.
%\bibitem[W4]{W4} Williams, R.F., The ``DA'' maps of Smale and structural stability, Global Analysis, vol. XIV of Amer. Math. Soc. Proc. of Symposia in Pure and Appl. Math., 14 (1970), 239-334.
\end{thebibliography}

\noindent
\author{R.F. Williams\\
Department of Mathematics\\
The University of Texas at Austin\\
Austin, TX 78712 U.S.A.}
\end{document}
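A standard concrete instance of this construction: take $K = S^1$ and $g(z) = z^2$. Then

$\hat{K} = \{(z_0, z_1, z_2, \dots) : z_i \in S^1,\ z_{i-1} = z_i^2\}, \qquad \hat{g}(z_0, z_1, z_2, \dots) = (z_0^2, z_0, z_1, \dots),$

and $(\hat{K}, \hat{g})$ is the dyadic solenoid of the introduction.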
2018-12-15 01:59:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9837358593940735, "perplexity": 7124.929804644521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826686.8/warc/CC-MAIN-20181215014028-20181215040028-00083.warc.gz"}
https://proxieslive.com/tag/test/
## How To Split Test For Profitable Campaigns for $1

Learn How to Set Up the Right Split Testing and Make More Profit from It!

Split testing is a method for comparing two different versions of a landing page to determine which version performs better. Split testing should NEVER be a one-time experiment. It's important to keep running new split tests, so you can continue to improve your sales and profits. Setting up a split test is very easy to do. You simply need to determine what element you wish to test on your landing page first. Always be sure you only test one element at a time for your split test. Otherwise, you won't be sure which changes you made resulted in a specific improvement. For example, start by testing the headline. Using your current headline as the control, duplicate your landing page and create a new headline variation for the test. The headline is the single most important element to test and can result in HUGE improvements in your conversion rate. Other important elements you can test one by one are your landing page layout, website colors, images, call to action, price, guarantee, etc. The list goes on and on, but those are some important ones to consider first. In order to gain statistical confidence in your test, it's advised that you send at least 300 unique visitors to each landing page. Some marketers even prefer a higher baseline number, such as 500 unique visitors to each landing page. However, it should be noted that some split tests may produce an extreme difference in the results, and if one page is performing very poorly, then you may consider ending the split test early. Nobody likes to waste traffic. If you have a clear winner early, then you may choose to end the split test, so you can maximize your ROI. This is especially true if you're using paid advertising to generate traffic. Once your split test has achieved statistical confidence, it's time to analyze your results. The essential metrics you'll be tracking and analyzing are unique visitors, conversions and conversion rate.

by: jordanng
Created: —
Category: Tutorials & Guides
Viewed: 191

## Does Tarjan's algorithm fail for this test case?

References: GFG and Tushar Roy's tutorial

The test case is: there are 3 vertices 1, 2, 3, with edges (source-destination): 1-2, 1-3, 2-3.

As DFS proceeds from 1 to 2 and then to 3, the discovery and low times are:

1 discT:0, lowT:0;
2 discT:1, lowT:1;
3 discT:2, lowT:2;

Since 2's disc time is less than 3's low time, 2 becomes an articulation point by the theorem, which should NOT be the case. Am I doing something wrong? Kindly explain.
Below is my dfs function-> public void dfs(){ ArrayDeque<vertex> st=new ArrayDeque<>(); st.push(vertexList.get(0)); int pt=1; vertexList.get(0).discTime=0; vertexList.get(0).lowTime=0; vertexList.get(0).visited=true; int numberOfverticesCovered=0; while(!st.isEmpty()){ vertex v=st.peek(); System.out.println("considering "+v.label); vertex p=getAdjacent(v); if(p==null) { System.out.println("left with no unvisited adjacent vertices "+v.label); if(v!=vertexList.get(0)){ LinkedList<edge> le=adjList.get(v.label-1); for (edge e : le) { if(v.discTime<=e.destination.lowTime) { artPoints.add(v); System.out.println("new articulation point found "+v.label+" for edge "+e.source.label+" and "+e.destination.label); System.out.println("disc time of "+v.label+" is "+v.discTime+" and low time is "+v.lowTime); System.out.println("disc time of adj "+e.destination.label+" is "+e.destination.discTime+" and low time is "+e.destination.lowTime); break; } v.lowTime=Math.min(v.lowTime, e.destination.lowTime); System.out.println("new low time of "+v.label+" is "+v.lowTime); } } numberOfverticesCovered+=1; st.pop(); } else { v.children+=1; // System.out.println("adding child "+p.label+" to parent "+v.label); p.discTime=pt; p.lowTime=pt; p.parent=v; st.push(p); pt+=1; } if(st.isEmpty()&& numberOfverticesCovered!=vertexList.size()){ for (vertex object : vertexList) { if(!object.visited) { object.discTime=pt; object.lowTime=pt; object.visited=true; st.push(object); break; } } } } if(vertexList.get(0).children>1 ) // put in check for back edge for the other children so that they are not connected to each other. { artPoints.add(vertexList.get(0)); System.out.println("added root as an articulation point and it has "+vertexList.get(0).children); } } } ## Getting error while accessing the room database in instrumentation test cases – Android I am using room database in my app. I have Login feature in my app, where after taking userId & password, on click of Login button I am calling API and storing the response data in room database table after getting a successful callback response from API. Now I want to write integration test cases for database data, where I am using mockWebServer to mock the API response and storing that in the room database table. And later I am fetching the DB values & testing whether those are stored properly or not but I am getting below error java.lang.IllegalStateException: Cannot access database on the main thread since it may potentially lock the UI for a long period of time. 
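For comparison, here is a minimal reference sketch of the textbook articulation-point algorithm, written in Python rather than the asker's Java. The key line is the back-edge case: low is updated from the neighbor's discovery time, so in the triangle above the back edge from 3 to 1 pulls 3's low time down to 0, and no articulation point is reported.

```python
from collections import defaultdict

def articulation_points(n, edges):
    """Tarjan-style articulation-point search on an undirected graph
    with vertices 0..n-1. Returns the set of articulation points."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    disc = [-1] * n          # discovery times; -1 means unvisited
    low = [0] * n            # low-link values
    aps = set()
    timer = 0

    def dfs(u, parent):
        nonlocal timer
        disc[u] = low[u] = timer
        timer += 1
        children = 0
        for v in adj[u]:
            if disc[v] == -1:                 # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # non-root u is an articulation point if no descendant
                # of v reaches strictly above u
                if parent is not None and low[v] >= disc[u]:
                    aps.add(u)
            elif v != parent:                 # back edge: use disc, not low
                low[u] = min(low[u], disc[v])
        if parent is None and children > 1:   # root rule
            aps.add(u)

    for s in range(n):
        if disc[s] == -1:
            dfs(s, None)
    return aps

# Triangle from the question (vertices relabelled 0..2):
print(articulation_points(3, [(0, 1), (0, 2), (1, 2)]))   # expected: set()
```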
On this line authentication = authenticationDao.getAuthInformation(); Below is my test cases code: @RunWith(AndroidJUnit4.class) @FixMethodOrder(MethodSorters.NAME_ASCENDING) public class TestLogin { @Rule public InstantTaskExecutorRule mInstantTaskExecutorRule = new InstantTaskExecutorRule(); @Rule public ActivityTestRule<LoginActivity> activityTestRule = new ActivityTestRule<>(LoginActivity.class, true, false); @Rule public MockWebServerTestRule mockWebServerTestRule = new MockWebServerTestRule(); @Mock Application application; LoginViewModel loginViewModel; AppDatabase appDatabase; AuthenticationDao authenticationDao; Authentication authentication; @Before public void setUp() throws Exception { MockitoAnnotations.initMocks(this); loginViewModel = new LoginViewModel(application); ApiUrls.TOKEN = mockWebServerTestRule.mockWebServer.url("/").toString(); appDatabase = Room.inMemoryDatabaseBuilder(InstrumentationRegistry.getContext(), AppDatabase.class).build(); authenticationDao = appDatabase.authenticationDao(); activityTestRule = new ActivityTestRule<>(LoginActivity.class, true, true); String fileName = "valid_login_response.json"; mockWebServerTestRule.mockWebServer.enqueue(new MockResponse() .setBody(RestServiceTestHelper.getStringFromFile(getContext(), fileName)) .setResponseCode(HttpURLConnection.HTTP_OK)); Intent intent = new Intent(); activityTestRule.launchActivity(intent); loginViewModel.userName.postValue("Elon"); loginViewModel.password.postValue("Musk123"); loginViewModel.getAuthenticateTokenData(); mockWebServerTestRule.mockWebServer.takeRequest(); } @Test public void a_testDbEntryOnValidResponse() { authentication = authenticationDao.getAuthInformation(); String issueTime = authentication.getIssueDateTime(); String expirationTime = authentication.getExpireDateTime(); String refreshToken = authentication.getRefreshToken(); Assert.assertEquals("Tue, 16 Apr 2019 10:39:20 GMT", issueTime); Assert.assertEquals("Tue, 16 Apr 2019 10:54:20 GMT", expirationTime); Assert.assertEquals("e2b4dfd7205587745aa3100af9a0b", refreshToken); } } Below is my AppDatabase class: @Database(entities = {Authentication.class, UserProfile.class}, version = 1, exportSchema = false) public abstract class AppDatabase extends RoomDatabase { private static AppDatabase INSTANCE; public static AppDatabase getAppDatabase(Context context) { if (INSTANCE == null) { INSTANCE = Room.databaseBuilder(context, AppDatabase.class, "myapp-database") .allowMainThreadQueries() .build(); } return INSTANCE; } public abstract AuthenticationDao authenticationDao(); public abstract UserProfileDao userProfileDao(); } What could be the issue? Is my test case right? Thank you in advance. ## How to make best performance to test CPU time of my java program I am using ubuntu 18.04 lts. Intellij Ultimate 2019 Oracle Jdk 1.8.0_211 I have to test N-Queen solver with java of single-thread.It consumes a lot memory usage.I want to get fastest result of CPU time. My benchmark machine is i7 4th gen 3.6ghz with 8 cores. Ram 8 Gb I have changed the jvm options in intellij to xmx7000mb.And what ways I need to do to optimize best CPU time ? ## Advanced SQL Injection Test [on hold] As an assignment for my cybersecurity course, I’ve been tasked to execute a penetration test on a login form which is bound to be vulnerable to SQL Injection and if we can’t penetrate it like that we must run a brute force attack on the form. I’ve been unsuccessful with checking for vulnerabilities, even when using sqlmap. 
On this line authentication = authenticationDao.getAuthInformation(); Below is my test cases code:

@RunWith(AndroidJUnit4.class)
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class TestLogin {
    @Rule
    public InstantTaskExecutorRule mInstantTaskExecutorRule = new InstantTaskExecutorRule();
    @Rule
    public ActivityTestRule<LoginActivity> activityTestRule = new ActivityTestRule<>(LoginActivity.class, true, false);
    @Rule
    public MockWebServerTestRule mockWebServerTestRule = new MockWebServerTestRule();
    @Mock
    Application application;
    LoginViewModel loginViewModel;
    AppDatabase appDatabase;
    AuthenticationDao authenticationDao;
    Authentication authentication;

    @Before
    public void setUp() throws Exception {
        MockitoAnnotations.initMocks(this);
        loginViewModel = new LoginViewModel(application);
        ApiUrls.TOKEN = mockWebServerTestRule.mockWebServer.url("/").toString();
        appDatabase = Room.inMemoryDatabaseBuilder(InstrumentationRegistry.getContext(), AppDatabase.class).build();
        authenticationDao = appDatabase.authenticationDao();
        activityTestRule = new ActivityTestRule<>(LoginActivity.class, true, true);
        String fileName = "valid_login_response.json";
        mockWebServerTestRule.mockWebServer.enqueue(new MockResponse()
                .setBody(RestServiceTestHelper.getStringFromFile(getContext(), fileName))
                .setResponseCode(HttpURLConnection.HTTP_OK));
        Intent intent = new Intent();
        activityTestRule.launchActivity(intent);
        loginViewModel.userName.postValue("Elon");
        loginViewModel.password.postValue("Musk123");
        loginViewModel.getAuthenticateTokenData();
        mockWebServerTestRule.mockWebServer.takeRequest();
    }

    @Test
    public void a_testDbEntryOnValidResponse() {
        authentication = authenticationDao.getAuthInformation();
        String issueTime = authentication.getIssueDateTime();
        String expirationTime = authentication.getExpireDateTime();
        String refreshToken = authentication.getRefreshToken();
        Assert.assertEquals("Tue, 16 Apr 2019 10:39:20 GMT", issueTime);
        Assert.assertEquals("Tue, 16 Apr 2019 10:54:20 GMT", expirationTime);
        Assert.assertEquals("e2b4dfd7205587745aa3100af9a0b", refreshToken);
    }
}

Below is my AppDatabase class:

@Database(entities = {Authentication.class, UserProfile.class}, version = 1, exportSchema = false)
public abstract class AppDatabase extends RoomDatabase {
    private static AppDatabase INSTANCE;

    public static AppDatabase getAppDatabase(Context context) {
        if (INSTANCE == null) {
            INSTANCE = Room.databaseBuilder(context, AppDatabase.class, "myapp-database")
                    .allowMainThreadQueries()
                    .build();
        }
        return INSTANCE;
    }

    public abstract AuthenticationDao authenticationDao();
    public abstract UserProfileDao userProfileDao();
}

What could be the issue? Is my test case right? Thank you in advance.

## How to get the best performance when testing CPU time of my Java program

I am using Ubuntu 18.04 LTS, IntelliJ Ultimate 2019, Oracle JDK 1.8.0_211. I have to test a single-threaded N-Queens solver in Java. It consumes a lot of memory, and I want to get the fastest CPU-time result. My benchmark machine is a 4th-gen i7 at 3.6 GHz with 8 cores and 8 GB RAM. I have changed the JVM options in IntelliJ to -Xmx7000m. What else do I need to do to optimize for the best CPU time?

## Advanced SQL Injection Test [on hold]

As an assignment for my cybersecurity course, I've been tasked to execute a penetration test on a login form which is bound to be vulnerable to SQL injection, and if we can't penetrate it like that, we must run a brute-force attack on the form. I've been unsuccessful with checking for vulnerabilities, even when using sqlmap. I was wondering if anyone could help me find a better tool or exploit. Thanks in advance.

## What test is made to deceive someone in Symbaroum?

As far as I have been able to find, there is no specific ruling made in the core rulebook on what attribute test is made when attempting to deceive someone. In the supplemental rules table, it is suggested that when trying to persuade a target, the active player should test persuasive<-resolute. In this same table, it is suggested that an attempt to confuse a target could be made with a test of resolute<-resolute. To me it would follow, based on the attribute descriptions and the existing situations, that a persuasive<-cunning test would be best for deception. Is there a distinct ruling on what a proper deception attempt should look like, or is my ruling an acceptable use of the persuasive<-cunning test?

## Why is the Selenium Wait in a C# test class no longer timing out

I have a small extension method to find an element using the WebDriverWait:

public static IWebElement FindElement(this IWebDriver driver, By by, int timeoutInSeconds)
{
    if (timeoutInSeconds > 0)
    {
        var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutInSeconds));
        return wait.Until(drv => drv.FindElement(by));
    }
    return driver.FindElement(by);
}

It has worked reliably previously, but it is no longer working correctly. The problem is that the timeout is no longer being applied. It will keep looking indefinitely, at least until the call to the web driver times out. This is a C# test program, and the driver is the Chrome driver. It is typically called using find-by-XPath, but we use other find types as well. For example:

var element = webDriver.FindElement(By.XPath(@"//h1[@class='m-t30'][contains(.,'My Profile')]"), 15);

Any idea why this is now failing?

## CORS issue detected by Burp, but test code PoC doesn't work

Here is the response after changing the Origin via Burp:

HTTP/1.1 200 OK
Server: nginx
Date: Thu, 13 Jun 2019 12:29:07 GMT
Content-Type: application/json;charset=UTF-8
Connection: close
Vary: Accept-Encoding
Access-Control-Allow-Origin: http://blablabla
Vary: Origin
Access-Control-Allow-Credentials: true
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Content-Security-Policy: default-src * data: 'unsafe-eval' 'unsafe-inline'
Referrer-Policy: no-referrer-when-downgrade
Content-Length: 2399

But the test code PoC, launched by my website, returns code 403: Forbidden.

<script>
function cors() {
  var xhttp = new XMLHttpRequest();
  xhttp.onreadystatechange = function() {
    if (this.readyState == 4 && this.status == 200) {
      document.getElementById("demo").innerHTML = alert(this.responseText);
    }
  };
  xhttp.open("GET", "https://target.com/info/", true);
  xhttp.withCredentials = true;
  xhttp.send();
}
</script>

## How can I test that the antennas in Macs are all connected and working?

I'm a technician repairing Macs every day, and I test the Macs after repair (laptops and desktops). So what I do is a functional test of the Mac by logging in to macOS and testing everything: camera, keyboard, WiFi, Bluetooth, etc. My doubt is: how can I test that the 2, 3 or 4 antennas (depending on the Mac model) are connected? I know that one of them does the Bluetooth, while usually all of them add up to the WiFi bandwidth. I know how to check the Bluetooth connection, but how can I check that the WiFi antennas are all connected and detected by the wireless card?
I thought I might run some WiFi speed test able to measure the download/upload speed to the router, but I've never used one. Any advice on a good WiFi speed test (free if possible)? Would this be the best way to check that the antennas are all connected and that the wireless card in the Mac is detecting all the antennas properly? Here's a post explaining how the antennas work (if that helps): What's the difference between the three wireless antennas in MacBook Pros? Any help much welcome.

## Conjectured primality test for numbers of the form $N=4 \cdot 3^n-1$

This is a repost of this question. Can you provide a proof or counterexample for the claim given below? Inspired by the Lucas-Lehmer primality test, I have formulated the following claim:

Let $$P_m(x)=2^{-m}\cdot((x-\sqrt{x^2-4})^m+(x+\sqrt{x^2-4})^m)$$. Let $$N=4 \cdot 3^{n}-1$$ where $$n\ge3$$. Let $$S_i=S_{i-1}^3-3S_{i-1}$$ with $$S_0=P_9(6)$$. Then $$N$$ is prime if and only if $$S_{n-2} \equiv 0 \pmod{N}$$.

You can run this test here. Numbers $$n$$ such that $$4 \cdot 3^n-1$$ is prime can be found here. I was searching for a counterexample using the following PARI/GP code:

CE431(n1,n2)=
{
for(n=n1,n2,
    N=4*3^n-1;
    S=2*polchebyshev(9,1,3);
    ctr=1;
    while(ctr<=n-2,
        S=Mod(2*polchebyshev(3,1,S/2),N);
        ctr+=1);
    if(S==0 && !ispseudoprime(N),print("n="n)))
}

P.S. A partial answer can be found here.
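For readers without PARI/GP, here is a rough Python transcription of the same test (a sketch of mine, not the author's code). It relies on the identities $P_m(x) = 2T_m(x/2)$ and $2T_3(S/2) = S^3 - 3S$ that the PARI snippet uses implicitly, where $T_m$ is the Chebyshev polynomial of the first kind:

```python
def conjectured_test(n):
    """Conjectured primality test for N = 4*3^n - 1, n >= 3."""
    N = 4 * 3**n - 1
    # S_0 = P_9(6) = 2*T_9(3); advance T_m(3) via T_{m+1} = 2*3*T_m - T_{m-1}.
    t_prev, t_cur = 1, 3          # T_0(3), T_1(3)
    for _ in range(8):            # up to T_9(3)
        t_prev, t_cur = t_cur, 6 * t_cur - t_prev
    S = (2 * t_cur) % N
    for _ in range(n - 2):        # S_i = S_{i-1}^3 - 3*S_{i-1}  (mod N)
        S = (S * S * S - 3 * S) % N
    return S == 0                 # claimed: True iff N is prime

# e.g. n = 3 gives N = 107 (prime) -> True; n = 4 gives N = 323 = 17*19 -> False
```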
2019-06-17 17:54:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9827373623847961, "perplexity": 58.39499358261776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998513.14/warc/CC-MAIN-20190617163111-20190617185111-00052.warc.gz"}
https://chat.stackexchange.com/transcript/message/34643266
12:00 AM yeah, but we can generalize this though to a general setting. 12:44 AM Hello @TedShifrin :)) Heya @Ali :) @TedShifrin I was reading about alternating algebras and that grading the tensor algebra and quotienting it can have their order swapped. Is this the most natural construction for it? I am not familiar with alternating algebras. Do you mean exterior algebra? Well, let m be an ideal of the k-dim tensor algebra over E where elements of m are of the form x1 x x2 ... xn but have repeated terms, then quotient out m from the k-dim tensor algebra over E, then direct sum for all k. I guess the exterior derivative is a natural cohomology thingy T^k(E)/m --> T^(k+1)(E)/m. Sorry for not typing in LaTeX, I can't see my keyboard too well. No, I'm not talking about the exterior derivative. I'm talking about the exterior algebra. That seems to be what you're doing. 12:53 AM Oh ok, I am confusing things. Why was I calling it an alternating algebra... sorry for that. yes, exterior algebra. Basically you skew-symmetrize tensors. You can do it either by quotient or by skew-symmetrizing (summing $\sum (-1)^\sigma \sigma$ applied to the tensor), probably with a $1/k!$ in there. Which one do you prefer? I generally do the latter, but I have certainly used the former too. Generally, a sub-thing is easier to work with than a quotient-thing. hmm, I can see the former being more difficult to actually work with. It depends what you want to do. Anyhow, not much point worrying about it until you confront a practical instance. Certainly for undergraduates I've done the former. Let me look back at notes for my grad geometry course and see what I did there. 12:59 AM Ok, thanks. Yeah, I used skew-symmetrization there. nice, I will read up in 2 weeks after exams. I can see in some algebraic cases the latter might be more useful. Also @Ted, have you ever seen something like this before? I'm not going to think about that. I'm trying to work on a diff geo question on main. 1:02 AM @MithleshUpadhyay Huh? I'm saying it is text copied from a PDF, due to the character style. I don't know where it's from. I'm just telling you what kind of file it was copy/pasted from. ok. @Ted I think I will go to bed now. anyways, thank you for the discussion :) Night, @Ali. 1:24 AM @TheGreatDuck, I got it, thanks. 1:44 AM hi chat. Anybody seen Steamy Root? not recently. rehi DogAteMy. rehi Duck. (waits to see if anyone responds) just as I thought ... invisible. 2:09 AM sorry, one handed. LOL, I won't ask. ... I know you were too busy ducking. drinking tea. how's it going? doing just fine ... winding down ... how're you? ready for school to start up again? yup, tomorrow it starts. mixed emotions, but in the end you're glad. 2:11 AM nah. I'm very happy. what math are you taking this term? abstract math, some kind of proofing class. What year are you? junior. OK ... I used to try to get my advisees to take that sophomore year. 2:16 AM to be fair, I am primarily a CS major. I added the math major last spring. Ohh, so it's half a repeat of the discrete math course you took for CS. But more proofs. also, I got a D+ in calc 1 the first time, so that probably threw things off slightly. @TedShifrin no, I actually skipped discrete math for math. The CS one counted for both. hello guys. I will also recommend to you the book I mentioned to mick. Houston's How To Think Like a Mathematician. 2:17 AM LOL ... you can always ignore me :) Professor Elmendorf actually has a book suggested for the class if we need extra material. I don't remember the name off the top of my head.
hi @Kushal. There are a lot of books for that course. I wasn't very happy with the one I used the last time I taught it. Apparently the class doesn't actually have required books. He just recommended it in the syllabus to anyone interested in extra reading (as in beyond the course content). @TedShifrin How are you doing? Duck, I was tempted to teach the course without a book, to break students of the habit of trying to copy things out of the textbook. But I never quite got there. Doing well, @Kushal, and you? 2:20 AM @TedShifrin to be fair, I hear the professor is a bit smarter than most professors. Just from what other students have said. Rumors are rumors. :p I certainly always tried to be a dumb professor :D @TedShifrin fine. Ok, I am looking for an introductory probability book with good exercises. Do you have something in mind? Besides Sheldon Ross, and William Feller. Well, I've heard he's made a few important advancements in group theory. And he also has a tendency to teach complex integration in calc 2, from what other students have said. Those are the two totally standard books, @Kushal. There's also one by two electrical engineers at MIT which is quite good. Hold on a sec. Duck, that is cool, but in the intro to proofs course it's part of the task not to blow away most of the students. it's not intro to proofs. It's abstract math. Discrete math was technically the intro to proofs class. unless I misunderstand? 2:22 AM I think you do. @Kushal: Bertsekas & Tsitsiklis. Google it. @TedShifrin Ok, let me see. Thanks. I heard the main difference between the two was that after predicate logic the math version did a bunch of proofs whereas we learned Haskell. I just kind of assumed that was where the intro to proof came in. Duck, as Associate Head, I had to examine a lot of course transfers. I'm just saying that the math version of "abstract math" is more proof oriented than "discrete math," but there are similar contents. 2:25 AM most of the students I know who have said they took the class (with him) said he wasn't too difficult to understand. If anything, he's just very strict about writing in complete sentences, which makes sense. If you haven't written proofs a fair amount, this is the right course for you, and maybe this guy will push you hard, which is good for you. I'm just saying, based on having taught it a few times, that most students slow the course down and need coddling. I'm totally supportive of complete sentences. Totally. @TedShifrin well, I'm actually not a course transferrer per se. I'm currently a double major. unless you count honors as a third major, but I tend to not mention that. Sounds.. self-inflating to mention it. :p I didn't say you were. I just said I dealt with hundreds of transfers and examined course syllabi from lots of places. ah, I see. no, honors is not a major. 2:27 AM (well, at my university it's actually an honors college) you're at one of the branches of Purdue? what gave it away? I looked up the prof's name, silly. Some schools have honors colleges, some have honors programs ... I think your anonymity is safe with Duck. 2:28 AM of course I don't care really, I was just curious how you knew. XD I thought maybe I mentioned it at one point. I hope you kick butt in the class. Nope. I try not to pry. of course. I just mean I probably brought it up at some point when talking to other people, so you might have just remembered. :p That class is tricky to teach. Research-oriented professors often don't connect with the difficulties mere students have.
To be fair, I actually have a pretty decent list of things I've written down that at some point I would want to know how to prove (if only to avoid more hand-waving, as people put it). BTW, your university website absolutely sucks. 2:31 AM granted, obviously irrelevant to the class. I just mean I am very much interested in learning how to prove things formally rather than just algebraic identities and whatnot. No, no, you won't be doing algebraic identities. @TedShifrin I know, I mean that my knowledge of proofing is mostly algebraic type things. You'll be doing different proof methods, induction, understanding functions and relations, some modular arithmetic or introductory analysis stuff. The sort of things I would prefer to prove (that I believe are true) are on the more abstract level. I just mean it's something I am definitely interested in learning. That's all. :p I mean I suppose I'm interested in math in general, but I just mean it's a class that interests me in particular. Stay enthusiastic, and be prepared to bug him in office hours. You may have questions and concerns that aren't appropriate for derailing the class :) 2:35 AM trust me. I'm used to that by now. Part of the honors thing is doing stacked courses, which are basically extra project things. So I'm used to meeting professors during office hours. :p :) Well, I don't mean to butt in. you're not at all. But I had some students who were very disruptive (even though I encouraged disruption to a large extent). I repeatedly had to tell them to stop and come see me. I'm not that type. I'm more the sort that doesn't ask a lot of questions because I usually understand fairly well. and ask maybe 2 or 3 questions to clarify precise things. At some point, if you want, I can send you my exams from that course, Duck. 2:37 AM maybe. No obligation :) bleh, it's just cold enough inside to be annoying. heya @Semiclassic :) @Semiclassical If you don't go outside at night you will not be annoyed by the cold and you won't even notice it. 2:39 AM I escaped Atlanta from the icemageddon that never happened and had 70º+ in San Diego today. So I know your pain. um. I am inside. XD I deduced that from the "inside," @Semiclassic. Duck, that costs lots of money. then grab a blanket. I should find a thicker pair of socks. 2:39 AM XD indeed. Thick socks are a must this time of year. OK, I'm disappearing. Time for dinner. Take care, Semiclassic and Duck. later @TedShifrin cya @TedShifrin btw. I know you sometimes acted annoyed when I discussed the weird piecewise constant integral stuff. Mostly my reason for looking at it is because it makes solving differential equations easier most of the time. So I just look at it as recreation. I figure a healthy understanding of valid faster methods is a good thing. Laplace transforms are good for solving equations and work, but one has to admit any opportunity to bypass them and use versions of the easier methods is a good one. So I just figure it's something nice and relaxing to do. If I improve my knowledge, great. If I merely spent time in the evening practicing math, also great. It's basically just for fun. I ask about it on here because I figure other people might enjoy it as an interesting 'different' way to try doing things. :-) @Semiclassical what are you up to on this fine evening? nothing interesting. 2:54 AM you could be watching the mgs2 speedrun @MikeMiller the what? 'the' mgs2 speedrun? idk what that is. i know of speedruns, but what is mgs2? Hello gentlemen & women.
I've asked three questions in the past three months or so about relatively advanced concepts in math (but not really research-level, since I don't understand these concepts well and am hoping for answers that clarify them). All seem Tumbleweed-worthy at this point. Are they too long? Am I asking them wrong? Are they off-topic for Math.SE? Or have they some other problem? 1. https://math.stackexchange.com/questions/1951853/bpsw-primality-test-selection-of-d-q-parameters 3 good evening 3:09 AM Maybe nobody answered them because not many people know the topics -1 Moved from Math Overflow due to not being regarded as a high degree of research Note: I am looking in particular at real valued/real input functions at all values regardless of differentiability. In this question a series of axioms or postulates governing calculus are proposed. Granted, that is... I decided to go all in @IwillnotexistIdonotexist Interesting username, by the way. @AkivaWeinberger I have been thinking about having the questions moved to MO to try for better luck. And yes, my name is my way of being self-contradictory. TheGreatDuck what are you trying to do @GFauxPas hm? you mean my question? 3:13 AM yes Yeah, maybe that's a good idea @IwillnotexistIdonotexist look at alternate versions of the derivative/antiderivative/differential equation rigorously Good luck on deciphering existence you want to have a system where a non-constant function can have a zero derivative? yes 3:14 AM :/ A nonexistent existence is valid if you think carefully about it just because you're curious? here's a good example why: try solving the differential equation y'' + 2floor(x)y' + floor(x)^2y = 0 let me guess. You'd use the Laplace transform? I'd question whether that has a solution at all I solve the auxiliary equation r^2 + 2r*floor(x) + floor(x)^2 = 0 3:16 AM when I see "floor" and "derivatives" in the same place I get suspicious which has a double solution -floor(x) (different equation I was thinking of) therefore the solution is of the form How would you use Laplace transforms, by introducing an auxiliary variable and treating it as a partial DE? $x = t$ or something? I didn't say I was using a Laplace transform you suggested I would use the LT I'm thinking if that's possible I'm saying you would. x is the dependent variable. 3:18 AM that's not the point y = C(x)e^{-floor(x)x} + D(x)xe^{-floor(x)x} is the solution where D and C are piece-wise constants and y is continuous consider $y'' = xy$ how would you solve that using LT? use dollar signs for mathjax I don't actually know how via Laplace transform Laplace transform is generally for constant coefficients 3:20 AM more generally, if $y$ is a function of $t$, the constants can be constant WRT $t$, I think. my diff eq is rusty y is dependent on x then it's not a Laplace transform situation afaik anyway then so what equation are we dealing with $y'' + 2floor(x)y' + floor(x)^2y = 0$ has the solution $y = ce^{-floor(x)x + \frac {floor(x)^2}{2} + \frac {floor(x)}{2}} + dxe^{-floor(x)x + \frac {floor(x)^2}{2} + \frac {floor(x)}{2}}$ 3:22 AM put it in dollar signs please I guess I'll manage How are you defining $\dfrac {\mathrm d \operatorname{floor}}{\mathrm dx}$ huh? what's the derivative of $e^{-\operatorname{floor}(x)x}$? it's a weak solution what does that mean it is undefined at integer points 3:24 AM then how are you differentiating it I'm not. That's the solution to the differential equation.
If it's a solution to a differential equation, it means it satisfies the differential equation. that's what a solution is you're telling me there exists a $y$ that satisfies $y'' + 2 \lfloor x \rfloor y' + \lfloor x \rfloor ^2 y = 0$ what is the integral of floor? hold up, I'm making a point 3:26 AM are you telling me that there is a function that satisfies that equation? I didn't finish making mine y' = floor(x) has the solution of the indefinite integral of floor, right? let's say yes well no, indefinite integrals aren't solutions the integral of floor is $x*floor(x) + floor(x)^2/2 + floor(x)/2$ let's make it $\displaystyle \int_0^t \lfloor x \rfloor \, \mathrm dx$ @Semiclassical agdq is on 3:28 AM @GFauxPas by your reasoning, how can the Heaviside function be in the solution to a differential equation? the Heaviside function has a distributional derivative at its point of discontinuity ...its point of discontinuity is a jump discontinuity which is why you need to change the definition of the derivative if you want to say something like nothing about its values makes it any more special than the jump discontinuities of floor $\dfrac {\mathrm du}{\mathrm dt} = \delta (t)$ are you referring to that? 3:30 AM but the Dirac delta function is an object that exists by proof, not by definition. yes isn't it something one derived as the derivative of Heaviside? well, you define it, and prove that it makes sense and that it does what you want it to do eh, there's more than one way to define it, but to define it that way seems very handwavy to me no, I mean I thought given nothing about the derivative but the limit definition one can actually derive the Dirac function. Is that wrong? Well, if you're okay with veering off topic are we? no, not really technically the whole solution existing was off topic Then yes, it is wrong 3:32 AM "look at alternate versions of the derivative/antiderivative/differential equation rigorously" ^what we were discussing at first. :p true but anyway, the Dirac delta cannot be defined using epsilon-delta I was saying it bypasses the Laplace transform sometimes. there are ways to make it work carefully take this differential equation instead: y'' + 2y' + y = u(x) where u is Heaviside you either have to change the way you think about functions, or the way you think about limits, or the way you think about derivatives, or something 3:34 AM I don't know for sure, but I believe the Laplace transform might solve it ah, fair enough that you can solve using Laplace transforms, sure what if you used the method of undetermined coefficients, instead? but if you want to consider $x \in \mathbb R$ instead of just $x > 0$ I've never tried undetermined coefficients with the Dirac delta, I guess it might work well, let me back up a second. I'm getting ahead of myself Let's solve the differential equation in the system where this is true: "if and only if a function is piecewise-constant does it have a derivative of 0 for all real numbers" I'm pretty sure based on undetermined coefficients with just a constant on the RHS the coefficient is that constant okay, let's say I believe such a system exists 3:37 AM ...why would you claim it doesn't? you'd have to win me over though because you're saying that it has derivatives at points where it's not continuous, and that those derivatives are zero you just won't let me finish, will you?
XD I said "okay, let's say I believe such a system exists" :P go on the roots of the auxiliary equation would be -1 and -1 no, I think you use a first degree polynomial in that case at least 3:40 AM therefore the solution would be $y = C(x)e^{-x} + D(x)xe^{-x} + h(x)$ no y'' + 2y' + y = 0 would be the homogeneous equation correct that's a second degree auxiliary equation this isn't the way I'm familiar with those aren't constant coefficients y'' + 2y' + y = 0 has constant coefficients ... so it will be $y_h = C_1 e^x + C_2 e^{-x}$, no? 3:42 AM yes, except the one flaw you missed okay, I agree which is? piecewise constants have derivative 0 in this system so wouldn't the coefficients be piecewise constant? :-) no, I don't think so um yeah $C_1$ and $C_2$ are being pulled from a scalar field, not from a space of functions 3:44 AM sigh no? I don't know, I've never done this we are saying $y$ can be expressed in terms of the basis $\{e^x, e^{-x}\}$ the c's come from the logic of antidifferentiation "all these functions fulfill the antiderivative because c has derivation 0 and addition rule" I prefer to look at it from a linear algebra perspective, I guess that's why we're not seeing eye to eye piecewise constants have derivative of 0, so c gets an upgrade constant coefficients means 3:45 AM in that system, it's all piecewise linear combinations we are solving an equation of the form $P(D_x)y = 0$ that's what constant coefficients means agree or disagree? Huh. Any degree $d$ map $S^2 \to S^2$ with $|d| > 1$ which is furthermore $C^1$ has infinitely many periodic points. That's kind of interesting. $P$ is a polynomial in $D_x$ that's what an ODE with constant coefficients means $P(D_x)y =\text{functions in the space}$ $y = C(x)e^{-x} + D(x)xe^{-x} + u(x)$ is the solution to the equation $y'' + 2y' + y = u(x)$ in the alternate system, where u is Heaviside. This is a fact. I know it to be so. :-) anyway. wanna know the important bit? sure do piecewise-constants form a scalar field? that's important 3:48 AM the "weak solution" en.wikipedia.org/wiki/Weak_solution to the equation is the solution in that alternate system such that y is continuous if they don't, it's not a useful system not familiar with this topic, reading @BalarkaSen weird, I don't know it much either. Basically, it's the solution that best works and we just ignore the issue at the point of discontinuity. I.e. what you said about the Dirac delta. better still solutions to differential equations are always continuous Wikipedia says to use weak solutions means you're looking for distributions I'm okay with that shrugs heard it in an answer to a question once. anyway 3:50 AM @MikeMiller The reference seems to be Shub-Sullivan solutions to a differential equation are always continuous?!? except when diverging or undefined :p it's a fact, feel free to think about it while I carry on except when undefined, boo 3:51 AM @GFauxPas Depends what you mean by "differential equation" and "solution". this means $y = C(x)e^{-x} + u(x)$ and $y = D(x)xe^{-x} + u(x)$ are each individual continuous solutions for some undetermined C and D if you're dealing with distributions and not just functions I'm not sure what a continuous distribution is small changes in $f$ result in small changes in $L[f]$? I don't even know what a distribution is we're solving the differential equation y'' + 2y' + y = u(x) I'm about to pretty much finish solving a distribution is like a function, but is never evaluated at points.
it only works by affecting other functions, such as through multiplication Like, let's say $L$ is a distribution on the space of all functions @GFauxPas That's assumed in the definition of distribution (though you should know what you mean by "small changes in $f$"). A continuous distributional solution just means... a distribution that's representable by a continuous function. 3:55 AM you couldn't ask me what $L(3)$ is, because that's not how we use distributions But you can ask what $L\sin(3)$ is therefore $ye^{x} = C(x) + u(x)e^{x}$ and $ye^{x} = D(x)x + u(x)e^{x}$ which means $f = C(x) + u(x)e^{x}$ and $f = D(x)x + u(x)e^{x}$ where f is continuous Mike, I was being imprecise because I wasn't sure what I meant which means C(x) = -u(x)c where c is an arbitrary constant, based on simple comparison of the limit at 0 Like, you never evaluate $\delta(x)$ at any $x$, it doesn't make sense @GFauxPas but the Dirac function isn't in the final solution 3:57 AM but you can multiply it by a function and then integrate it, like $\int_0^{2\pi} \delta(x-\pi/2)\sin x \, \mathrm dx$ I'm just saying what a distributional solution would be it's just a normal impulse differential equation it's used all the time in engineering, from what I hear, and is solvable these impulses and distributions were used by physicists before mathematicians were able to find a way for them to make sense when engineers used them they used them as an approximation, and it gave solutions that seemed to work, but mathematicians don't like approximate solutions being treated as exact do you argue that the Laplace transform method gives inaccurate results? for several reasons It should be true that as long as $u$ is continuous, the solution to that ODE is $C^1$. 3:59 AM all I am stating is that I can give the same solutions it would give without using it. I'm saying you have to know what you're saying you're saying a constant is a special type of constant, and a function is a special type of function, to which I say "okay, I'll hear you out. but be careful"
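Since the chat keeps returning to the step-forced equation $y'' + 2y' + y = u(x)$, here is a small SymPy sketch (my own addition, not from the transcript) verifying the classical pieces: the homogeneous solution for the double root $r = -1$, a particular solution valid for $x > 0$, and the $C^1$ matching at $x = 0$ that the "weak solution" discussion is pointing at:

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')

# Homogeneous solution: r^2 + 2r + 1 = 0 has the double root r = -1.
yh = (C1 + C2*x) * sp.exp(-x)

# Particular solution on x > 0 (where the Heaviside forcing equals 1),
# chosen so that y_p(0) = y_p'(0) = 0, making the glued solution C^1 at 0.
yp = 1 - sp.exp(-x) - x*sp.exp(-x)

L = lambda f: f.diff(x, 2) + 2*f.diff(x) + f
print(sp.simplify(L(yh)))       # -> 0   (valid for all x)
print(sp.simplify(L(yh + yp)))  # -> 1   (valid on x > 0)
print(yp.subs(x, 0), sp.diff(yp, x).subs(x, 0))  # -> 0 0  (C^1 matching at 0)
```

This is consistent with the claim quoted in the chat that for continuous forcing the solution is $C^1$: for the unit step, the solution is $C^1$ but its second derivative jumps at $x = 0$.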
2021-05-17 23:35:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7210939526557922, "perplexity": 1540.3113193379645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00104.warc.gz"}
http://physics.stackexchange.com/questions/19788/hilbert-space-and-lie-algebra-in-quantum-mechanics?answertab=active
Hilbert space and Lie algebra in quantum mechanics

We are looking for a publication or website that explains the Standard Model in terms of Hilbert space and Lie algebra. We are reading Debnath's Introduction to Hilbert Spaces and Applications and Iachello's Lie Algebras and Applications. Is there a book or website that combines the two approaches (Hilbert and Lie)? If they can't be combined, can you provide a link to articles that compare/contrast them? -

We wonder if you are referring to Quantum Field Theories in general, with emphasis on the viewpoint of representations of the Poincare group (a Lie group, which certainly admits a Lie algebra), or if you are specifically interested in the gauge group aspects of the Standard Model. –  Nikolaj K. Jan 21 '12 at 14:18

Thanks for responding. My wife and I are helping our son make a report to his high-school science club. QFT and gauge theory are valid topics. We'd like to summarize the main points of the SM as a QFT gauge theory and include references to Hilbert space (as a quantum physical system: states, observables/operators, transformations/dynamics), then show how Lie theory (a la boson realizations and fermion realizations in Iachello chapters 7-8) relates to the Hilbert space formulation of the SM. Otherwise we would outline how Hilbert space and Lie algebra cover different aspects of the SM. –  user7234 Jan 21 '12 at 14:52

High school? QFT and Lie algebras? I don't understand. How deep will such a science club project go? Are you and your wife mathematicians? If yes, then much Hilbert space structure follows from using compact Lie groups (for example as gauge groups like $SU(3)$) alone, and as far as QFTs are concerned, some features are summarized here, although this is quite far away from "applications like the Standard Model". –  Nikolaj K. Jan 21 '12 at 15:06

There is a book by Arnold Neumaier on Lie algebras in CM and QM: mat.univie.ac.at/~neum/ms/QML.pdf –  Vladimir Kalitvianski Jan 21 '12 at 18:06

Thanks for the reference to the Neumaier book. If anyone else knows of a book or website where they use a combination of Hilbert space and Lie theory in a discussion of the Standard Model, please post the title or link. Thanks. –  user7234 Jan 22 '12 at 13:29

First, a possibly unwelcome comment: you need more than Lie algebras to define the Standard Model's particle content and couplings. You need the representation theory of Lie groups, for a zillion different reasons, e.g.:

1. The difference between $\mathbb{R}$ and $U(1)$ -- which have the same Lie algebra -- is related to the existence of charge quantization.
2. The parity groups in the Standard Model -- generated by elements $C$, $P$, and $T$ which square to 1 -- play an important role in understanding the character of weak interactions, the difference between fermions and antifermions, and such.
3. You need the representation theory of the Poincare group $ISO(3,1)$, which really isn't the same thing as $ISO(4)$, to understand the classification of the different kinds of spinor fields that exist in nature.

Then, in an attempt to be helpful: Chapter 2 of Volume 1 of Weinberg's The Quantum Theory of Fields describes the classification of single-particle states in some detail. Also, Baez & Huerta's article http://arxiv.org/abs/0904.1556 has some nice exposition in it. -

I don't think there is any, for a reason: there is a paradigm shift from Quantum Mechanics to Quantum Field Theory. See http://physics.stackexchange.com/a/20387/6432 .
In Quantum Mechanics, all the dynamical variables are treated on an equal footing, and the number of particles is fixed. There is no annihilation or pair-creation. The Hilbert space is the space of states of, e.g., 27 electrons, neither more nor less. The observables are operators on that space. But in Quantum Field Theory, the number of particles is treated as an operator, there are creation and annihilation operators, and one switches to treating the state of a field as a function on space-time whose values are field observables. The observables are no longer studied as operating on the space of such functions, and researchers tend to ignore the Hilbert space aspects of it. See also Quantum Field Theory Variants for a discussion of QFT, and What's the exact connection between bosonic Fock space and the quantum harmonic oscillator? for a short discussion of how Fock space replaces, in QFT, the usual one-particle Hilbert space of QM, in order to allow the number of particles to change as some get annihilated. -

Thanks. I'll try to put it in my own words, according to what I understand from the Reality book by Roger Penrose (sections 20.5 and 26.6): there are physical systems (classical, QM and QFT), consisting of states, observables and dynamics. In QM states are organized in a Hilbert space H, observables are operators on H, and dynamics is/are defined in terms of a Lagrangian and an action S (an integral of the Lagrangian). In QFT, could you define the physical system? How is the Lagrangian defined as a function(al) of fields and derivatives of the fields? Are actions and Lie theory involved? –  user7234 Apr 10 '12 at 22:04

About QM you are right on the money. About QFT, there is such a diversity of possible approaches that it is hard to comment on your statement. The approach that Dr. Peter Morgan, quite an expert, likes is that the observables are abstract operators; they do not act on any particular space at all. The states are functions on the whole algebra of these operators: a state is a way of assigning a value to each observable. The dynamics is a one-parameter group operation on the algebra of operators, which can also be made an operation on the states, of course. Lie groups come in as symmetry groups of the whole algebra of operators. Weinberg does not like this approach. Another approach is that the states are operator-valued functions on space-time, where the operators in fact are concrete operators on a far-out Hilbert space. The Lie groups enter in as groups of symmetries on this space. But Lie algebras are not much used in either approach; in particular, the Lie algebra structure of the observables is not used. The Lie groups act on the configuration space: space-time plus spin plus colour plus ... the configuration variables. Again, the dynamics is a one-parameter group. –  joseph f. johnson Feb 12 '13 at 15:24, 15:28

QFT is just a mess....I never bothered to learn it...learn Stat Mech instead. –  joseph f. johnson Feb 12 '13 at 15:29

And, yes, the dynamics can be given by an action principle and a Lagrangian (except in the abstract operator algebra approach). The Lagrangian is a functional that acts on the operator-valued functions on space-time; it involves them, various ordered products of these operators, and some derivatives, too. The formulas look like, well, at least have a family resemblance to, the formulas from classical mechanics or quantum mechanics. –  joseph f. johnson Feb 12 '13 at 15:32
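As a concrete toy illustration of the Hilbert-space/Lie-algebra pairing discussed above (my own sketch, not from the thread): on the two-dimensional Hilbert space of a spin-1/2 particle, the angular momentum operators built from the Pauli matrices give a representation of the Lie algebra $su(2)$, and the commutation relations can be checked numerically:

```python
import numpy as np

# Pauli matrices: operators on the two-dimensional Hilbert space C^2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Spin operators J_k = sigma_k / 2 represent su(2): [J_a, J_b] = i eps_abc J_c.
Jx, Jy, Jz = sx / 2, sy / 2, sz / 2

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(Jx, Jy), 1j * Jz)  # [Jx, Jy] = i Jz
assert np.allclose(comm(Jy, Jz), 1j * Jx)  # [Jy, Jz] = i Jx
assert np.allclose(comm(Jz, Jx), 1j * Jy)  # [Jz, Jx] = i Jy
print("su(2) commutation relations hold on the spin-1/2 Hilbert space")
```

This is the simplest instance of the pattern the answers describe: the Hilbert space carries the states, while the Lie algebra (and the group it exponentiates to, here $SU(2)$) acts on that space through its representation.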
2014-03-11 04:22:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7598215341567993, "perplexity": 438.1187746454156}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011123461/warc/CC-MAIN-20140305091843-00067-ip-10-183-142-35.ec2.internal.warc.gz"}
https://socratic.org/questions/the-following-reaction-is-at-equilibrium-at-a-particular-temperature-h-2-g-i-2-g
# The following reaction is at equilibrium at a particular temperature: H_2(g) + I_2(g) -> 2HI(g). The [H_2]_eq = 0.012 M, [I_2]_eq = 0.15 M, and [HI]_eq = 0.30 M. What is the magnitude of K_c for the reaction?

Dec 10, 2015

$K_c = 50.$

#### Explanation:

Even without doing any calculation, you should be able to look at the values given for the equilibrium concentrations of the three chemical species that take part in the reaction and predict that $K_c$ will be greater than one.

As you know, the equilibrium constant tells you the ratio between the equilibrium concentrations of the products and the equilibrium concentrations of the reactants, all raised to the power of their respective stoichiometric coefficients.

In this case, notice that the concentration of the product, hydrogen iodide, $\text{HI}$, is bigger than the equilibrium concentrations of the two reactants, hydrogen gas, $\text{H}_2$, and iodine, $\text{I}_2$. Moreover, since the reaction produces twice as many moles of hydrogen iodide as the moles of hydrogen and iodine that take part in the reaction, you can expect $K_c > 1$.

For the reaction

$$\text{H}_{2(g)} + \text{I}_{2(g)} \rightarrow 2\,\text{HI}_{(g)}$$

the equilibrium constant takes the form

$$K_c = \frac{[\text{HI}]^2}{[\text{H}_2]\,[\text{I}_2]}$$

Plug in your values to get (the units of M² in the numerator cancel against M · M in the denominator)

$$K_c = \frac{(0.30)^2}{0.012 \times 0.15} = 50.$$

Indeed, $K_c > 1$ as initially predicted.
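A quick numerical check of the arithmetic (my own addition):

```python
HI, H2, I2 = 0.30, 0.012, 0.15   # equilibrium concentrations in mol/L
Kc = HI**2 / (H2 * I2)           # Kc = [HI]^2 / ([H2] * [I2])
print(Kc)                        # 50.0 (up to floating-point rounding)
```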
2019-04-23 06:33:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.828790545463562, "perplexity": 825.045578135502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578593360.66/warc/CC-MAIN-20190423054942-20190423080942-00366.warc.gz"}
https://ch.mathworks.com/help/nav/ref/insekf.stateparts.html
# stateparts

Get and set part of state vector in `insEKF`

## Syntax

```
part = stateparts(filter,stateName)
part = stateparts(filter,sensor,stateName)
stateparts(filter,stateName,value)
stateparts(filter,sensor,stateName,value)
```

## Description

`part = stateparts(filter,stateName)` returns the components of the state vector corresponding to the specified state name of the filter.

`part = stateparts(filter,sensor,stateName)` returns the components of the state vector corresponding to the specified state name of the specified sensor.

`stateparts(filter,stateName,value)` sets the components of the state vector corresponding to the specified state name of the filter to the specified value.

`stateparts(filter,sensor,stateName,value)` sets the components of the state vector corresponding to the specified state name of the specified sensor to the specified value.

## Examples

Create an `insAccelerometer` sensor object and an `insGyroscope` sensor object.

```matlab
acc = insAccelerometer;
gyro = insGyroscope;
```

Construct an `insEKF` object using the two sensor objects.

```matlab
filter = insEKF(acc,gyro);
```

Set the bias of the accelerometer to `[10 0 1]` m/s^2.

```matlab
stateparts(filter,acc,"Bias",[10 0 1])
```

Get the bias of the accelerometer via the sensor.

```matlab
accBias = stateparts(filter,acc,"Bias")
```
```
accBias = 1×3

    10     0     1
```

Get the bias of the accelerometer via the filter.

```matlab
accBias2 = stateparts(filter,"Accelerometer_Bias")
```
```
accBias2 = 1×3

    10     0     1
```

Set the bias of the accelerometer back to `[0 0 0]`.

```matlab
stateparts(filter,"Accelerometer_Bias",[0 0 0])
```

## Input Arguments

filter: INS filter, specified as an `insEKF` object.

stateName: Name of a part of the state for the filter or the sensor, specified as a string scalar or character vector. Use the `stateinfo` object function to find the names of state parts in the filter. Example: `"AngularVelocity"`. Example: `"Bias"`. Data Types: `char` | `string`

sensor: Inertial sensor, specified as one of the sensor objects used to construct the `insEKF` filter object.

value: Value for the filter state or sensor state part, specified as an N-element real-valued vector, where N is the number of elements in the state part. Example: `[.2 .3]`. Data Types: `single` | `double`

## Output Arguments

part: Part of the state vector, returned as an N-element real-valued vector, where N is the number of elements in the state part.

## Version History

Introduced in R2022a
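The bookkeeping behind a function like this is simple: a flat state vector plus a map from part names to index ranges. Here is a language-agnostic sketch of that idea in Python (purely illustrative; the class and part names are my own, and this is not the MathWorks implementation):

```python
import numpy as np

class NamedStateVector:
    """Flat state vector with named, sliceable parts (illustrative only)."""

    def __init__(self, parts):
        # parts: mapping of name -> length, e.g. {"Orientation": 4, "Accelerometer_Bias": 3}
        self.slices, start = {}, 0
        for name, length in parts.items():
            self.slices[name] = slice(start, start + length)
            start += length
        self.state = np.zeros(start)

    def stateparts(self, name, value=None):
        if value is None:                       # getter
            return self.state[self.slices[name]].copy()
        self.state[self.slices[name]] = value   # setter

sv = NamedStateVector({"Orientation": 4, "Accelerometer_Bias": 3})
sv.stateparts("Accelerometer_Bias", [10, 0, 1])
print(sv.stateparts("Accelerometer_Bias"))  # [10.  0.  1.]
```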
2022-09-29 10:18:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8118191957473755, "perplexity": 3901.298363103506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00489.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/jimo.2018107
# American Institute of Mathematical Sciences

October 2019, 15(4): 1517-1534. doi: 10.3934/jimo.2018107

## An interior point continuous path-following trajectory for linear programming

1 School of Science, Nanjing Audit University, Nanjing 211815, Jiangsu Province, China
2 Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong SAR, China

* Corresponding author: Li-Zhi Liao

Received January 2018. Revised March 2018. Published July 2018.

Fund Project: The work of Liming Sun was supported in part by the National Natural Science Foundation of China (Grant No. 11701287) and the Natural Science Foundation of Jiangsu Province (Grant No. BK20171071). The work of Li-Zhi Liao was supported in part by grants from the General Research Fund (GRF) of Hong Kong and the FRG of Hong Kong Baptist University.

In this paper, an interior point continuous path-following trajectory is proposed for linear programming. The descent direction in our continuous trajectory can be viewed as some combination of the affine scaling direction and the centering direction for linear programming. A key component in our interior point continuous path-following trajectory is an ordinary differential equation (ODE) system. Various properties, including the convergence in the limit for the solution of this ODE system, are analyzed and discussed in detail. Several illustrative examples are also provided to demonstrate the numerical behavior of this continuous trajectory.

Citation: Liming Sun, Li-Zhi Liao. An interior point continuous path-following trajectory for linear programming. Journal of Industrial & Management Optimization, 2019, 15 (4) : 1517-1534. doi: 10.3934/jimo.2018107

Figures:
- Transient behaviors of the continuous path of $x(t)$ and the objective function $c^Tx$ in Example 4.1 with starting point $x_0$
- Transient behaviors of the continuous path of $x(t)$ and the objective function $c^Tx$ in Example 4.1 with starting point $x_0^{'}$
- Transient behaviors of the continuous path of $x(t)$ and the objective function $c^Tx$ in Example 4.2 with starting point $x_0$
- Transient behaviors of the continuous path of $x(t)$ and the objective function $c^Tx$ in Example 4.2 with starting point $x_0^{'}$
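The abstract above describes a trajectory whose descent direction blends the affine-scaling and centering directions. As a rough illustration of one ingredient, here is the classical primal affine-scaling direction in Python; this is generic textbook material, not the authors' specific ODE system, and the function name is my own:

```python
import numpy as np

def affine_scaling_step(A, c, x, alpha=0.5):
    """One primal affine-scaling step for min c^T x s.t. Ax = b, x > 0.

    The continuous-time limit of iterations like this is the kind of
    ODE vector field that path-following trajectories are built from.
    """
    X2 = np.diag(x**2)                               # scaling by the current iterate
    y = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)    # dual estimate
    s = c - A.T @ y                                  # reduced costs
    d = -X2 @ s                                      # affine-scaling direction
    neg = d < 0                                      # step length keeping x > 0
    t = alpha * np.min(-x[neg] / d[neg]) if neg.any() else 1.0
    return x + t * d

# Tiny example: min x1 + 2*x2 s.t. x1 + x2 = 1, x >= 0 (optimum at x = (1, 0)).
A = np.array([[1.0, 1.0]])
c = np.array([1.0, 2.0])
x = np.array([0.5, 0.5])
for _ in range(20):
    x = affine_scaling_step(A, c, x)
print(x)  # approaches [1, 0]
```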
2021-01-16 06:15:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4613591134548187, "perplexity": 6052.84343932979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703500028.5/warc/CC-MAIN-20210116044418-20210116074418-00469.warc.gz"}
https://www.gamedev.net/forums/topic/462040-tolua-tutorial/
# ToLua Tutorial

## Recommended Posts

Prozak    898

I'm currently trying to add a Lua binding wrapper to my project, and after testing LuaBind I would like to test toLua. Can anybody point me to a good tutorial on getting started with this tool?

##### Share on other sites

Prozak    898

One of the problems with all the Lua wrappers is that there just isn't enough info on the intertubes to implement those libs in our projects. It's unfortunate. I would like to try toLua, but I think I'll just revert back to LuaBind...

##### Share on other sites

Zorbfish    214

Well, the problem is that for most Lua-related projects the manual is treated as the definitive documentation. You didn't mention whether or not you had looked at the manual for toLua, but here it is for reference. An updated version can be found as part of the toLua++ package here. Maybe this would be more useful, because I think toLua++ is still actively being developed.

##### Share on other sites

Atridas    151

I've just read the info in the reference you linked and I still have no idea how to use toLua. I mean, it mentions something like an "executable", but there is nothing ".exe" in the file you download... Please answer at a very beginner level... I have a very good understanding of the C/C++ language, but I still freeze every time a reference shoots me source code and tells me to "build it".

##### Share on other sites

jeroenb    282

I do not know of another tutorial than their documentation page Zorbfish already referenced. I found it good information, and it got me up and running with toLua++. What toLua basically does is convert your class prototypes (which you must manually create in a separate file) to Lua tables which you can later use in your scripts. Thus, say that you have the following class (the upper one is the real definition, the lower one is the prototype):

```cpp
// your real class definition
class Character {
public:
    Character();
    void setPosition(int x, int y);
    int getXPosition() const;
    int getYPosition() const;
private:
    int _x, _y;
};

// tolua class prototype
class Character
{
    Character();
    void setPosition(int x, int y);
    int getXPosition() const;
    int getYPosition() const;
};
```

Then you must create the class prototype as the lower one, and compile this prototype file with the tolua++ executable to a tolua library (see the tolua docs). For example, if you called your prototype file character.cpp, you can compile with the following line:

```
tolua++ -o tolua_character.cpp -H tolua_character.h -n character character.cpp
```

- tolua_character.cpp & tolua_character.h are the sources created by tolua
- character is the name of the tolua library
- character.cpp is the prototype file

You now can include the tolua_* files in your project. When you have created your Lua state, call the following code (do not forget to include the tolua_character.h file):

```cpp
tolua_open(luaState);
tolua_character_open(luaState);
```

You now can create characters and call methods in your Lua scripts:

```lua
local char = Character:new()
char:setPosition(5,5)
```

I hope this explanation helps you get tolua running.
2017-08-20 04:18:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33106157183647156, "perplexity": 5793.259736013823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105970.61/warc/CC-MAIN-20170820034343-20170820054343-00121.warc.gz"}
http://life.golfdigg.com/iur79esz/how-to-report-multiple-imputation-results-a44c89
I have some questions about multiple imputation (I run MI using SPSS 17). Should I report results based on the original dataset or on the imputed datasets? I mean, are there rules or criteria for that decision? By the way, 10 imputations is a really low number.

Higher education researchers using survey data often face decisions about handling missing data, and missing data are common in medical research too, where they can lead to a loss in statistical power and potentially biased results if not handled appropriately. Multiple imputation is a simulation-based statistical technique for handling missing data [7]. The idea of multiple imputation for missing data was first proposed by Rubin (1977), and it has become very popular as a general-purpose method for handling missing data; in particular, it has been shown to be preferable to listwise deletion in many circumstances. Instead of filling in a single value, the distribution of the observed data is used to estimate multiple values that reflect the uncertainty around the true value. The result is m full data sets, and each data set will have slightly different values for the imputed data. This allows the researcher to obtain good estimates of the standard errors and makes it possible to obtain unbiased estimates of all the parameters from the m complete data sets; the researcher cannot achieve this result with deterministic single imputation, since single imputation does not allow the additional error introduced by imputation to be counted. The validity of multiple-imputation-based analyses relies on the use of an appropriate model to impute the missing values. But such models are complex and untestable, and they therefore require some well equipped software to perform; PROC MI, the MI procedure in the SAS/STAT software, is one such multiple imputation procedure, and below I illustrate multiple imputation with SPSS using the Missing Values module and R using the mice package. Instead I will focus on the process of "imputing" observations rather than on theory.

MULTIPLE IMPUTATION IN SAS

Multiple imputation inference involves three distinct phases. Analysis with multiple imputation is generally carried out in three steps, following the procedure created by Rubin in 1987:

1. The missing data are filled in m times to generate m complete data sets. This first step is to impute the missing values by using an appropriate model which incorporates random variation.
2. The m complete data sets are analyzed by using standard procedures.
3. The results from the m complete data sets are combined for the inference, including pooling of tabular output.

The pooling in the third step works on the per-imputation estimates. Writing $\hat{Q}_i$ for the estimate obtained from the $i$-th completed data set:

Step 1: Find $\bar{Q}$, the average of the m estimates, $\bar{Q} = \frac{1}{m}\sum_{i=1}^{m}\hat{Q}_i$.

Step 2: Find $B$, which is the between-imputation variance, where

$$B = \frac{1}{m-1}\sum_{i=1}^{m}\left(\hat{Q}_i - \bar{Q}\right)^2.$$

Step 3: Find $T$, which is the total variance of $\bar{Q}$, where

$$T = \bar{W} + \left(1 + \frac{1}{m}\right)B$$

and $\bar{W}$ is the average within-imputation variance.

In R, the typical sequence of steps to do a multiple imputation analysis is: impute the missing data by the mice function, resulting in a multiply imputed data set (class mids); fit the model of interest (the scientific model) on each imputed data set by the with() function, resulting in an object of class mira; and pool the results of the m analyses. For latent variable models there are likewise two routes: the first (i) uses runMI() to do the multiple imputation and the model estimation in one step; the second (ii) does the multiple imputation with mice() first and then gives the multiply imputed data to runMI(), which does the model estimation based on this data.

Multiple imputation for missing data has several desirable features. However, there are certain conditions that should be satisfied before performing multiple imputation for missing data, and the problem is that it is quite easy for the researcher to violate such conditions while performing it. Nonresponse weighting [12] is a principled alternative approach for making the subjects included in the analysis representative of the original sample, and, given that multiple imputation is a widely used method for handling missing data, it is vital that we understand how to appropriately combine multiple imputation with PSs. Although the use of multiple imputation and other missing data procedures is increasing, many modern missing data procedures are still largely misunderstood, and despite the widespread use of multiple imputation, there are few guidelines available for checking imputation models. Many academic journals now emphasise the importance of reporting how missing data were handled: articles were located by using search facilities on each journal's website to search for the phrase "multiple imputation", and from these, some reported the MI estimates, others the complete-case (CC) estimates, and others were not clear.
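To make the pooling step concrete, here is a small Python sketch of Rubin's rules as written above (my own illustration; the function and variable names are hypothetical). Given per-imputation point estimates and their squared standard errors, it returns the pooled estimate and the total variance:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool m per-imputation estimates via Rubin's rules.

    estimates: length-m sequence of Q_hat_i
    variances: length-m sequence of within-imputation variances (SE_i^2)
    """
    Q = np.asarray(estimates, dtype=float)
    W = np.asarray(variances, dtype=float)
    m = len(Q)
    Q_bar = Q.mean()                        # Step 1: pooled point estimate
    B = np.sum((Q - Q_bar)**2) / (m - 1)    # Step 2: between-imputation variance
    T = W.mean() + (1 + 1/m) * B            # Step 3: total variance
    return Q_bar, T

# Example: 5 imputations of a regression coefficient.
est = [0.52, 0.48, 0.55, 0.50, 0.47]
var = [0.010, 0.012, 0.011, 0.009, 0.010]
Q_bar, T = pool_rubin(est, var)
print(Q_bar, np.sqrt(T))  # pooled estimate and pooled standard error
```

Packages such as mice do exactly this internally when pooling an object of class mira.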
In either case, one should be transparent about what is being reported. Results, and Interpretation..... 25 4.1 Introduction ... very low on NSDUH, when multiple variables are being used in an analysis (such as when multiple independent variables are used in a regression analysis), the number of … When trying to fry onions, the edges burn instead of the onions frying up. Excellent advice in this answer. Imputation… By using our site, you acknowledge that you have read and understand our Cookie Policy, Privacy Policy, and our Terms of Service. See the topic Multiple imputations options for more information. 12.2.1 Reporting guidelines. MathJax reference. 2.8 How many imputations?. site design / logo © 2020 Stack Exchange Inc; user contributions licensed under cc by-sa. You're right, it's better to use m>20 (according to Enders and van Buuren). Evaluate each question carefully, and report the answers. How to write a character that doesn’t talk much? Multiple imputation has solved this problem by incorporating the uncertainty inherent in imputation. An analysis of missing data patterns across contributing participants or centres, over time, or between key treatment groups should be performed to establish the mecha… B = 1 m − 1 ∑ i = 1 m ( Q ^ i − Q ¯) 2. 1. Can a Druid in Wild Shape cast the spells learned from the feats Telepathic and Telekinetic? Multiple imputation inference involves three distinct phases: • The missing data are filled in m times to generate m complete data sets. Before discussing methods for handling missing data, it is important to review the types of missingness. Complete case results and multiple imputation results are presented as recommended by Manly and Wells (2015) and Sterne et al. The first (i) uses runMI() to do the multiple imputation and the model estimation in one step. In the case of multiple imputation, researchers could provide information about the imputa… Asking for help, clarification, or responding to other answers. 1. double click on a table in the results (to activate the table and make it editable; you can also right-click and select "edit content") 2. with the table in edit mode, go to the "Pivot" menu (which should've appeared when you switched to edit mode in the table) 3. drag the imputation pivot component (which is probably on the "rows" … By default, when you run a supported procedure on a multiple imputation (MI) dataset… I tend to go for something like 250 to 1000 by default, if it is not computationally too expensive and there is up to a low double-digit percentage of missing data across time points. There is no new procedure for requesting pooled output; instead, a new tab on the Options dialog gives you global control over multiple imputation output. Another thing the researcher should keep in mind is that if ‘missing at random’ is satisfied, then the unbiased estimates obtained by multiple imputation for missing data are not always easy to interpret. Table 3 presents the results from Monte Carlo simulations. They therefore require some well equipped software to perform approach for making the subjects included in the of. Should be in the treatment of missing data are filled in m times to generate m complete sets... % of whites and 50 % of whites and 50 % of whites and %! The edges burn instead of continuing with MIPS, so I will focus on the process . Cookie policy I guess I would put what I consider the most efficient and cost effective way to stop star... 
Times to generate m complete data sets are analyzed by using standard procedures emphasise the importance of reporting … reporting... Indeed the logical thing to report in case this is because there are three to! Feats Telepathic and Telekinetic object class, short for multiply imputed repeated analysis involves three distinct phases: the!, although there are three ways to use and Sterne et al MI procedure in paper... In lavaan consists of three steps: Create m sets of imputations, and therefore. Would be the most efficient and cost effective way to stop a star 's fusion... 12.2.1 reporting guidelines multiple imputations options for more information Your RSS reader been shown to also! Appropriate model to impute the missing data SAS analysis with multiple imputation for missing data is hindered! In appendix/supplement Input ' colour rule for multiple buttons in a list both... ) and Sterne et al can not achieve this result from deterministic imputation, which is the Buddhist view persistence... Variance of Q, where as a reviewer request some more appropriate analysis to be preferable listwise! Over that ground here m ( Q ^ I − Q ¯ ) 2 simulation-based approach making... A really low number RSS feed, copy and paste this URL into RSS... And R using the missing data are filled in m times to generate m complete data sets are by! Of Q, where tips on writing great answers the question is what belongs to where: in analysis.: • the results of the MI procedure in the paper with Mostly Non-Magical Troop imputation. Unbiased estimates of all the imputations could be one thing to report results based the. By missing data makes it possible for the remark regarding contingency table/baseline: indeed the! Com-Bined for the inference Stack Exchange Inc ; user contributions licensed under cc by-sa way! ( 2015 ) and Sterne et al because there are large differences betw… multiple (! Short for multiply imputed repeated analysis the model for the data that not... Service, privacy policy and cookie policy Q, where when something is,... These at ( Missing-Part-One.html and Missing-Part-Two ) email is opened only via user clicks from a mail client not... Ara rules or criteria for decision all the parameters from the m complete data.... Is there a difference between original and imputed datasets, what do I need my own attorney during refinancing! Many circumstances kinetic energy possible for the inference 1 m − 1 ∑ I 1. Education researchers using survey data often face decisions about handling missing data results of the original sample information. Agree to our terms of service, privacy policy and cookie policy link sent via email is opened only user. The m complete data sets are analyzed by using standard procedures incomplete data really number... For missing data can do the spells learned from the m complete data are... I have to report procedure in the main paper data can do and I is the pre-specified analysis analysis. M > 20 ( according to Enders and van Buuren ) the validity of multiple-imputation-based analyses relies on use..., where unlike Single imputation, since it doesn’t allow additional error to be also reported ) biased estimates way! Criteria for decision it possible for the inference are large differences betw… multiple imputation and the model estimation in step... Or criteria for decision privacy policy and cookie policy the spells learned from feats! B, which the multiple imputation some well equipped software to perform adopted in practice, for dealing missing! 
( according to Enders and van Buuren ) contingency table/baseline: indeed, the researcher not... Require some well equipped software to perform deterministic imputation, which is the variance. Higher education researchers using survey data often face decisions about handling missing data errors! I apologize ) have written two web pages on multiple regression with missing where... Awesome books of Enders and van Buuren ) I mean there ara rules or criteria for decision results. Parameters from the m complete data sets Rezvan 2015: the missing data are filled times... Containing high pressure first ( I ) uses runMI ( ) or kNN )... Want one imputed dataset, you can see these at ( Missing-Part-One.html and )... Are guidelines how to write a character that doesn ’ t talk much copy and paste this URL Your. Is this stake in my coffee from moving when I rotate the cup mortgage?. From MI m is the pre-specified analysis the main paper estimation in one step standard errors variance, where,... 2015 ) and Sterne et al options for more information aggregating the analyses of imputation... Goes where Echo ever fail a saving throw these some reported the MI-, other CC-estimates and others are clear... ) 2 or criteria for decision 're right, it 's better to m... Or in appendix/supplement of three steps: 1 looked at some articles from the feats Telepathic and Telekinetic analyzed... Preferable to listwise … Table 3 presents the results from cc or pooled results from m! Own attorney during mortgage refinancing to obtain good estimates of all the imputations could be one thing report! Fought with Mostly Non-Magical Troop Buuren ) the between-imputation variance, where less biased.! Data, so I will not go over that ground here you agree to our terms of service privacy! How to write a character that doesn ’ t talk much need to be introduced by the way 10... My coffee from moving when I rotate the cup are transparent it does not matter too much which goes.. Fail a saving throw my coffee from moving when I rotate the cup process with a random component journals... Case, one should be in the paper the model for the inference cc by-sa report case. Responded in a survey us at 727-442-4290 ( M-F 9am-5pm et ) for describing for! That should be transparent about what is being reported 1 m ( ^... The analyses of each imputation ) are indeed the logical thing to report MI-procedure in particular it. Tie-Breaker and a regular vote then I guess I would as a general-purpose method for handling missing.. A list containing both deterministic imputation, since it doesn’t allow additional error to be introduced by the way 10. Is there a difference between original and imputed datasets, what do I have some questions about imputation. Suppose that 100 % of whites and 50 % of blacks responded in a mira class! ( M-F 9am-5pm et ) the analysis representative of the MI procedure in the SAS/STAT software a. Decisions about handling missing data in any kind of analysis, without well-equipped software Exchange Inc ; user contributions under! Be reported: results from the review of Rezvan 2015: the rise of multiple imputation for missing.... Reviewer request some more appropriate analysis to be the most efficient and cost effective way to stop star! Sets are com-bined for the inference I will focus on the original dataset or imputed datasets what..., one should be in the awesome books of Enders and van Buuren ) all imputations... Figures are for describing and for comparing Input ' of all the could... 
Making statements based on opinion ; back them up with references or personal experience the percentages! Nothing is pre-specified, then that 's pretty clear that that should be transparent about what is reported... Q ^ I − Q ¯ ) 2 attorney during mortgage refinancing the cookie in coffee! Clubs In Nairobi Cbd, How To Be Diplomatic And Tactful At Work, Neon Text Illustrator, Artificial General Intelligence 2019, Who Would Win A Pack Of Wolves Or A Lion, Khana Banana In English, Flat Cat Scratcher, Where Can I Watch Ingobernable Season 3, African Americans With Parkinson's, Quotes About Ethical Principles, When Can I Go Back To Work After Jaw Surgery, Timeline Of The Ethnic Groups That Came To Jamaica, Neon Green Wallpaper Tumblr,
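To make Rubin's combination rules above concrete, here is a minimal Python sketch of the pooling step (the function name pool_rubin and the sample numbers are hypothetical, not taken from SPSS, SAS or mice):

import numpy as np

def pool_rubin(estimates, variances):
    """Pool m point estimates and their squared standard errors with Rubin's rules."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()                         # pooled point estimate
    w_bar = variances.mean()                         # average within-imputation variance
    b = ((estimates - q_bar) ** 2).sum() / (m - 1)   # between-imputation variance B
    t = w_bar + (1 + 1 / m) * b                      # total variance T
    return q_bar, t ** 0.5                           # estimate and pooled standard error

# Hypothetical coefficients and squared SEs from m = 5 imputed data sets:
est, se = pool_rubin([0.42, 0.45, 0.40, 0.44, 0.43],
                     [0.010, 0.011, 0.009, 0.010, 0.012])
print(round(est, 3), round(se, 3))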
2021-06-18 08:13:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2658839225769043, "perplexity": 1917.4043512031612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487635920.39/warc/CC-MAIN-20210618073932-20210618103932-00347.warc.gz"}
https://www.vedantu.com/question-answer/express-the-given-number-in-the-expanded-form-class-5-maths-cbse-5ee4ad07be1b52452d3d03f8
Question Express the given number in the expanded form: $108$
A. $\left( {100 \times 1} \right) + 8$
B. $100 + 1 \times 8$
C. $10 \times 1 + 8$
D. None of these

$108 = (100 \times 1) + (10 \times 0) + (1 \times 8)$
$108 = (100 \times 1) + 8$
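The place-value expansion used above is easy to mechanize; the following is a minimal Python sketch (the function name expanded_form is mine, purely illustrative):

def expanded_form(n):
    # Expand a positive integer into (place value x digit) terms,
    # mirroring the pattern used in the answer above.
    digits = str(n)
    terms = ["({} x {})".format(10 ** (len(digits) - 1 - i), d)
             for i, d in enumerate(digits)]
    return " + ".join(terms)

print(expanded_form(108))  # (100 x 1) + (10 x 0) + (1 x 8)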
2021-04-17 01:26:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9808412194252014, "perplexity": 3062.7313717514207}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038098638.52/warc/CC-MAIN-20210417011815-20210417041815-00536.warc.gz"}
https://undergroundmathematics.org/glossary/injective
# Injective A function $f$ from set $A$ to set $B$ is called an injection and $f$ is said to be injective if each element in $A$ maps to a different element in $B$. In symbols, $f$ is injective if $f(a)=f(b)$ implies that $a=b$. An injective function is also known as a one-to-one or one-one (both read as ‘one to one’) function.
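A quick illustration of the definition: $f:\mathbb{R}\to\mathbb{R}$ with $f(x)=2x$ is injective, since $f(a)=f(b)$ gives $2a=2b$ and hence $a=b$; by contrast, $g(x)=x^2$ on $\mathbb{R}$ is not injective, since $g(-1)=g(1)=1$ while $-1\neq 1$.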
2022-05-17 23:55:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9538081884384155, "perplexity": 155.83489480649894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520936.24/warc/CC-MAIN-20220517225809-20220518015809-00424.warc.gz"}
https://www.nag.com/numeric/nl/nagdoc_27cpp/flhtml/f16/f16elf.html
# NAG FL Interface f16elf (dsum)

## 1 Purpose

f16elf sums the elements of a real vector.

## 2 Specification

Fortran Interface
Function f16elf (n, x, incx)
Real (Kind=nag_wp) :: f16elf
Integer, Intent (In) :: n, incx
Real (Kind=nag_wp), Intent (In) :: x(1+(n-1)*ABS(incx))

#include <nag.h>
double f16elf_ (const Integer *n, const double x[], const Integer *incx)

The routine may be called by the names f16elf, nagf_blast_dsum or its BLAST name blas_dsum.

## 3 Description

f16elf returns the sum $x_1 + x_2 + \cdots + x_n$ of the elements of an $n$-element real vector $x$, via the function name. If ${\mathbf{n}}\le 0$ on entry, f16elf returns the value $0$.

## 4 References

Basic Linear Algebra Subprograms Technical (BLAST) Forum (2001) Basic Linear Algebra Subprograms Technical (BLAST) Forum Standard University of Tennessee, Knoxville, Tennessee https://www.netlib.org/blas/blast-forum/blas-report.pdf

## 5 Arguments

1: $\mathbf{n}$ Integer Input
On entry: $n$, the number of elements in $x$.

2: $\mathbf{x}\left(1+\left({\mathbf{n}}-1\right)×\left|{\mathbf{incx}}\right|\right)$ Real (Kind=nag_wp) array Input
On entry: the $n$-element vector $x$.
If ${\mathbf{incx}}>0$, ${x}_{\mathit{i}}$ must be stored in ${\mathbf{x}}\left(\left(\mathit{i}-1\right)×{\mathbf{incx}}+1\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
If ${\mathbf{incx}}<0$, ${x}_{\mathit{i}}$ must be stored in ${\mathbf{x}}\left(\left({\mathbf{n}}-\mathit{i}\right)×\left|{\mathbf{incx}}\right|+1\right)$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.
Intermediate elements of x are not referenced. If ${\mathbf{n}}=0$, x is not referenced.

3: $\mathbf{incx}$ Integer Input
On entry: the increment in the subscripts of x between successive elements of $x$.
Constraint: ${\mathbf{incx}}\ne 0$.

## 6 Error Indicators and Warnings

If ${\mathbf{incx}}=0$, an error message is printed and program execution is terminated.

## 7 Accuracy

The BLAS standard requires accurate implementations which avoid unnecessary over/underflow (see Section 2.7 of Basic Linear Algebra Subprograms Technical (BLAST) Forum (2001)).

## 8 Parallelism and Performance

f16elf is not threaded in any implementation.

## 9 Further Comments

None.

## 10 Example

This example computes the sum of the elements of $x = \left(1.1, 10.2, 11.5, -2.7, 9.2\right)^\mathrm{T}$.

### 10.1 Program Text
Program Text (f16elfe.f90)

### 10.2 Program Data
Program Data (f16elfe.d)

### 10.3 Program Results
Program Results (f16elfe.r)
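The storage scheme in Section 5 can be mimicked outside of Fortran; the following is a minimal Python sketch of the documented indexing rules and sum (an illustration only, not the NAG routine itself):

def dsum(n, x, incx):
    # Sum the n logical elements of a strided vector, following the storage
    # rules in Section 5 (0-based here, 1-based in the Fortran document).
    # Returns 0 for n <= 0, as f16elf does; incx = 0 is invalid.
    if incx == 0:
        raise ValueError("incx must be nonzero")
    if n <= 0:
        return 0.0
    if incx > 0:
        indices = [(i - 1) * incx for i in range(1, n + 1)]
    else:
        indices = [(n - i) * abs(incx) for i in range(1, n + 1)]
    return sum(x[j] for j in indices)

# The vector from the documentation's example:
print(dsum(5, [1.1, 10.2, 11.5, -2.7, 9.2], 1))  # 29.3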
2021-08-04 06:14:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 25, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9054218530654907, "perplexity": 14747.52639875044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154796.71/warc/CC-MAIN-20210804045226-20210804075226-00159.warc.gz"}
https://davidlowryduda.com/blog/
# mixedmath Explorations in math and programming David Lowry-Duda One can also browse posts by category or by tag. New posts are available via RSS. Recent comments and all posts are linked to below. ### Recent Posts • (2021-12-25) Vaskor Basak wrote on Prime rich and prime poor: What polynomials are allowable for prime-poor polynomials? Could I claim that I have found a better example of a prime-poor polynomial than $x^{12}+488669$ by presenting the example[...] • (2021-10-19) davidlowryduda wrote on An intuitive overview of Taylor series: Hi Bob! The behavior of $c$ is actually quite subtle. It's not true that $c$ is actually a constant. For "nice" functions, what is true is that the mean value $c$ varies continuously over[...]
2022-05-19 01:57:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17475160956382751, "perplexity": 2437.7467698042674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522741.25/warc/CC-MAIN-20220519010618-20220519040618-00246.warc.gz"}
https://indestat.wordpress.com/category/uncategorized/
## New idea for software ecosystem research

Analyse the Python ecosystem according to the following DOI: 10.1016/j.jss.2017.06.095

## Displaying HTML in Excel cell

Recently I was tasked with coming up with an idea of how to display HTML tags inside an Excel sheet, where the tags sat in the rows of one column. It had to be done automatically with the click of a button (i.e. conversion of each HTML cell into a properly formatted cell that displays the HTML content with applied styling), and of course VBA had to play a role in it. (Full disclosure: idea borrowed from here.)

Well, after a lot of trial & error, here is the general idea & code: for each cell in the column we first open Internet Explorer and navigate to "about:blank" (a blank page). Then we copy the cell's content to that blank page and 'select all & copy' the rendered content into the clipboard. From there, we read it again and paste it into the same cell from which we first copied it. It might sound fairly complicated because... it is. Plus, one clear disadvantage is that if anything is copied to the clipboard by the user while the macro is executing, it will 1000% lead to inconsistent results. And this is not even to talk about speed.

Warning: the code below can take up to 4 minutes with 250 rows to execute. This is intentional, because without the Wait() it for some reason again leads to inconsistent results (e.g. the same row being copied up to 4 times -> 4 rows with the same HTML content).

Before you will be able to execute this VBA macro, read this website on how to load "Microsoft Forms 2.0 Object Library".

Sub DisplayHTMLContentProperly()
    Dim rng As Range
    Dim row As Range
    Dim cell As Range
    Dim Ie As Object
    Set Ie = CreateObject("InternetExplorer.Application")
    Ie.Visible = True 'to be tested

    Dim DataObj As MSForms.DataObject ' for clipboard
    Set DataObj = New MSForms.DataObject

    Dim sht As Worksheet
    Dim LastRow As Long
    Set sht = ThisWorkbook.Worksheets("SheetName")
    LastRow = sht.ListObjects("DataName").Range.Rows.Count
    Set rng = Range("L2:L" & LastRow)

    For Each row In rng.Rows
        For Each cell In row.Cells
            'reset the clipboard before handling each cell
            Application.CutCopyMode = False
            DataObj.GetFromClipboard
            myString = " "
            DataObj.SetText " "
            If Not IsEmpty(cell) Then
                With Ie
                    .Visible = True
                    .Navigate "about:blank"
                    While .Busy Or .ReadyState <> 4: DoEvents: Wend
                    .Document.body.InnerHTML = cell 'update to the cell that contains HTML you want converted
                    .ExecWB 17, 0 'Select all contents in browser
                    .ExecWB 12, 2 'Copy them
                    'get data from clipboard (due to copy method above) and paste it into the cell
                    DataObj.GetFromClipboard
                    myString = DataObj.GetText(1)
                    'MsgBox myString - to debug
                    cell = myString
                    'delete anything from the clipboard
                    DataObj.Clear
                    Application.CutCopyMode = False
                    DataObj.SetText " "
                End With
            End If
            'Do Something
            Set HTML = Nothing
            'deliberate pause; without it rows are duplicated (see warning above)
            Application.Wait (Now + 0.000000011574 * 1200)
        Next cell
    Next row

    Ie.Quit
    Set Ie = Nothing
    Application.CutCopyMode = False
    MsgBox "I am now done with proper formatting of HTML column."
    'move to see just summary page
    Sheets("Summary").Activate
End Sub

## SAS code for replacing missing values with 0 in a folder full of datasets

%macro loopOverDatasets();
    /* imho good practice to declare macro variables of a macro locally */
    %local datasetCount datasetName iter;

    /* get number of datasets + name of datasets */
    proc sql noprint;
        select count(*) into :datasetCount
        from dictionary.tables
        where libname = "EXAM";
    quit;

    /* initiate loop */
    %let iter=1;
    %do %while (&iter. <= &datasetCount.);
        proc sql noprint;
            select memname into :datasetName
            from dictionary.tables
            where libname = "EXAM" and monotonic() eq &iter.;
        quit;
        %put &iter. &datasetCount. &datasetName.;

        /* now you can apply your logic to the dataset */
        data &datasetName.;
            set exam.&datasetName.;
            array change _numeric_;
            do over change;
                if change=. then change=0;
            end;
        run;

        %let iter=%eval(&iter.+1);
    %end;
%mend;
%loopOverDatasets;
2017-10-19 03:26:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19169370830059052, "perplexity": 8875.12519793019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823220.45/warc/CC-MAIN-20171019031425-20171019051425-00879.warc.gz"}
https://www.123calculus.com/en/refraction-law-page-8-40-300.html
# Law of Refraction

n_2*sin(theta_2) = n_1*sin(theta_1)

Calculator of the law of refraction, also known as Snell's law. Enter 'x' in the field to be calculated. This tool is a calculator of the law of refraction (Snell's law):

n_2*sin(theta_2) = n_1*sin(theta_1)

n1 : Medium 1 refractive index (incident ray)
n2 : Medium 2 refractive index (refracted ray)
theta_1 : angle of incidence
theta_2 : angle of refraction

This law describes the behaviour of a light ray when it changes mediums. Specifically, it calculates the angle of deviation of the light ray following a transition from a refractive medium of index n1 to another medium with refractive index n2.

## Refraction ability and medium index

The refractive index of a medium is an indication of its ability to bend light. When a light ray travels from a medium into another one, the angle of refraction depends on the ratio between the two medium indices n1 and n2, because

sin(theta_2) = (n_1/n_2)*sin(theta_1)

Accordingly,
- if n2 > n1 : the light ray passes to a more refractive medium, then sin(theta_2) < sin(theta_1), so the light ray will be closer to the normal (theta_2 < theta_1). This is the case in the scheme above.
- if n2 < n1 : the light ray passes to a less refractive medium, then sin(theta_2) > sin(theta_1), so the light ray will move further from the normal (theta_2 > theta_1).

## Critical Angle and Refraction

sin(theta_2) = (n_1/n_2)*sin(theta_1)

We know that sin(theta_2) <= 1, therefore

(n_1/n_2)*sin(theta_1) <= 1
sin(theta_1) <= n_2/n_1

This inequality is always satisfied when n2 > n1. However, for n1 > n2, it is mathematically possible to find values of theta_1 such that the inequality is not satisfied! If we enter these values into the calculator, it will display NaN (i.e. 'not a number') as the value of theta_2. In fact, for these values of theta_1 there is no refraction but a total reflection of the light ray, which does not pass into the second medium. The angle theta_1 at which this happens is called the critical angle of refraction and is calculated as follows (only when n2 < n1):

sin(theta_1) = n_2/n_1
theta_1 = Arcsin(n_2/n_1)

In summary, we've noticed that if n2 > n1 (light moving to a more refractive medium) then the ray of light will always be refracted, closer to the normal. If n2 < n1 (light moving to a less refractive medium) then the light ray will be refracted away from the normal, provided that the angle of incidence does not exceed a limit (the critical angle of refraction). Beyond this limit, the light ray will be fully reflected (medium 2 will act like a mirror).
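The calculator's logic, including the NaN behaviour beyond the critical angle, can be sketched in a few lines of Python (the function names are mine, purely illustrative):

import math

def refraction_angle(n1, n2, theta1_deg):
    # Angle of refraction in degrees, or None when theta_1 exceeds the
    # critical angle (total internal reflection, the calculator's NaN case).
    s = (n1 / n2) * math.sin(math.radians(theta1_deg))
    if abs(s) > 1:
        return None
    return math.degrees(math.asin(s))

def critical_angle(n1, n2):
    # Critical angle in degrees; only defined when n2 < n1.
    if n2 >= n1:
        return None
    return math.degrees(math.asin(n2 / n1))

# Light passing from water (n = 1.33) into air (n = 1.00):
print(refraction_angle(1.33, 1.00, 30.0))  # about 41.7 degrees
print(critical_angle(1.33, 1.00))          # about 48.8 degrees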
2023-01-27 08:57:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7227451205253601, "perplexity": 1308.5279166788973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494974.98/warc/CC-MAIN-20230127065356-20230127095356-00104.warc.gz"}
https://tex.stackexchange.com/questions/43829/biblatex-multible-bibliographies-using-keywords-not-working?noredirect=1
# Biblatex multiple bibliographies using keywords not working

I'm trying to print a separate bibliography for the references used in figures, but only one of the bibliographies prints at the end of the document.

Warnings in the document:

Keyword 'primary' not found on input line 8.
Empty bibliography on input line 8.

Minimal working example:

\documentclass{article}
\usepackage{biblatex}
\addbibresource{bib.bib} % assumed: the bibliography resource must be loaded for the MWE to compile
\begin{document}
Hello\cite{KandR} and Goodbye\cite{CUEDCplusplus}
\printbibliography[keyword=primary, title={Primary references}]
\printbibliography[notkeyword=primary, title={Other references}]
\end{document}

bib.bib

@book{KandR,
  author = {Kernighan, Brian W. and Ritchie, Dennis M.},
  title = {The C Programming Language Second Edition},
  publisher = {Prentice-Hall, Inc.},
  year = {1988}
}

@online{CUEDCplusplus,
  keyword={primary},
  author = {Love, T.P.},
  title = {CUED C++},
  url = {http://www-h.eng.cam.ac.uk/help/tpl/languages/C++.html},
  urlyear = {2010}
}

Screenshot of the result

That should be keywords (with an s!) in the bib file: keywords={primary}

• Just in case it helps someone: I seem to have noticed that if you have several keywords and they are separated by a semicolon, or some by semicolon and some by commas, biblatex will ignore the keywords field and print the entry regardless of \printbibliography's option. Mind those separators! – Juan del Acebo Dec 11 '16 at 15:00
2019-10-15 15:57:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.756757378578186, "perplexity": 10008.58201307005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660067.26/warc/CC-MAIN-20191015155056-20191015182556-00036.warc.gz"}
https://www.kristakingmath.com/blog/tag/fundamentals
Posts tagged fundamentals

Quotient rule for exponents to simplify fractions
This is the rule we use when we're dividing one exponential expression by another exponential expression. The quotient rule tells us that we have to subtract the exponent in the denominator from the exponent in the numerator, but the bases have to be the same.

Calculating absolute values
The absolute value operation turns any value inside it into its distance from the origin, essentially turning both positive and negative numbers into only positive numbers. Always calculate the value inside the absolute value first, then apply the absolute value last.

Number sets in the real number system
The vast majority of the numbers you'll use in most math classes are called real numbers, and the whole universe of real numbers is what makes up the Real Number System. Let's start with a diagram.

Finding the multiples of a number
It's helpful to think about multiples and divisibility as two parts of the same idea. We know that 10 is "divisible" by 5 because when we do the division 10/5, the result 2 is a whole number. It's the fact that the result is a whole number that proves that 10 is divisible by 5.

Multiplying and dividing fractions
When we multiply fractions, we multiply their numerators to find the numerator of the result, and we multiply their denominators to find the denominator of the result. When we divide fractions, we actually turn the division problem into a multiplication problem by turning the divisor upside down and changing the division symbol to a multiplication symbol at the same time.

Finding place value of a particular digit
When we talk about place value, we're talking about the value of the location of a particular digit within a given number (the value of the place where that digit is located within that number). Given any decimal number, place value is what allows us to easily say where each digit of the number is located.
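A quick worked instance of the quotient rule from the first post above (my own example, not from the original posts): $\frac{x^7}{x^3} = x^{7-3} = x^4$, whereas $\frac{x^7}{y^3}$ cannot be simplified this way, because the bases differ.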
2019-07-20 22:44:42
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8765206336975098, "perplexity": 592.9799746101492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526714.15/warc/CC-MAIN-20190720214645-20190721000645-00504.warc.gz"}
http://ned.ipac.caltech.edu/level5/March01/Israel/Israel4.html
### 4. The dusty disk Dufour et al. (1979) first established that the dark band crossing the elliptical galaxy is in fact the image of a highly inclined disk component consisting of a metal-rich population of stars, nebulae and dust clouds (Fig. 1). Metallicities are close to those in the Solar Neighbourhood (Dufour et al. 1979; Phillips 1981; Eckart et al. 1990a; Viegas & Prieto 1992). The disk is at position angle 122° ± 4° and star formation is rampant. The present burst of star formation apparently started 50 million years ago and created at least a hundred HII regions embedded in the disk (Dufour et al. 1979; Hodge & Kennicutt 1983). Remarkable concentrations of luminous and very blue stars can be seen at the northwestern and southeastern edges of the dark band; they must represent very large OB associations (Fig. 1). Recent HST observations allowed Schreier et al. (1996) to also find a large number of point-like sources embedded in the dust band with colours likewise suggestive of OB associations (see also Alonso & Minniti 1997). However, since the review by Ebneter & Balick (1983), little quantitative progress has been made on the issue of star formation. This may occur at rates ten times higher than in the Milky Way (Telesco 1978) although disk average UV energy densities close to Solar Neighbourhood values (Eckart et al. 1990a) suggest more moderate rates. As NGC 5128 is no more distant than e.g. M 82, its brightest HII regions and supernova remnants should be detectable at centimetre radio wavelengths. Most of the published high-resolution radio maps lack the dynamic range to reach the low flux-density levels expected for HII regions or SNR's in the disk. The 1425 MHz map published by Condon et al. (1996) does, however, show an extension with peaks of 125 mJy in an 18" (295 pc) beam coinciding with the eastern half of the dark band, while the 43 GHz map by Tateyama & Strauss (1992) likewise seems to show weak emission from the eastern dark band. Further high-resolution observations of the disk at centimetre wavelengths are desirable as they provide one of the few means studying the star formation history of the disk. As defined by its OB stars and HII regions, the disk extends out to a radius of 4 kpc. Molecular line and infrared continuum emission, further discussed in Sects. 4.2 and 4.3, is concentrated within 40% of this radius (Joy et al. 1988; Eckart et al. 1990a; Quillen et al. 1992). HI emission, however, extends much farther out, to radii of 7 kpc (cf. van Gorkom et al. 1990; Schiminovich et al. 1994). The outer parts of the disk, as traced by the dark band and the HI emission, show a pronounced warp to position angle 90° (cf. Fig. 7). The disk is in rapid rotation (Graham 1979; van Gorkom et al. 1990; Quillen et al. 1992). The tilted-ring modelling by Nicholson et al. (1992) shows that in spite of appearances, the distribution of dust in NGC 5128 is that of a warped thin disk of about 200 pc thickness (Sect. 6.1) along the minor axis. Deep images of NGC 5128 show the disk to be well inside the elliptical galaxy. Its inclination is a function of radius, but remains generally high with respect to the plane of the sky. The HII regions are distributed throughout the warped disk and embedded in diffuse ionized gas. Nicholson et al. (1992) showed that their warped disk model also quite naturally explains the various CaII and NaI velocity components seen in absorption against supernova 1986g by d'Odorico et al. (1989). 
In addition to the seven components associated with NGC 5128, these observations also showed three components with Galactic foreground gas, and two intermediate velocity components of unknown origin. The inner part of NGC 5128 is associated with diffuse X-ray emission in the form of ridges along the dark band edges but also in more isotropically distributed form (Feigelson et al. 1981; Turner et al. 1997). Although the origin of this diffuse emission is not established unequivocally, among the most reasonable explanations for its existence are gas ejected from late-type stars dynamically heated to the required temperatures, X-ray binaries associated with the young stellar population, stellar winds in HII regions, or combinations thereof (Feigelson et al. 1981; Turner et al. 1997). In any case, NGC 5128 is underluminous in diffuse X-ray emission as compared to other early-type galaxies (Döbereiner et al. 1996). The HI observations by van Gorkom et al. (1990) and Schiminovich et al. (1994) show the atomic hydrogen to follow the dust lane, including the warp (Fig. 7). It could, however, not be traced over the central 2.5 kpc because of strong absorption against the centre. Van Gorkom et al. (1990) found a total HI amount of about 3.3 × 10⁸ M⊙, but cautioned that they might have missed a significant amount because of limited sensitivity; nor does this estimate include the HI in the shells found by Schiminovich et al. 1994 (Fig. 7). Indeed, Richter, Sackett & Sparke (1994) find within the 21' (21 kpc) beam of the Green Bank 140 ft telescope a higher mass of 8.3 ± 2.5 × 10⁸ M⊙, still uncertain because of the strong central absorption (Sect. 7). In the central part of the disk, molecular line emission from CO and its isotopes is found out to radii of about 2 kpc, but most of it is concentrated within a radius of 1 kpc (Phillips et al. 1987; Eckart et al. 1990a; Quillen et al. 1992; Rydbeck et al. 1993). Within R = 1 kpc, the area filling factor of the disk is of the order of 3-12%, its thickness is less than 35 pc and the velocity dispersion is about 10 km s⁻¹ (Quillen et al. 1992). The J = 2-1 / J = 1-0 temperature ratios of about 0.9 for both ¹²CO and ¹³CO as well as the isotopic emission ratios ¹²CO/¹³CO = 11 and ¹²CO/C¹⁸O = 75 are comparable to those of Milky Way giant molecular cloud complexes (Wild, Eckart & Wiklind 1997). Modelling the CO observations as a tracer for the much more abundant H2 molecule, Eckart et al. (1990a) and Wild et al. (1997) estimate molecular hydrogen temperatures T_k = 10-15 K and densities of a few times 10⁴ cm⁻³. Emission from other molecular species has also been detected in the disk (Whiteoak, Gardner & Höglund 1980; Seaquist & Bell 1988; d'Odorico et al. 1989; Israel 1992; Paglione, Jackson & Ishizuki 1997). Total molecular hydrogen masses are probably about 4 × 10⁸ M⊙, but may be a factor of two higher depending on the CO-to-H2 conversion factor favoured. The vibrationally excited warm H2 (T_k ≈ 1000 K) detected by Israel et al. (1990) represents only a minute fraction of all molecular hydrogen and is associated with the circumnuclear disk (Sect. 5.2). The total gaseous mass of the disk, including helium, is thus of the order of 1.3 × 10⁹ M⊙, only about 2% of the dynamical mass contained in the elliptical component within the radius of the disk (R = 7 kpc). However, because of the pronounced concentration of interstellar gas at smaller radii, that fraction increases to about 8% at R = 2 kpc.
At far-infrared wavelengths, the disk of NGC 5128 stands out by its emission from warm dust. Dust temperatures are 30-40 K, depending on the assumed dust emissivity law (Joy et al. 1988; Marston & Dickens 1988; Eckart et al. 1990a). The overall distribution of far-infrared emission is very similar to that of the carbon monoxide. The present far-infrared information still leaves considerable room for improvement. The KAO scans presented by Joy et al. (1988) do not fully sample the galaxy, but show that 10% of the total far-infrared luminosity arises in a central source which may be identified with the circumnuclear disk (see Sect. 5.2). IRAS survey observations cover the whole galaxy, but the resolution is limited. Several of the published fluxes refer to poorly calibrated data or underestimate the total flux from the extended galaxy. Best fluxes are probably the colour-corrected values S₁₂ = 26.4 Jy, S₂₅ = 25.7 Jy, S₆₀ = 236 Jy and S₁₀₀ = 520 Jy given by Rice et al. (1988). Use of IRAS non-survey data (DSD maps: Marston & Dickens 1988; CPC maps: Eckart et al. 1990a; Marston 1992) provided some improvement in resolution, but at the cost of photometric accuracy. Because of unsolvable calibration problems (cf. van Driel et al. 1993), the image-sharpened CPC maps discussed by Eckart et al. (1990a) and by Marston (1992) must be considered as unreliable. The most recent image-sharpened IRAS maps (resolution 1-2') incorporating all survey-instrument data are shown in Fig. 9; they still lack the resolution to bring forth the full detail of the disk. Good mid-infrared imaging of the dust disk itself is still lacking. The mean 12µm surface brightness of the disk as derived from IRAS data is about 25 MJy sr⁻¹. In addition to the nucleus (Sect. 5.3), Telesco (1978) detected 10µm emission, weaker by a factor of about five, from a number of disk HII regions. Clearly, much more remains to be done. Figure 9. IRAS image-sharpened maps of the NGC 5128 dusty disk at 12µm (Band 1) and 60µm (Band 3), showing the warped outer edges. Courtesy D. Kester, University of Groningen. As much as 50% of the 100µm emission may be due to "cirrus" (Marston & Dickens 1988; Eckart et al. 1990a). The various far-infrared data indicate the presence of small amounts of dust outside the disk in the main elliptical galaxy as well as large amounts in the disk out to a radius of 3 kpc (Eckart et al. 1990a; Marston 1992). The total dust mass can be estimated from IRAS photometry as M_d = 1-2 × 10⁶ M⊙ (cf. Hunter et al. 1989), with a luminosity L_FIR = 2 × 10¹⁰ L⊙ (Joy et al. 1988; Eckart et al. 1990a). With considerable uncertainty, the total gas-to-dust ratio within a radius of 2 kpc is M_gas/M_dust = 450. This is an upper limit if significant amounts of cold dust are present, imperfectly sampled by IRAS. Marston & Dickens (1988) explicitly modelled the IRAS emission in terms of large, cool grains and warm, small grains, and arrived at a (distance-corrected) dust mass an order of magnitude higher, implying a gas-to-dust ratio of 45, which seems rather low. At optical and infrared wavelengths, the disk is significantly polarized, up to 6% parallel to the dust band (Elvius & Hall 1964; Hough et al. 1987; Scarrott et al. 1996). The observations by the latter show higher levels of polarization at the dust band extremities, with directions perpendicular to the dust band. Hough et al.
(1987) concluded that the dust grains sampled are about 20% smaller than those in the Milky Way, implying an extinction law differing from that in the Solar Neighbourhood, with a total-to-selective extinction ratio of 2.4. The HST R- and I-band imaging polarimetry presented by Schreier et al. (1996) shows the polarization to reach a peak at a knot close to the nucleus, shining by reflected light. Like Hough et al. (1987), they assumed scattering to be negligible elsewhere, but that is inconsistent with the apparent importance of scattering in the central region and is unlikely in view of the conclusions reached by Packham et al. (1996 - see Section 5.3). The observations by Scarrott et al. (1996) can only be explained by assuming simultaneous operation of both dichroic extinction and scattering, with the latter dominating at the dust band extremities. This suggests that the very blue colours observed by van den Bergh (1976) just at the northern dark band edge are caused by the light of intrinsically blue objects (cf. Alonso & Minniti 1997 and references therein) enhanced by scattered light and suffering relatively little extinction. This is supported by the optical continuum observations of the central region by Storchi-Bergmann et al. (1997), who find that the major contribution comes from a metal-rich old bulge, but that there are also significant contributions from young stars and from scattered light, especially at the dark band edges. The optical colours suggest significant extinction in the dust band itself (E(B−V) ≈ 0.5 mag - van den Bergh 1976; Dufour et al. 1979). Indeed, the near-infrared estimates by Harding et al. (1981) yield A_V = 3-6 mag, while HST observations indicate V-band extinctions ranging from 0.5 to 7 mag in the dark band (Schreier et al. 1996) and even reaching values in excess of 30 mag (A_K ≈ 3 mag) just south of the optically invisible nucleus (Alonso & Minniti 1997). The R-band and I-band images by Schreier et al. (1996) clearly show how the dust band is seen nearly edge-on, but slightly tilted, with the near side south of the centre so that we are looking from above. The glow of the nuclear region (but not the nucleus itself) on the north side of the dust disk is strikingly apparent even through the high extinction it suffers.
2016-02-12 16:15:41
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8216915726661682, "perplexity": 2422.03974902116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701164289.84/warc/CC-MAIN-20160205193924-00327-ip-10-236-182-209.ec2.internal.warc.gz"}
https://labs.tib.eu/arxiv/?author=M.%20Hodana
• ### Neutron-Proton Scattering in the Context of the $d^*$(2380) Resonance(1408.4928) Aug. 21, 2014 hep-ex, nucl-ex New data on quasifree polarized neutron-proton scattering, in the region of the recently observed $d^*$ resonance structure, have been obtained by exclusive and kinematically complete high-statistics measurements with WASA at COSY. This paper details the determination of the beam polarization, checks of the quasifree character of the scattering process, and reports on all obtained $A_y$ angular distributions and on the new partial-wave analysis, which includes the new data producing a resonance pole in the $^3D_3$-$^3G_3$ coupled partial waves at ($2380\pm10 - i40\pm5$) MeV -- in accordance with the $d^*$ dibaryon resonance hypothesis. The effect of the new partial-wave solution on the description of total and differential cross section data as well as specific combinations of spin-correlation and spin-transfer observables available from COSY-ANKE measurements at $T_d$ = 2.27 GeV is discussed. • ### Cross section ratio and angular distributions of the reaction p + d -> 3He + eta at 48.8 MeV and 59.8 MeV excess energy(1402.3469) June 23, 2014 nucl-ex We present new data for angular distributions and on the cross section ratio of the p + d -> 3He + eta reaction at excess energies of Q = 48.8 MeV and Q = 59.8 MeV. The data have been obtained at the WASA-at-COSY experiment (Forschungszentrum J\"ulich) using a proton beam and a deuterium pellet target. While the shapes of the obtained angular distributions show only a slow variation with the energy, the new results indicate a distinct and unexpected total cross section fluctuation between Q = 20 MeV and Q = 60 MeV, which might indicate a variation of the production mechanism within this energy interval. • ### Measurement of the $\eta\to\pi^+\pi^-\pi^0$ Dalitz plot distribution(1406.2505) June 10, 2014 hep-ex, nucl-ex The Dalitz plot distribution of the $\eta\to\pi^+\pi^-\pi^0$ decay is determined using a data sample of $1.2\cdot 10^7$ $\eta$ mesons from the $pd\to ^3\textrm{He}\eta$ reaction at 1 GeV collected by the WASA detector at COSY. • ### Evidence for a New Resonance from Polarized Neutron-Proton Scattering(1402.6844) May 6, 2014 nucl-ex Exclusive and kinematically complete high-statistics measurements of quasifree polarized $\vec{n}p$ scattering have been performed in the energy region of the narrow resonance structure $d^*$ with $I(J^P) = 0(3^+)$, $M \approx$ 2380 MeV/$c^2$ and $\Gamma \approx$ 70 MeV observed recently in the double-pionic fusion channels $pn \to d\pi^0\pi^0$ and $pn \to d\pi^+\pi^-$. The experiment was carried out with the WASA detector setup at COSY, having a polarized deuteron beam impinged on the hydrogen pellet target and utilizing the quasifree process $\vec{d}p \to np + p_{spectator}$. That way the $np$ analyzing power $A_y$ was measured over a large angular range. The obtained $A_y$ angular distributions deviate systematically from the current SAID SP07 NN partial-wave solution. Incorporating the new $A_y$ data into the SAID analysis produces a pole in the $^3D_3 - ^3G_3$ waves as expected from the $d^*$ resonance hypothesis. • ### Investigation of the dd -> 3He n \pi^0 reaction with the FZ J\"ulich WASA-at-COSY facility(1304.3561) March 15, 2014 nucl-ex An exclusive measurement of the dd -> 3He n \pi^0 reaction was carried out at a beam momentum of p = 1.2 GeV/c using the WASA-at-COSY facility. For the first time, data on the total cross section as well as differential distributions were obtained.
The data are described with a phenomenological approach based on a combination of a quasi-free model and a partial wave expansion for a three-body reaction. The total cross section is found to be \sigma(tot) = (2.89 +- 0.01(stat) +- 0.06(sys) +- 0.29(norm)) \mu b. The contribution of the quasi-free processes (with the neutron being target or beam spectator) accounts for 38% of the total cross section and dominates the differential distributions in specific regions of the phase space. The remaining part of the cross section can be described within a partial wave decomposition, indicating the significance of p-wave contributions in the final state. • ### \pi^0 \pi^0 Production in Proton-Proton Collisions at Tp=1.4 GeV(1107.0879) Feb. 14, 2014 hep-ex The reaction pp->pppi0pi0 has been investigated at a beam energy of 1.4 GeV using the WASA-at-COSY facility. The total cross section is found to be (324 +- 21_systematic +- 58_normalization) mub. In order to study the production mechanism, differential kinematical distributions have been evaluated. The differential distributions indicate that both initial state protons are excited into intermediate Delta(1232) resonances, each decaying into a proton and a single pion, thereby producing the pion pair in the final state. No significant contribution of the Roper resonance N*(1440) via its decay into a proton and two pions is found. • ### Study of the eta meson production with polarized proton beam(1401.7516) Jan. 31, 2014 nucl-ex The pp-> pp eta reaction was investigated at excess energies of 15 MeV and 72 MeV using the azimuthally symmetric WASA detector and a polarized proton beam of the Cooler Synchrotron COSY. The aim of the studies is the determination of partial wave contributions to the production process of the eta meson in nucleon-nucleon collisions. Here we present preliminary results of the extraction of the position of the interaction region with respect to the WASA detector and preliminary results on the degree of polarization of the COSY proton beam used in the experiment. • ### Search for the eta-mesic 4He with WASA-at-COSY detector(1301.0843) Aug. 28, 2013 nucl-ex An exclusive measurement of the excitation function for the dd->3Heppi- reaction was performed at the Cooler Synchrotron COSY-Juelich with the WASA-at-COSY detection system. The data were taken during a slow acceleration of the beam from 2.185 GeV/c to 2.400 GeV/c, crossing the kinematic threshold for the eta meson production in the dd->4He-eta reaction at 2.336 GeV/c. The corresponding excess energy with respect to the 4He-eta system varied from -51.4 MeV to 22 MeV. The integrated luminosity in the experiment was determined using the dd->3Hen reaction. The shape of the excitation function for the dd->3Heppi- was examined. No signal of the 4He-eta bound state was observed. An upper limit for the cross-section for the bound state formation and decay in the process dd->(4He-eta)bound->3Heppi- was determined on the 90% confidence level, and it varies from 20 nb to 27 nb for bound state widths ranging from 5 MeV to 35 MeV, respectively. • ### Search for a dark photon in the $\pi^0 \to e^+e^-\gamma$ decay(1304.0671) Aug. 27, 2013 hep-ph, hep-ex, nucl-ex The presently world's largest data sample of pi0 --> gamma e+e- decays, containing nearly 5E5 events, was collected using the WASA detector at COSY. A search for a dark photon U produced in the pi0 --> gamma U --> gamma e+e- decay from the pp-->pp\pi^0 reaction was carried out.
An upper limit on the square of the U-gamma mixing strength parameter epsilon^2 of 5e-6 at 90% CL was obtained for the mass range 20 MeV <M_U< 100 MeV. This result together with other recent experimental limits significantly reduces the M_U vs. \epsilon^2 parameter space preferred by the measured value of the muon anomalous magnetic moment. • ### Isospin Decomposition of the Basic Double-Pionic Fusion in the Region of the ABC Effect(1212.2881) Dec. 12, 2012 hep-ex, nucl-ex Exclusive and kinematically complete high-statistics measurements of the basic double pionic fusion reactions pn -> dpi0pi0, pn -> d pi+pi- and pp -> dpi+pi0 have been carried out simultaneously over the energy region of the ABC effect using the WASA detector setup at COSY. Whereas the isoscalar reaction part given by the dpi0pi0 channel exhibits the ABC effect, i.e. a low-mass enhancement in the pipi-invariant mass distribution, as well as the associated resonance structure in the total cross section, the isovector part given by the dpi+pi0 channel shows a smooth behavior consistent with the conventional t-channel Delta Delta process. The dpi+pi- data are very well reproduced by combining the data for isovector and isoscalar contributions, if the kinematical consequences of the isospin violation due to different masses for charged and neutral pions are taken into account. • ### ABC Resonance Structure in the Double-Pionic Fusion to 4He(1206.6337) June 27, 2012 hep-ph, hep-ex, nucl-ex Exclusive and kinematically complete high-statistics measurements of the double pionic fusion reaction $dd \to ^4$He$\pi^0\pi^0$ have been performed in the energy range 0.8 - 1.4 GeV covering thus the region of the ABC effect, which denotes a pronounced low-mass enhancement in the $\pi\pi$-invariant mass spectrum. The experiments were carried out with the WASA detector setup at COSY. Similar to the observation in the basic $pn \to d \pi^0\pi^0$ reaction, the data reveal a correlation between the ABC effect and a resonance-like energy dependence in the total cross section. The maximum occurs at m=2.37 GeV + 2$m_N$, i.e. at the same position as in the basic reaction. The observed resonance width $\Gamma \approx$ 160 MeV can be understood from broadening due to Fermi motion of the nucleons in initial and final nuclei together with collision damping. Differential cross sections are described equally well by the hypothesis of a $pn$ resonance formation during the reaction process. • ### Proceedings of the second International PrimeNet Workshop(1204.5509) April 24, 2012 nucl-ex, nucl-th These are the proceedings of the second PrimeNet Workshop, held in September 26-28, 2011, at the campus of Forschungszentrum J\"{u}lich, Germany. This workshop is part of the activities in the project "Study of Strongly Interacting Matter" (acronym HadronPhysics2), which is an integrating activity of the Seventh Framework Program of EU. This HP2 project contains several activities, one of them being the network PrimeNet having the focus on Meson Physics in Low-Energy QCD. This network is created to exchange information on experimental and theoretical ongoing activities on mainly eta and eta-prime physics at different European accelerator facilities and institutes. • ### Exclusive Measurement of the eta --> pi+ pi- gamma Decay(1107.5277) Nov. 17, 2011 hep-ex, nucl-ex An exclusive measurement of the decay eta --> pi+ pi- gamma has been performed at the WASA facility at COSY. 
The eta mesons were produced in the fusion reaction pd --> 3He X at a proton beam momentum of 1.7 GeV/c. Efficiency corrected differential distributions have been extracted based on $13340 \pm 140$ events after background subtraction. The measured pion angular distribution is consistent with a relative p-wave of the two-pion system, whereas the measured photon energy spectrum was found to be at variance with the simplest gauge invariant matrix element of eta --> pi+ pi- gamma. A parameterization of the data can be achieved by the additional inclusion of the empirical pion vector form factor multiplied by a first-order polynomial in the squared invariant mass of the pi+ pi- system.

• ### ABC Effect in Basic Double-Pionic Fusion --- Observation of a new resonance?(1104.0123)

April 1, 2011 nucl-ex

We report on a high-statistics measurement of the basic double pionic fusion reaction $pn \to d\pi^0\pi^0$ over the energy region of the so-called ABC effect, a pronounced low-mass enhancement in the $\pi\pi$-invariant mass spectrum. The measurements were performed with the WASA detector setup at COSY. The data reveal the ABC effect to be associated with a Lorentzian shaped energy dependence in the integral cross section. The observables are consistent with a resonance with $I(J^P) = 0(3^+)$ in both $pn$ and $\Delta\Delta$ systems. Necessary further tests of the resonance interpretation are discussed.

• ### Dynamics of the near threshold eta meson production in proton-proton interaction(0711.4998)

July 29, 2008 nucl-ex

We present the results of measurements of the analysing power for the p(pol)p --> pp eta reaction at the excess energies of Q=10 and 36 MeV, and interpret these results within the framework of the meson exchange models. The determined values of the analysing power at both excess energies are consistent with zero, implying that the eta meson is produced predominantly in the s-wave.

• ### Mechanism of the close-to-threshold production of the eta meson(hep-ex/0611015)

Feb. 22, 2007 hep-ex

Measurements of the analysing power for the p(pol)p --> pp eta reaction have been performed in the close-to-threshold energy region at beam momenta of p_{beam} = 2.010 and 2.085 GeV/c, corresponding to excess energies of Q=10 and 36 MeV, respectively. The determined analysing power is essentially consistent with zero, implying that the eta meson is produced predominantly in the s-wave at both excess energies. The angular dependence of the analysing power, combined with the hitherto determined isospin dependence of the total cross section for the eta meson production in nucleon-nucleon collisions, reveals a statistically significant indication that the excitation of the nucleon to the S_{11}(1535) resonance, the process which intermediates the production of the eta meson, is predominantly due to the exchange of the pi meson between the colliding nucleons.
https://or.stackexchange.com/questions/4309/pyomo-add-constraint-error-rule-failed-when-generation-expression-for-constrain
# Pyomo add constraint error: Rule failed when generating expression for constraint

I am trying to solve a model with Pyomo and struggling with indexing. Below is a simple problem instance, where you can also see the error. The message is straightforward and self-explanatory, but I failed to resolve the issue. It stems from using the k_nearest_vehicles dictionary, which is keyed by the items of the Riders list. I tried to use Xindex as a solution but it didn't quite work. Please let me know where I am going wrong.

import pyomo.environ as pio

M_threshold = 30
Riders = [(1926.0, 0, 0)]
k_nearest_vehicles = {(1926.0, 0, 0): [(913.0, 0, 36), (913.0, 0, 37), (917.0, 0, 0)]}
zone_to_zone_tt = {(913.0, 1926.0): 27.523453, (917.0, 1926.0): 29.937351}

m = pio.ConcreteModel('Transportation_Problem')
Xindex = [(i, j) for j in Riders for i in k_nearest_vehicles[j]]
m.x = pio.Var([i for i in k_nearest_vehicles[j] for j in Riders],
              [j for j in Riders], domain=pio.NonNegativeReals)
m.OBJ = pio.Objective(expr=(sum((zone_to_zone_tt[i[0], j[0]] - M_threshold) * m.x[i, j]
                                for (i, j) in Xindex)), sense=pio.minimize)

def Cons1(m, i):
    return (sum(m.x[i, j] for j in Riders) <= 1)
m.AxbConstraint1 = pio.Constraint([i for i in k_nearest_vehicles[j] for j in Riders], rule=Cons1)

def Cons2(m, j):
    return (sum(m.x[i, j] for i in k_nearest_vehicles[j]) <= 1)
m.AxbConstraint2 = pio.Constraint(Riders, rule=Cons2)

opt = pio.SolverFactory()
results = opt.solve(m, tee=True)

ERROR: Rule failed when generating expression for constraint AxbConstraint1
    with index (913.0, 0, 36): TypeError: Cons1() takes 2 positional
    arguments but 4 were given
ERROR: Constructing component 'AxbConstraint1' from data=None failed:
    TypeError: Cons1() takes 2 positional arguments but 4 were given

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
~/anaconda3/lib/python3.7/site-packages/pyomo/core/base/misc.py in apply_indexed_rule(obj, rule, model, index, options)
     56     if index.__class__ is tuple:
---> 57         return rule(model, *index)
     58     elif index is None and not obj.is_indexed():

TypeError: Cons1() takes 2 positional arguments but 4 were given

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
~/anaconda3/lib/python3.7/site-packages/pyomo/core/base/misc.py in apply_indexed_rule(obj, rule, model, index, options)
     71     if options is None:
---> 72         return rule(model)
     73     else:

TypeError: Cons1() missing 1 required positional argument: 'i'

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input-62-353b262f79fa> in <module>
     14 def Cons1(m,i):
     15     return (sum(m.x[i,j] for j in Riders) <= 1)
---> 16 m.AxbConstraint1 = pio.Constraint([i for i in k_nearest_vehicles[j] for j in Riders], rule=Cons1)
     17
     18 def Cons2(m,j):

~/anaconda3/lib/python3.7/site-packages/pyomo/core/base/block.py in __setattr__(self, name, val)
   1129                     _blockName, str(data))
   1130         try:
-> 1131             val.construct(data)
   1132         except:
   1133             err = sys.exc_info()[1]

~/anaconda3/lib/python3.7/site-packages/pyomo/core/base/constraint.py in construct(self, data)
    777                     _init_rule,
    778                     _self_parent,
--> 779                     ndx)
    780             except Exception:
    781                 err = sys.exc_info()[1]

~/anaconda3/lib/python3.7/site-packages/pyomo/core/base/misc.py in apply_indexed_rule(obj, rule, model, index, options)
     78     if options is None:
     79         if index.__class__ is tuple:
---> 80             return rule(model, *index)
     81     elif index is None and not obj.is_indexed():
     82

TypeError: Cons1() takes 2 positional arguments but 4 were given

I also wanted to share the Gurobi way of modeling, which works like a champ. But I am trying to re-write this in Pyomo to be able to use open-source solvers.

from gurobipy import *

m = Model("My_problem")
# assumed: one decision variable per feasible (vehicle, rider) pair
# (the variable-creation line did not survive intact in the post)
x = {(i, j): m.addVar(vtype=GRB.CONTINUOUS)
     for j in Riders for i in k_nearest_vehicles[j]}
m.setObjective(quicksum((zone_to_zone_tt[i[0], j[0]] - M_threshold) * x[i, j]
                        for (i, j) in x.keys()), GRB.MINIMIZE)
for i in Vehicles:
    m.addConstr(quicksum(x[i, j] for j in Riders if (i, j) in x.keys()) <= 1,
                name="each_vehicle_to_at_most_one_rider%s" % ([i]))
for j in Riders:
    m.addConstr(quicksum(x[i, j] for i in Vehicles if (i, j) in x.keys()) <= 1,
                name="each_rider_to_at_most_one_vehicle%s" % ([j]))
m.update()
m.optimize()

• Hello and welcome to OR.SE, to solve your indexing problem I need to know what's the connection between Riders and k_nearest_vehicles. – Oguz Toragay Jun 4 '20 at 4:46
• Sure, I have two lists: Riders and Vehicles. Based on a k-nearest logic, I create a dictionary called k_nearest_vehicles which is a subset of Vehicles for a given rider in Riders. Hence, k_nearest_vehicles includes a bunch of unique vehicle ids for a given index in Riders. – tcokyasar Jun 4 '20 at 4:55
• so the combination of rider-vehicle is unique, right? – Oguz Toragay Jun 4 '20 at 4:56
• so how many riders do you have in your example problem? – Oguz Toragay Jun 4 '20 at 4:59
• I can index the same vehicle for different riders, though my example involves only a single rider with an id of (1926.0, 0, 0). In this example, I have three vehicles with ids of (913.0, 0, 36), (913.0, 0, 37), (917.0, 0, 0). I understand the confusion. I wish I had made an example with two riders. The second rider could have (917.0, 0, 0), which appears in the first rider, and some other vehicles. – tcokyasar Jun 4 '20 at 5:02

In Pyomo, indexes are sets, and variables are defined over those sets. In your problem, you need to define a set of all members of Riders and all members of k_nearest_vehicles. To define an index set for the combination of these two sets, you can indicate that the members of a set are restricted to be in the cross product of two other sets using the within keyword:

m.combination = pio.Set(within=m.Vehicles * m.Riders)

Also, if you can preprocess (as you also mentioned) your riders and vehicles, it will make your model easier to understand. The following is a simplified form of your problem (based on my understanding), which I could solve to optimality using Cplex and glpk.
import pyomo.environ as pio

M_threshold = 30
Riders = [1926.0]
k_nearest_vehicles = {1926.0: [913.0, 917.0]}
zone_to_zone_tt = {(913.0, 1926.0): 27.523453, (917.0, 1926.0): 29.937351}

m = pio.ConcreteModel('Transportation_Problem')
m.Riders_ind = set(range(len(Riders)))
m.KNV_ind = set(range(len(k_nearest_vehicles[1926.0])))
m.x = pio.Var(m.KNV_ind, m.Riders_ind, domain=pio.NonNegativeReals)
m.OBJ = pio.Objective(expr=(sum((zone_to_zone_tt[k_nearest_vehicles[1926.0][i], Riders[j]] - M_threshold) * m.x[i, j]
                                for i in m.KNV_ind for j in m.Riders_ind)),
                      sense=pio.minimize)

def Cons1(m, i):
    return (sum(m.x[i, j] for j in m.Riders_ind) <= 1)
m.AxbConstraint1 = pio.Constraint([i for i in m.KNV_ind for j in m.Riders_ind], rule=Cons1)

def Cons2(m, j):
    return (sum(m.x[i, j] for i in m.KNV_ind) <= 1)
m.AxbConstraint2 = pio.Constraint(m.Riders_ind, rule=Cons2)

opt = pio.SolverFactory('cplex')
results = opt.solve(m, tee=True)
print(results)

and the results:

GLPSOL: GLPK LP/MIP Solver, v4.65
Parameter(s) specified in the command line:
 --cpxlp C:\TEMP\tmp4niztoc0.pyomo.lp
4 rows, 3 columns, 5 non-zeros
21 lines were written
GLPK Simplex Optimizer, v4.65
4 rows, 3 columns, 5 non-zeros
Preprocessing...
1 row, 2 columns, 2 non-zeros
Scaling...
 A: min|aij| = 1.000e+00  max|aij| = 1.000e+00  ratio = 1.000e+00
Problem data seem to be well scaled
Constructing initial basis...
Size of triangular part is 1
*     0: obj =   0.000000000e+00 inf =   0.000e+00 (2)
*     2: obj =  -2.476547000e+00 inf =   0.000e+00 (0)
OPTIMAL LP SOLUTION FOUND
Time used:   0.0 secs
Memory used: 0.0 Mb (40400 bytes)
Writing basic solution to 'C:\TEMP\tmpfm31ikz2.glpk.raw'...
16 lines were written

Problem:
- Name: unknown
  Lower bound: -2.476547
  Upper bound: -2.476547
  Number of objectives: 1
  Number of constraints: 4
  Number of variables: 3
  Number of nonzeros: 5
  Sense: minimize
Solver:
- Status: ok
  Termination condition: optimal
  Statistics:
    Branch and bound:
      Number of bounded subproblems: 0
      Number of created subproblems: 0
  Error rc: 0
  Time: 0.2938816547393799
Solution:
- number of solutions: 0
  number of solutions displayed: 0

• Oguz, unfortunately, this is not a solution to my problem. I intentionally keep Riders = [(1926.0, 0, 0)] as a list of tuple(s) because that tuple is a unique id. I know that is what causes the issue, as Pyomo treats each component of the tuple as a key when you feed it into the constraint. So, I cannot reduce (1926.0, 0, 0) to 1926. But, sure, I have to convert each of these unique ids into range(len(input)), populate the problem, solve it, and fetch the results back by mapping the ranges to my initial list conversion. Pyomo also handles lists and arrays as iterators. – tcokyasar Jun 4 '20 at 14:43
• Shortly, I sort of knew the solution. But the implementation was a pain in the ..., so I looked for a shortcut if possible. – tcokyasar Jun 4 '20 at 14:44
• You are right. It sometimes takes lots of effort to preprocess model input data and post-process the result. But as you said, there should be an easier way for that. Good luck in solving this issue. – Oguz Toragay Jun 4 '20 at 16:02
• By mapping the ids into lists of ranges, I took care of the issue. Indeed, pre-processing was not necessary (as seen in the Gurobi example) if Pyomo could accept tuples as indices. It is weird that it doesn't, because strings and tuples are common indices in programming. – tcokyasar Jun 4 '20 at 17:54
• Happy to hear that you could solve the problem. – Oguz Toragay Jun 4 '20 at 17:56
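A side note on the traceback itself: it shows Pyomo calling `return rule(model, *index)`, i.e. Pyomo unpacks a tuple index into separate positional arguments, which is exactly why Cons1(m, i) received 4 arguments for the 3-field vehicle id (913.0, 0, 36). A minimal, untested sketch of a rule that accepts and re-packs those components, so the original tuple ids could be kept:

# Sketch only: re-pack the unpacked index components inside the rule.
# Pyomo called Cons1(model, 913.0, 0, 36) for the index (913.0, 0, 36).
def Cons1(m, a, b, c):
    i = (a, b, c)  # rebuild the vehicle id tuple
    return sum(m.x[i, j] for j in Riders) <= 1

m.AxbConstraint1 = pio.Constraint(
    [i for j in Riders for i in k_nearest_vehicles[j]], rule=Cons1)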
https://www.acmicpc.net/problem/3563
Time limit: 3 seconds | Memory limit: 128 MB | Submissions: 5 | Accepted: 3 | Solvers: 3 | Ratio: 75.000%

## Problem

There are n ≥ 2 people labeled as 1, 2, …, n such that each of them is either a truth-teller or a liar, and the number of liars is less than or equal to t for some t (≤ n). Each person i can test another person j in order to identify person j as a truth-teller or a liar by giving some question to person j. The outcome a_{i,j} of the test applied by person i to person j is 1 (0) if person i identifies person j as a liar (truth-teller). The test outcome a_{i,j} is reliable if and only if the testing person i is a truth-teller. That is, the test outcome a_{i,j} is unreliable if and only if the testing person i is a liar. The following table shows the value of the test outcome a_{i,j} when person i tests person j.

Testing is performed circularly as follows: person 1 tests person 2, person 2 tests person 3, …, person n-1 tests person n, and person n tests person 1. From the test outcomes, some persons are definitely liars, but some others may or may not be liars. From n, t, and the test outcomes, determine the persons who are definitely liars.

For example, let n = 5, t = 2, and the test outcomes (a_{1,2}, a_{2,3}, a_{3,4}, a_{4,5}, a_{5,1}) be (0, 1, 1, 0, 0). In the following figure, each circle represents a person, and the label on the edge (i, j) represents the test outcome a_{i,j}. In this example, person 3 must be a liar: if not, then person 4 and person 2 are liars, and in turn persons 1 and 5 become liars as well, which contradicts the condition that the number of liars does not exceed t = 2. Therefore person 3 is determined to be a definite liar. However, because both {person 3, person 4} and {person 3} can be sets of liars, we can't determine whether person 4 is a liar.

Given n (the number of persons), t (the maximum number of liars), and the set of test outcomes, write a program to find all the persons who are definitely liars. It is assumed that the given set of outcomes is one that results from some set of liars of size at most t.

## Input

The input consists of T test cases. The number of test cases (T) is given in the first line of the input file. Each test case consists of two lines. The first line has two integers. The first integer is n (1 ≤ n ≤ 1000), the number of persons, and the second integer is t (0 ≤ t ≤ n), the maximum number of liars. The second line contains n 0s or 1s that represent a_{1,2}, a_{2,3}, a_{3,4}, …, a_{n-1,n}, a_{n,1}.

## Output

Print exactly one line for each test case. The line should contain two integers. The first integer is the number of definite liars. The second integer is the smallest label among the definite liars. If the number of definite liars is 0, the second integer should be 0.

## Sample Input

3
5 2
0 1 1 0 0
7 2
0 0 1 0 0 1 1
9 8
1 0 0 0 0 1 0 0 0

## Sample Output

1 3
2 4
0 0
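For illustration only, a brute-force checker of the definition (exponential in t, far too slow for n up to 1000, but it reproduces the first sample): enumerate every liar set of size at most t, keep the ones consistent with the outcomes (a truth-teller's outcome must match reality; a liar's outcome imposes no constraint), and intersect the consistent sets.

from itertools import combinations

def definite_liars(n, t, a):
    # a[i] is the outcome of person i+1 testing the next person on the circle
    people = range(n)
    common = None
    for k in range(t + 1):
        for liars in combinations(people, k):
            s = set(liars)
            # every truth-teller's report about the next person must be true
            if all(a[i] == ((i + 1) % n in s) for i in people if i not in s):
                common = set(s) if common is None else common & s
    return sorted(p + 1 for p in common) if common else []

# first sample case: n=5, t=2, outcomes (0,1,1,0,0) -> only person 3
print(definite_liars(5, 2, [0, 1, 1, 0, 0]))  # [3]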
https://proofwiki.org/wiki/Definition:Juggler_Sequence
# Definition:Juggler Sequence

## Definition

Let $m \in \Z_{>0}$ be a positive integer.

The juggler sequence on $m$ is defined recursively as:

$J_m \left({n}\right) = \begin{cases} m & : n = 0 \\ \left\lfloor{\sqrt {J_m \left({n - 1}\right)} }\right\rfloor & : J_m \left({n - 1}\right) \text{ even} \\ \left\lfloor{\sqrt {\left({J_m \left({n - 1}\right)}\right)^3} }\right\rfloor & : J_m \left({n - 1}\right) \text{ odd} \end{cases}$

where:

$\left\lfloor{x}\right\rfloor$ denotes the floor of $x$

$\sqrt x$ denotes the positive square root of $x$.

## Examples

### Juggler Sequence on $37$

The Juggler sequence on $37$ is:

$37, 225, 3375, 196069, 86818724, 9317, 899319, 852846071, 24906114455136, 4990602, 2233, 105519, 34276462, 5854, 76, 8, 2, 1$
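Since the recursion branches on the parity of the previous term, a short sketch that reproduces the example (using exact integer arithmetic via math.isqrt, because floating-point square roots lose precision once terms reach fourteen digits):

from math import isqrt

def juggler(m):
    # Juggler sequence on m (assumes m >= 1), following the recursion above:
    # floor(sqrt(j)) if the previous term j is even,
    # floor(sqrt(j^3)) if it is odd. Stops at 1.
    seq = [m]
    while seq[-1] != 1:
        j = seq[-1]
        seq.append(isqrt(j) if j % 2 == 0 else isqrt(j ** 3))
    return seq

print(juggler(37))  # matches the example: 37, 225, 3375, ..., 76, 8, 2, 1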
http://t-news.cn/Floc2018/FLoC2018-pages/paper2466.html
FLOC 2018: FEDERATED LOGIC CONFERENCE 2018

Critical pairs for Gray categories

Author: Simon Forest

Paper Information

Title: Critical pairs for Gray categories
Authors: Simon Forest
Proceedings: IWC Final papers
Editors: Jakob Grue Simonsen and Bertram Felgenhauer
Keywords: Gray categories, rewriting, higher categories, Newman's lemma

Abstract:

ABSTRACT. Higher categories are a generalization of standard categories where there are not only $1$-cells between $0$-cells but more generally $n{+}1$-cells between $n$-cells. They are more and more used in mathematics, physics and computer science. They can notably be used to represent algebraic structures. There are several variants, going from weak categories, which are the most general formalism but also the hardest to manipulate, to strict categories, simpler but less general. One usually wants both the expressive power of weak categories and the simplicity of strict categories. Semi-strict categories, such as Gray categories in dimension $3$, are an in-between formalism that is used in this work. Here, we are interested in proving \emph{coherence} of certain algebraic structures in dimension $3$ using rewriting, where "coherence" is the property that there is at most one $3$-cell between two $2$-cells. It amounts to computing the critical pairs of a rewriting system and using a variant of Newman's lemma. In this setting, an algorithm exists to compute these critical pairs.

Pages: 5
Talk: Jul 07 11:30 (Session 26G: Algebraic Structures and Coherence)
https://tex.stackexchange.com/questions/224184/zero-space-between-bars-of-the-same-interval-in-pgfplots-ybar-interval-plots
# Zero space between bars of the same interval in pgfplots ybar interval plots

I need a bar plot (containing multiple \addplot) with the following requirements:

1. ticks between x labels (and, in this respect, my question differs from Adjusting width of ybar interval separator to width of histogram bars),
2. non-zero space between bars of adjacent x coordinates,
3. zero space between bars corresponding to the same x coordinate.

I could manage requirements 1 and 2:

1. by using a ybar interval plot,
2. by using a value <1 for the ybar interval option,

as shown by the following MWE:

\documentclass{article}
\usepackage{pgfplots}
\usepackage{filecontents}
%
\pgfplotsset{compat=1.11}
%
\begin{filecontents}{data.txt}
A B C D
0 13 9 19
1 0 1 5.5
2 0 4 4
3 1 3 14.5
4 3 8 6
5 1 8 6.5
6 2 5 5.5
7 0 7 14
8 8 14 6
9 0 5 12.5
10 0 14 17.5
\end{filecontents}
%
\begin{document}
\begin{tikzpicture}
\begin{axis}[%
    ybar interval=0.5,%
    width=\textwidth%
    ]
    % three bar plots, one per data column (as in the answer below)
    \addplot table [x=A,y=B] {data.txt};
    \addplot table [x=A,y=C] {data.txt};
    \addplot table [x=A,y=D] {data.txt};
\end{axis}
\end{tikzpicture}
\end{document}

But I don't know how to remove the horizontal space between the bars of the same "interval". How could I achieve this?

Here I present a solution without ybar interval. For details on how it works, please have a look at the comments in the code.

% used PGFPlots v1.14
\begin{filecontents}{data.txt}
A B C D
0 13 9 19
1 0 1 5.5
2 0 4 4
3 1 3 14.5
4 3 8 6
5 1 8 6.5
6 2 5 5.5
7 0 7 14
8 8 14 6
9 0 5 12.5
10 0 14 17.5
\end{filecontents}
\documentclass[border=5pt]{standalone}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
    \begin{axis}[
        width=\textwidth,
        % set space between adjacent bars to zero
        ybar=0pt,
        % adjust bar width so the bars are not overlapping with the bars
        % of another x value
        % (this depends on the chosen plot `width', `xmin' and `xmax' values)
        bar width=7pt,
        % to show each x value
        xtick distance=1,
        % set `xmin' and `xmax' values manually so the bars aren't clipped
        xmin=-0.5,
        xmax=10.5,
        % set the tick length of the `xticks' to zero ...
        xtick style={
            % (I think this key has to be prefixed by /pgfplots, because
            % normally just tikz keys are expected here)
            /pgfplots/major tick length=0pt,
        },
        % but show them *between* the x values together with the grid
        extra x ticks={-0.5,0.5,...,10.5},
        extra x tick labels=\empty,
        extra x tick style={
            grid=major,
            % reset the tick length to the default value
            % (which otherwise would be the same as for the normal ticks,
            % which is set to zero in this case --> see above)
            xtick style={
                /pgfplots/major tick length=4pt,
            },
        },
    ]
        \addplot table [x=A,y=B] {data.txt};
        \addplot table [x=A,y=C] {data.txt};
        \addplot table [x=A,y=D] {data.txt};
    \end{axis}
\end{tikzpicture}
\end{document}

• Nice! But I was hoping for a solution which doesn't involve manual intervention. – Denis Bitouzé Dec 11 '16 at 17:20
• If by "manual intervention" you mean that bar width should be automatically calculated depending on ... (this would be a long list), then I guess that there (currently, i.e. with PGFPlots v1.14) is no such feature. – Stefan Pinnow Dec 11 '16 at 17:24
https://groupprops.subwiki.org/wiki/Proving_intersection-closedness
# Proving intersection-closedness

This is a survey article related to: subgroup metaproperty satisfaction
View other survey articles about subgroup metaproperty satisfaction

A subgroup property $p$ is termed an intersection-closed subgroup property if an arbitrary (nonempty) intersection of subgroups having property $p$ also has property $p$. $p$ is termed a strongly intersection-closed subgroup property if it is intersection-closed and is also an identity-true subgroup property -- it is satisfied by every group as a subgroup of itself.

$p$ is termed a finite-intersection-closed subgroup property if the intersection of finitely many subgroups satisfying the property $p$ also has the property $p$. $p$ is a strongly finite-intersection-closed subgroup property if it is finite-intersection-closed and identity-true.

This article discusses techniques to prove that a given subgroup property is intersection-closed.

Also refer:

## Invariance properties

Further information: Invariance implies strongly intersection-closed

Suppose $a$ is a property of functions from a group to itself. The invariance property corresponding to $a$ is defined as the following property $p$: $H$ has property $p$ in $G$ if every function from $G$ to itself satisfying property $a$ sends $H$ to within itself.

Invariance properties are strongly intersection-closed. In other words, they are closed under arbitrary intersections, and every group satisfies the property as a subgroup of itself. Here are some examples:

## Left-hereditary subgroup properties

Further information: Left-hereditary implies intersection-closed

A subgroup property $p$ is termed a left-hereditary subgroup property if, whenever $H$ is a subgroup of a group $G$ satisfying property $p$ in $G$, any subgroup $K$ of $H$ also satisfies property $p$ in $G$.

Left-hereditary subgroup properties are intersection-closed for obvious reasons. However, a left-hereditary subgroup property is not identity-true unless it is the tautology. Hence, it is not a strongly intersection-closed subgroup property. Some examples are:

## Galois correspondences

Some subgroup properties arise as a result of Galois correspondences. We call such a property a Galois correspondence-closed subgroup property. We start with a rule which, for every group, gives a binary relation between the group and another set constructed canonically from the group. The rule must be isomorphism-invariant, in the sense that any isomorphism of groups respects the binary relation. The subgroup property we now get is the property of being a subgroup which is also a closed subset of the group under the Galois correspondence induced by the binary relation.

Any Galois correspondence-closed subgroup property is strongly intersection-closed. Some examples are:

## Property of a normal subgroup based on the isomorphism class of its quotient group

Suppose $a$ is a group property and $p$ is the property of being a normal subgroup of a group for which the quotient group has property $a$. Then (via the embedding sketched below):

• If $a$ is closed under taking finite subdirect products, then $p$ is finite-intersection-closed. In particular, for instance, if $a$ is a quasivarietal group property, $p$ is strongly finite-intersection-closed.
• If $a$ is closed under arbitrary subdirect products, then $p$ is intersection-closed. In particular, for instance, if $a$ is a varietal group property, $p$ is a strongly intersection-closed subgroup property.
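The mechanism behind both bullets is the diagonal embedding of the quotient by an intersection into the product of the individual quotients; a sketch:

% The quotient by the intersection embeds as a subdirect product of the
% individual quotients (each coordinate projection is surjective):
\[
  G \Big/ \bigcap_{i \in I} N_i \;\hookrightarrow\; \prod_{i \in I} G / N_i,
  \qquad
  g \, {\textstyle\bigcap_{i \in I} N_i} \;\longmapsto\; \left( g N_i \right)_{i \in I}.
\]
% The kernel of g |-> (g N_i)_i is exactly the intersection of the N_i,
% so the induced map on the quotient is injective; closure of a under
% finite (resp. arbitrary) subdirect products then yields the two bullets.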
## Effect of logical operators

### Conjunction

Further information: Intersection-closedness is conjunction-closed

If $p$ and $q$ are both intersection-closed subgroup properties, so is the conjunction (AND) of $p$ and $q$. More generally, the conjunction of an arbitrary collection of intersection-closed subgroup properties is intersection-closed.

Note that in some cases, one of the properties in the conjunction is a group property interpreted as a subgroup property. In this case, it suffices to show that the group property is a subgroup-closed group property and the subgroup property is intersection-closed.

Analogous observations apply to strongly intersection-closed, finite-intersection-closed, and strongly finite-intersection-closed subgroup properties.

Some examples of conjunctions of intersection-closed properties that continue to be intersection-closed:

## Effect of subgroup property modifiers

In this section, we discuss various subgroup property modifiers and their impact on the metaproperty of being intersection-closed and finite-intersection-closed.

### Intersection-transiter

By definition, the intersection-transiter of any subgroup property is a strongly finite-intersection-closed subgroup property. In other words, it is satisfied by every group as a subgroup of itself and is closed under all finite intersections.

### Intersection-closure operator

Here are some facts:

### Intersection operator

The intersection operator takes as input two subgroup properties $p$ and $q$, and outputs the property $p \cap q$, defined as follows. A subgroup $H$ of a group $G$ satisfies property $p \cap q$ in $G$ if there are subgroups $K, L$ of $G$ such that $K$ satisfies $p$ in $G$, $L$ satisfies $q$ in $G$, and $H = K \cap L$.

Here are some facts:

Analogous results hold for strongly finite-intersection-closed and strongly intersection-closed.

### Left residual, left transiter

Let $p$ and $q$ be subgroup properties. The left residual of $p$ by $q$ is the following subgroup property $r$: a subgroup $H$ of a group $G$ satisfies property $r$ in $G$ if, for any group $K$ containing $G$ as a subgroup with property $q$, $K$ contains $H$ with property $p$.

It turns out that if $p$ is intersection-closed, so is the left residual of $p$ by $q$; a sketch of the argument is given at the end of this article. Analogous observations hold for strongly intersection-closed and finite-intersection-closed.

The left transiter of a subgroup property is its left residual by itself. The above result shows that the left transiter of any intersection-closed subgroup property is intersection-closed. Here are some examples:

### Transfer condition operator

Further information: Transfer condition operator preserves intersection-closedness

The transfer condition operator $T$ is defined as follows. For a subgroup property $p$, $T(p)$ is defined as follows: a subgroup $H$ of a group $G$ satisfies property $T(p)$ if, for any subgroup $K$ of $G$, $H \cap K$ satisfies property $p$ in $K$.
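As promised above, the left-residual claim has a short proof; a sketch in the notation of that section:

% Sketch: intersection-closedness of p passes to the left residual r of p by q.
% Suppose each H_i satisfies r in G, and let K be any group containing G
% with property q. By definition of r, each H_i satisfies p in K, hence
\[
  \bigcap_{i \in I} H_i \ \text{ satisfies } p \text{ in } K
\]
% by intersection-closedness of p. Since K was arbitrary, the intersection
% of the H_i satisfies r in G.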
https://socratic.org/questions/how-do-you-find-the-greatest-common-factor-of-20-2-and-4
# How do you find the greatest common factor of 20, 2, and 4?

The factors of 20 are $20, 10, 5, 4, 2, 1$, the factors of 4 are $4, 2, 1$, and the factors of 2 are $2, 1$. The largest number appearing in all three lists is $2$, so the greatest common factor is $2$.
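For a programmatic check, the GCF of several numbers can be computed by folding the pairwise GCD over the list:

from functools import reduce
from math import gcd

# gcd(gcd(20, 2), 4) == 2, matching the factor lists above
print(reduce(gcd, [20, 2, 4]))  # 2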
https://petsc.org/release/docs/manualpages/PC/PCApplyBAorABTranspose/
# PCApplyBAorABTranspose# Applies the transpose of the preconditioner and operator to a vector. That is, applies tr(B) * tr(A) with left preconditioning, NOT tr(B*A) = tr(A)*tr(B). ## Synopsis# #include "petscksp.h" PetscErrorCode PCApplyBAorABTranspose(PC pc, PCSide side, Vec x, Vec y, Vec work) Collective ## Input Parameters# • pc - the preconditioner context • side - indicates the preconditioner side, one of PC_LEFT, PC_RIGHT, or PC_SYMMETRIC • x - input vector • work - work vector ## Output Parameter# • y - output vector ## Note# This routine is used internally so that the same Krylov code can be used to solve A x = b and A’ x = b, with a preconditioner defined by B’. This is why this has the funny form that it computes tr(B) * tr(A) PC, PCApply(), PCApplyTranspose(), PCApplyBAorAB()
https://www.sautinsoft.net/help/document-net/html/Properties_T_SautinSoft_Document_PageSetup.htm
# PageSetup Properties

The PageSetup type exposes the following members.

Properties

• Borders - Gets collection of borders of the page.
• LineNumberDistanceFromText - Gets or sets the line number distance from text (in points).
• LineNumberIncrement - Gets or sets the line number increments to be displayed.
• LineNumberRestartSetting - Gets or sets the line number restart setting.
• LineStartingNumber - Gets or sets the line starting number.
• Orientation - Gets or sets the page orientation.
• PageColor - Gets or sets the background color for all pages of the parent section.
• PageHeight - Gets or sets the height of the page (in points).
• PageMargins - Gets or sets the page margins.
• PageNumberStyle - Gets or sets the number style for the page number.
• PageStartingNumber - Gets or sets the number that appears on the first page of the section.
• PageWidth - Gets or sets the width of the page (in points).
• PaperType - Gets or sets the type of the paper for the page.
• SectionStart - Gets or sets the type of section start.
• TextColumns - Gets or sets the text columns.
• TitlePage - Gets or sets a value indicating whether the parent section of the document shall have a different header and footer for its first page.
https://quantumcomputing.stackexchange.com/questions/13196/reduced-density-matrix-equation-of-motion-to-describe-an-ellipse
# Reduced Density Matrix Equation of Motion to describe an Ellipse

Given a pure two-qubit state $|\psi_{AB}\rangle$. If we trace out system $B$, the remaining density matrix $\rho_A = Tr_B|\psi_{AB}\rangle\langle\psi_{AB}|$ can be represented as a point lying anywhere on or inside a Bloch sphere. When you're on the Bloch sphere you have a separable state; when you're in the center, your state is maximally entangled. So by entanglement you can affect the distance from the center.

How do I have to steer the composite system $|\psi_{AB}\rangle$ (by applying time-varying unitaries $U(t)$), such that the resulting trajectory on or inside the Bloch sphere of system $A$ is an ellipse?

$$\rho_A(t)=Tr_B \left( U(t)|\psi_{AB}\rangle\langle\psi_{AB}|U'(t)\right) \sim\pmatrix{x(t)\\y(t)\\z(t)}_{\text{Bloch}_A} \text{ with } \frac{x^2(t)}{a^2}+\frac{y^2(t)}{b^2}=1$$

• Does this paper on elliptical orbits in the Bloch sphere help in any way? – GaussStrife Aug 6 '20 at 16:19
• Thanks, I'll have a closer look... – draks ... Aug 6 '20 at 16:30
• ... it looks to me that the paper does not address the fact that I trace out the system B. What do you think? – draks ... Aug 7 '20 at 9:45
• Good point. Could the unitary applied in the paper to trace the ellipse on the surface simply not be extended to both systems via taking the tensor of the individual unitaries, so that it works on the individual subsystems? From Fig. 1 in that paper, the wording suggests that the ellipses they achieve in both the pure and mixed state cases are what is generated from a unitary on the overall 2-qubit system, and then once the trace is taken, the ellipse is on both of their Bloch spheres. – GaussStrife Aug 7 '20 at 11:48
• it is equivalent, as pointed out here: quantumcomputing.stackexchange.com/a/13209/5280 – draks ... Aug 7 '20 at 12:32
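For numerically checking a candidate $U(t)$, the reduction step is straightforward; a minimal sketch (the Bell-state input at the end is just an illustrative choice, not part of the question):

import numpy as np

def bloch_vector(psi):
    # normalize the two-qubit state |psi_AB>, given as a length-4 vector
    psi = psi / np.linalg.norm(psi)
    m = psi.reshape(2, 2)            # row index: system A, column index: system B
    rho_A = m @ m.conj().T           # Tr_B |psi><psi| reduces to m m^dagger
    paulis = (np.array([[0, 1], [1, 0]]),
              np.array([[0, -1j], [1j, 0]]),
              np.array([[1, 0], [0, -1]]))
    return np.array([np.trace(rho_A @ s).real for s in paulis])

# Bell state (|00> + |11>)/sqrt(2): maximally entangled, so the Bloch
# vector of subsystem A sits at the center of the ball
print(bloch_vector(np.array([1, 0, 0, 1], dtype=complex)))  # ~[0. 0. 0.]

Sampling bloch_vector over U(t)|psi_AB> for a parametrized family of unitaries then traces out the trajectory whose ellipticity the question asks about.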
https://www.orbiter-forum.com/threads/ssu-development-thread-4-0-to-5-0.33858/page-94
# SSU Development thread (4.0 to 5.0)

#### GLS

And every switch actually has three electrical switches inside, providing three signals to their MDM.

...and some have 2 and others have 4. Anyway, RM is the tricky part.

#### Urwumpe
##### Not funny anymore
Donator

...and some have 2 and others have 4. Anyway, RM is the tricky part.

Yeah - not that it matters right now, since we don't simulate switch failures. And if we would, somebody would kill us. :lol:

Donator

#### GLS

Likely because of the implementation - no array or list of group indices, but a starting group index

The problem is that it makes isolating changes in the meshes very hard...

#### Urwumpe
##### Not funny anymore
Donator

The problem is that it makes isolating changes in the meshes very hard...

Yeah. The complaints also make me wonder whether it would be smarter to push the reference data for the animations into configuration files that could be edited by the graphics department. Maybe just something like YAML:

Code:
---
Name: animation_component_name
Groups: [1,2,3,4,5]
Reference: [1,0,0]
Axis: [0,1,0]
Angle: 90
---
Name: animation_component_name
Groups: [1,2,3,4,5]
Reference: [1,0,0]
Axis: [0,1,0]
Angle: 90
...

We could make use of this library then, MIT license should be compatible: https://github.com/jbeder/yaml-cpp

We would then still use the C++ animation definitions for the logic, but the many constants could be moved into a configuration file.

Last edited:

#### DaveS
##### Space Shuttle Ultra Project co-developer
Donator
Beta Tester

Another piece of the puzzle: https://sourceforge.net/p/shuttleultra/tickets/181/ Why does the order of the groups change so much?

The reason why the order changes is that none of the msh exporters for the 3D modelling programs that have one (be it GMAX/3DSMAX, Wings3D, Blender or AC3D) is consistent or intercompatible. A msh can be exported successfully in one program but fail in another. One flaw that they do share is that none of them keeps the mesh group order the same between runs. One run results in one order and the next is different. The same goes for everything: mesh groups, texture lists, as well as the materials list. So if any of you are game to write a new msh importer/exporter script for AC3D, I'm certainly going to use it.

#### GLS

Yeah. The complaints also make me wonder whether it would be smarter to push the reference data for the animations into configuration files that could be edited by the graphics department. Maybe just something like YAML:

Code:
---
Name: animation_component_name
Groups: [1,2,3,4,5]
Reference: [1,0,0]
Axis: [0,1,0]
Angle: 90
---
Name: animation_component_name
Groups: [1,2,3,4,5]
Reference: [1,0,0]
Axis: [0,1,0]
Angle: 90
...

We could make use of this library then, MIT license should be compatible: https://github.com/jbeder/yaml-cpp

We would then still use the C++ animation definitions for the logic, but the many constants could be moved into a configuration file.

IMO, that's another thing to go wrong... changing some coordinates here

Code:
const VECTOR3 animXYZaxis = _V( 1.2, 3.4, 5.6 );

vs there is the same. Also, the animations aren't something that change every day, so... :shrug: The coding of the animations needs a coder, but AFAIK it's pretty much all done.

#### Urwumpe
##### Not funny anymore
Donator

The coding of the animations needs a coder, but AFAIK it's pretty much all done.

And if something changes, we are all talking about who is going to code it.
Also, it's better to have less specialized code than more - and constants in the source code are code as well. We need quality assurance on those as well, and it's easier to do if they are all in one place and can at least be sanity-checked. (For example, such a configuration file could be automatically tested for grave position or direction errors; literals in code can't.)

In the best case we could create an animation just by saying:

Code:
animStarTrackerDoorY = animationFactory->create("StarTrackerDoorY");

Which would mean: we have a lot less code. And a lot fewer error sources. And better testability. And more possibilities for code reuse. For example, such an animation framework could also be ported to other add-ons, like the launch pads, the crawler or whatever payload. All having the same code. All using the same configuration file syntax.

Last edited:

#### GLS

How much more will animations change in the future, and how much time would that eat on the "coding department", versus how long will it take to change EVERY SINGLE ANIMATION to that new system? I think the latter one is much more expensive...

#### Urwumpe
##### Not funny anymore
Donator

How much more will animations change in the future, and how much time would that eat on the "coding department", versus how long will it take to change EVERY SINGLE ANIMATION to that new system? I think the latter one is much more expensive...

Fine. :facepalm: I'll copy a permalink to these posts for next year, just so the "I told you so" has more momentum.

#### GLS

Another piece of the puzzle: https://sourceforge.net/p/shuttleultra/tickets/181/ Why does the order of the groups change so much?

I'm changing the hide/show logic so the groups don't have to be sequential.

---------- Post added at 05:27 PM ---------- Previous post was at 05:24 PM ----------

Fine. :facepalm: I'll copy a permalink to these posts for next year, just so the "I told you so" has more momentum.

Just because I don't see the (immediate) need for it and because I don't have the time, doesn't mean you can't do it.

#### Urwumpe
##### Not funny anymore
Donator

Just because I don't see the (immediate) need for it and because I don't have the time, doesn't mean you can't do it.

Just because I am not seeing any agreement to implement this in SSU (and get the nagging because we are again releasing late), doesn't mean I can't implement it elsewhere. It's too good an idea to waste.

#### GLS

Just because I am not seeing any agreement to implement this in SSU (and get the nagging because we are again releasing late), doesn't mean I can't implement it elsewhere. It's too good an idea to waste.

The SSU Workbench is also a good idea, but ATM nobody is coding it... :shrug: Anyway, I now closed ticket #181, as the code no longer depends on mesh group order.

---------- Post added 06-30-18 at 12:17 AM ---------- Previous post was 06-29-18 at 05:44 PM ----------

Radar Altimeter data now reaches the HUD! Maybe the "regular altitude" does have just a 20 ft precision (or whatever the number is)... with the radar taking over at 5 kft it is smooth all the way down, so we don't really see what it would be like without it. Anyway, I haven't added any RM on the data, so turning the MDMs off will just freeze the data and no warning will be given. That will have to wait.

#### GLS

Aside from more tuning on the FCS, the only major item left to do in Autoland is adding the other 3 trajectories, and this is where I-LOADs come into play... I just need to figure out how. :uhh: For selecting, e.g.
the aim point, there is one variable that keeps "1" or "2" according to the selection, and 2 I-LOADs with the position of the aim point. What I don't know is if the selection of one of the I-LOADs is made "on-the-fly" each time it is needed, or if, when the selection is made/altered, there is a memory location that gets the selected I-LOAD value, and thus the code always goes to that memory location and doesn't have to bother selecting anything.

#### Wolf
##### Donator
Donator

Is there a way to verify how intense the winds generated by Orbiter are? The AFDS on final approach is unable to keep track of the lateral path due to the winds; it chases the diamond right and left all the way down to the RWY in a series of S-turns. That's not a very stable approach. Not even Eastwood in "Space Cowboys" had done such a scary thing :rofl:

#### GLS

Is there a way to verify how intense the winds generated by Orbiter are? The AFDS on final approach is unable to keep track of the lateral path due to the winds; it chases the diamond right and left all the way down to the RWY in a series of S-turns. That's not a very stable approach. Not even Eastwood in "Space Cowboys" had done such a scary thing :rofl:

They seem to be quite high at times... in the landing scenario I'm using I think I'm getting crosswinds around the 15 knot limit. :uhh: There are some issues with lateral control in the current code in the trunk. I've changed some things in the OrbitersimBeta branch, both in the FCS and in the Orbiter aerosurface functions used, and now it isn't that dramatic at all... but the rudder doesn't seem to do much (and the yaw FCS channel isn't responding as expected to rolls :facepalm:). Anyway, I thought of making one of those "wind socks", but it wouldn't be of much use to have it on the ground while you're flying at 30 kft... :shrug:

#### Wolf
##### Donator
Donator

They seem to be quite high at times... in the landing scenario I'm using I think I'm getting crosswinds around the 15 knot limit. :uhh: There are some issues with lateral control in the current code in the trunk. I've changed some things in the OrbitersimBeta branch, both in the FCS and in the Orbiter aerosurface functions used, and now it isn't that dramatic at all... but the rudder doesn't seem to do much (and the yaw FCS channel isn't responding as expected to rolls :facepalm:). Anyway, I thought of making one of those "wind socks", but it wouldn't be of much use to have it on the ground while you're flying at 30 kft... :shrug:

Where does the 15 kts crosswind assessment come from? Is there a way to find the actual winds in sim? What is the logic behind this feature; are wind direction and intensity generated randomly? And what about the change as a function of altitude (intensity rising as altitude increases, plus change of direction with altitude due to the Coriolis effect)? What about terrain also? Does it interfere and alter winds, or is orography just ignored? Sorry, I did not mean to flood you with all these questions (off topic, BTW), but I was wondering whether maybe you know the logic behind Orbiter winds and how it works.

Last edited:

#### GLS

Where does the 15 kts crosswind assessment come from?

I did some math.

Is there a way to find the actual winds in sim?

More math? :shrug:

What is the logic behind this feature; are wind direction and intensity generated randomly? And what about the change as a function of altitude (intensity rising as altitude increases, plus change of direction with altitude due to the Coriolis effect)? What about terrain also?
Does it interfere and alter winds, or is orography just ignored?

Probably there is a pattern, as I get pretty much the same wind speed and direction every time I run a certain scenario... I don't know if Martin posted info about the winds, but you can always ask.
http://oklo.org/2007/12/24/transit-valuations/
## transit valuations

December 24th, 2007

Discoveries relating to transiting extrasolar planets often make the news. This is in keeping both with the wide public interest in extrasolar planets, and with the effectiveness of the media-relations arms of the agencies, organizations, and universities that facilitate research on planets. I therefore think that funding support for research into extrasolar planets in general, and transiting planets in particular, is likely to be maintained, even in the face of budget cuts in other areas of astronomy and physics. There's an article in Saturday's New York Times which talks about impending layoffs at Fermilab, where the yearly budget has just been cut from $342 million to $320 million. It's often not easy to evaluate how much a particular scientific result is "worth" in terms of a dollar price tag paid by the public, and Sean Carroll over at Cosmic Variance has a good post on this topic.

For the past two years, the comments sections for my oklo.org posts have presented a rather staid, low-traffic forum of discussion. That suddenly changed with Thursday's post. The discussion suddenly heated up, with some of the readers suggesting that the CoRoT press releases are hyped up in relation to the importance of their underlying scientific announcements.

How much, actually, do transit discoveries cost? Overall, of order a billion dollars has been committed to transit detection, with most of this money going to CoRoT and Kepler. If we ignore the two spacecraft and look at the planets found to date, then this sum drops to something like 25 million dollars. (Feel free to weigh in with your own estimate and your pricing logic if you think this is off base.)

The relative value of a transit depends on a number of factors. After some revisions and typos (see the comment section for this post), I'm suggesting the following valuation formula for the cost, C, of a transit:

[valuation formula, shown as an image in the original post; see the sketch after the comment thread]

The terms here are slightly subjective, but I think that the overall multiplicative effect comes pretty close to the truth. The normalization factor of 580 million out front allows the total value of transits discovered to date to sum to 25 million dollars. The exponential term gives weight to early discoveries. It's a simple fact that were HD 209458 b discovered today, nobody would party like it's 1999 -- I've accounted for this with an e-folding time of 5 years in the valuation.

Bright transits are better. Each magnitude in V means a factor of 2.5x more photons. My initial inclination was to make transit value proportional to stellar flux (and I still think this is a reasonable metric). The effect on the dimmer stars, though, was simply overwhelming. Of order 6 million dollars worth of HST time was spent to find the SWEEPS transits, and with transit value proportional to stellar flux, this assigned a value of two dollars to SWEEPS-11. That seems a little harsh. Also, noise goes as root N.

Longer period transits are much harder to detect, and hence more valuable. Pushing into the habitable zone also seems like the direction that people are interested in going, and so I've assigned value in proportion to the square root of the orbital period. (One could alternately drop the square root.)

Eccentricity is a good thing. Planets on eccentric orbits can't be stuck in synchronous rotation, and so their atmospheric dynamics, and the opportunities they present for interesting follow-up studies, make them worth more when they transit.
Less massive planets are certainly better. I've assigned value in inverse proportion to mass. Finally, small stars are better. A small star means a larger transit depth for a planet of given size, which is undeniably valuable. I've assigned value in proportion to transit depth, and I've also added a term, Np^2, that accounts for the fact that a transiting planet in a multiple-planet system is much sought-after. Np is the number of known planets in the system.

Here are the results:

| Planet | Value |
| --- | --- |
| CoRoT-Exo-1 b | $86,472 |
| CoRoT-Exo-2 b | $53,274 |
| Gliese 436 b | $4,356,408 |
| HAT-P-1 b | $969,483 |
| HAT-P-2 b | $85,507 |
| HAT-P-3 b | $285,768 |
| HAT-P-4 b | $189,636 |
| HAT-P-5 b | $146,178 |
| HAT-P-6 b | $245,873 |
| HD 149026 b | $792,760 |
| HD 17156 b | $953,665 |
| HD 189733 b | $2,665,371 |
| HD 209458 b | $11,084,661 |
| Lupus TR 3 b | $19,186 |
| OGLE TR 10 b | $66,112 |
| OGLE TR 111 b | $81,761 |
| OGLE TR 113 b | $40,153 |
| OGLE TR 132 b | $13,523 |
| OGLE TR 182 b | $16,743 |
| OGLE TR 211 b | $20,465 |
| OGLE TR 56 b | $21,680 |
| SWEEPS 04 | $2,004 |
| SWEEPS 11 | $211 |
| TrES-1 | $610,330 |
| TrES-2 | $124,021 |
| TrES-3 | $102,051 |
| TrES-4 | $225,464 |
| WASP-1 | $209,041 |
| WASP-2 | $207,305 |
| WASP-3 | $115,508 |
| WASP-4 | $114,737 |
| WASP-5 | $72,328 |
| XO-1 | $478,924 |
| XO-2 | $506,778 |
| XO-3 | $36,607 |

HD 209458 b is the big winner, as well it should be. The discovery papers for this planet are scoring hundreds of citations per year. It essentially launched the whole field. The STIS lightcurve is an absolute classic. Also highly valued are Gliese 436 b and HD 189733 b. No arguing with those calls. Only two planets seem obviously mispriced. Surely, it can't be true that HAT-P-1 b is 10 times more valuable than HAT-P-2 b? I'd gladly pay $85,507 for HAT-P-2 b, and I'd happily sell HAT-P-1 b for $969,483 and invest the proceeds in the John Deere and Apple Computer corporations.

Jocularity aside, a possible conclusion is that you should detect your transits from the ground and do your follow-up from space — at least until you get down to R < 2 Earth radii. At that point, I think a different formula applies.
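Greg mentions re-running "the code" in comment #1 below, but his program is not shown, so here is a minimal Python sketch of the valuation as finally revised in the comment thread. The year-2000 zero point of the exponential and the 3.5-day reference period are reverse-engineered from the factor values quoted in comment #7, and the eccentricity factor is simply set to 1 for a circular orbit:

```python
import math

def transit_value(t_y, v_mag, p_days, m_jup, rp_over_rstar, n_p, f_e=1.0):
    # Reconstructed dollar valuation of a transiting planet. The 2000 zero
    # point and the 3.5-day reference period are inferred from comment #7
    # below; they are not stated explicitly in the post.
    return (5.8e8
            * math.exp(-(t_y - 2000) / 5.0)   # early discoveries worth more
            * 2.5 ** (-(v_mag - 7.0) / 2.0)   # brighter parent stars worth more
            * math.sqrt(p_days / 3.5)         # longer periods worth more
            * f_e                             # eccentric orbits worth more
            * (1.0 / m_jup)                   # low-mass planets worth more
            * rp_over_rstar ** 2              # deeper transits worth more
            * n_p ** 2)                       # multi-planet systems worth more

# CoRoT-Exo-2 b, with the inputs quoted in comment #7:
print(round(transit_value(2007, 12.57, 1.7429964, 3.53, 0.147 / 0.941, 1)))
```

Run on comment #7's inputs, this returns roughly $54,000, within a couple of percent of the $53,274 in the table; the residual is rounding in the quoted radii.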
1. December 25th, 2007 at 00:07 | #1
Note: I see that the XO planets somehow didn't make the list! I'll add them, re-run the code shortly, and correct for this oversight. They'll likely be similar to the WASPs in value. Also, "V" on the left is the value, and "V" on the right is V-magnitude. I'll also fix this notation oversight shortly…

2. December 25th, 2007 at 00:58 | #2
I wouldn't write off space-based transit detection so quickly. CoRoT still has at least three years to pay for itself, and you'd expect it to report more valuable (longer period) planets later in its mission.

3. December 25th, 2007 at 01:37 | #3
I'm not writing off space-based detection. I just didn't think it was fair to include the cost of the space missions in the formula because, as you say, the majority of their results are still to come. It'll be interesting to revisit this analysis after Kepler and CoRoT are completed. (On re-reading what I wrote, I've edited the last paragraph of the post to put it more in line with the point I wanted to make.)

4. December 25th, 2007 at 07:29 | #4
Again, by the CoRoT mission statement their sensitivity is optimized for long-period planets of at most 50 days! Closer than Mercury is to the sun. It is pretty clear they are not optimized for the detection of rocky planets, yet their recent statements completely ignore this. To wit: "CoRoT is discovering exo-planets at a rate only set by the available resources to follow up the detections" (2/11/2007). This is simply not true. The rate of discovery is severely constrained by the design of the instruments. Why not make this clear? Or: "The data of the first exoplanet discovered by CoRoT (CoRoT-Exo-1b) has a precision of 3 parts in 10,000 (3 × 10^-4) for one hour of observation. This means that when all the corrections are applied to the light-curves, photometry at the level of 1 part in 20,000 will be reached (5 × 10^-5). The precision can even reach 1 part in 50,000 (2 × 10^-5) if the number of observed transits is larger than 25. The implications are that very small exoplanets similar to Earth are within the grasp of CoRoT, and that variations of the stellar light reflected by the planet may be observable (depending on its reflectance), giving an indication of its chemical composition." (May/03/2007) Is this not disingenuous? Earths indeed. There is an undercurrent of scientific competition here, with the CoRoT folks trying to underplay the significance of the Kepler mission, that is starting to get unsavory. In the beginning it was little things, now it is huge whoppers.

5. December 26th, 2007 at 03:12 | #5
You also missed HAT-P-6b. I'm having trouble replicating the dollar values your formula gives. For starters, should the magnitude term read sqrt(2.5^(-(V-7.0)))? Or does V mean something different? Maybe you could write out how the terms work for an example…

6. December 26th, 2007 at 06:24 | #6
Hi tfisher98, Thanks for pointing that out. I did indeed have a typo in the latexed eqn. I also had an earlier version of the table. The equation and the table should now be in agreement, and all the planets are now added. Greg

7. December 26th, 2007 at 15:02 | #7
I'm still having trouble getting the numbers to match your table. Taking CoRoT-Exo-2 b as an example, I use the following (from the exoplanet encyclopedia):

- scaling factor: 7.46e6
- t_y = 2007 :: discovery factor 0.2466
- P = 1.7429964 d :: period factor 0.7057
- V = 12.57 :: brightness factor 0.0779
- M = 3.53 M_J :: mass factor 0.2833
- e = 0 :: eccentricity factor 1.0
- R_planet = 1.429 R_J ≈ 0.147 R_Sun, R_star = 0.941 R_Sun :: transit depth factor 0.156
- N_pl = 1 :: multi-planet factor 1.0

Multiplied together, I get around $4470 for this planet, compared to $123,000 in the table. Am I still doing something wrong?

8. December 26th, 2007 at 16:57 | #8
Hi tfisher98, Thanks again. It was getting late last night when I checked the consistency between my program and the latexed eqn. They had both gone through many versions as I worked out the metric. Now I *think* I'm happy. There was still a typo in the posted equation (now fixed). It should have (Rp/Rstar)^2, since we're valuing according to transit depth. In my program, I was using Jovian radii and solar radii, and so my 7.46e+6 normalization to $25 million only made sense in those mixed units. Using the same units for planet and star requires a normalization factor of 5.8e+8. And I was also taking the root of the mass factor, as per an earlier version of the eqn. I think it's better to have the planets valued in inverse proportion to mass. I've updated the numbers to account for this.

9. December 26th, 2007 at 18:08 | #9
Regarding the CoRoT releases, it might be better if they didn't release any info until the actual papers are out, rather than saying they are making groundbreaking discoveries and then releasing what seem like fairly run-of-the-mill candidates in press releases.
Continuing with that 1995 feeling, I was amused to note that TIME pegged the recent SuperWASP planets (total value of all three in the current version of the table is about $300,000) in their top 10 scientific discoveries, ahead of results like Gliese 436 b, HD 17156 b (or even Gliese 581)… gosh! wow! Hot Jupiters, who knew, eh?

10. December 26th, 2007 at 18:49 | #10
Hi Andy, Thanks for the link to the TIME article. With regards to editorializing on that particular call, I think I'll just remind myself that discretion is the better part of valor :) Greg

11. December 27th, 2007 at 17:33 | #11
I agree with Andy … I just have a related philosophical thought: What is a scientific announcement? The somewhat vague definition of announcement from a thesaurus is: "A message that is stated or declared; a communication (oral or written) setting forth particulars or facts etc." In science, the facts and particulars should be clearly spelled out. What if crucial and basic parameters are missing from the 'announcement', such as coordinates, period, identification? Maybe an announcement in science is something others can independently confirm? (Or at least comment on, should the announcement concern a unique natural phenomenon that happened only once.)

12. December 31st, 2007 at 13:36 | #12
The XO team just made the XO-3b preprint available. http://arxiv.org/PS_cache/arxiv/pdf/0712/0712.4283v1.pdf This is an interesting one, sitting right on the edge of the deuterium-burning limit. It also shows the urgent need for an astrometric follow-up mission to Hipparcos, as the physical characteristics of the host stars are the main limiting factor in deducing the corresponding figures for the planets. Happy New Year, Luis

13. January 1st, 2008 at 15:29 | #13
Well, GAIA is that follow-up. We all must wait just a few years more…

14. January 1st, 2008 at 23:36 | #14
XO-3b seems to be a very interesting object which is rather undervalued by the formula in this post. I wonder how much the ranking would change if superjovians (which seem to be rather rare in short-period orbits) as well as low-mass planets got increased values.

15. January 3rd, 2008 at 11:57 | #15
Looks like Setiawan et al. hit a jackpot — a massive hot Jupiter in an 8-10 million year old system, TW Hydrae. The system is so young that the protoplanetary disk is still there. Even more intriguing, there seems to be evidence of orbital clearing as the planet migrated inwards.
https://alanrendall.wordpress.com/category/life/
## Archive for the ‘life’ Category ### My COVID-19 vaccination, part 2 August 5, 2021 Since my last post on this subject a few things have changed. In Germany 53% of people are fully vaccinated against COVID-19, which is good news. We are now in a situation where in this country any adult who wants the vaccination can get it. Of course this percentage is still a lot lower than what is desirable and the number of people being vaccinated per day has dropped to less than half what it was in mid June. I find it sad, and at the same time difficult to understand, that there are so many people who are not motivated enough to go out and get the vaccination. Yesterday my wife and I got our second vaccination. In the meantime the relevant authority (STIKO) has recommended that those vaccinated once with the product of AstraZeneca should get an mRNA vaccine the second time. The fact that we waited the rather long time suggested to get our second injection meant that the new recommendation had already come out and we were able to get the vaccine of Biontech the second time around. There have not been many studies of the combination vaccination but as far as I have seen those that there are gave very positive results. So we are happy that it turned out this way. This time the arm where I got the injection was sensitive to pressure during the night but this effect was almost gone by this morning. The only other side effect I noticed was an increased production of endorphins. In other words, I was very happy to have reached this point although I know that it takes a couple of weeks before the maximal protection is there. Every second year there is an event in Mainz devoted to the popularization of science called the Wissenschaftsmarkt. It has been taking place for the last twenty years. Normally it is in the centre of town but due to the pandemic it will be largely digital this year. This year it is on 11th and 12th September and has the title ‘Mensch und Gesundheit’ [rough translation: human beings and their health]. I will contribute a video with the title ‘Gegen COVID-19 mit Mathematik’ [against COVID-19 with mathematics]. The aim of this video is to explain to non-scientists the importance of mathematics in fighting infectious diseases. I talk about what mathematical models can contribute in this domain but also, which is just as important, about what they cannot do. If the public is to trust statements by scientists it is important to take measures against creating false expectations. I do not go into too much detail about COVID-19 itself since at the moment there is too little information available and too much public controversy. Instead I concentrate on an example from long ago where it is easier to see clearly. It also happens to be the example where the basic reproductive number was discovered. This is the work of Ronald Ross on the control of malaria. Ross was the one who demonstrated that malaria is transmitted by mosquito bites and he was rewarded for that discovery with a Nobel Prize in 1902. After that he studied ways of controlling the disease. This was for instance important in the context of the construction of the Panama Canal. There the first attempt failed because so many workers died of infectious diseases, mainly malaria and yellow fever, both transmitted by mosquitos. The question came up, whether killing a certain percentage of mosquitos could lead to a long-term elimination of malaria or whether the disease would simply come back. 
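A hedged sketch of the threshold argument at stake here, written in the modern Ross-Macdonald notation rather than in Ross's own 1911 equations (and ignoring the incubation delay that Macdonald later added): write $m$ for the number of mosquitoes per human, $a$ for the rate at which a mosquito bites humans, $b$ and $c$ for the per-bite transmission probabilities (mosquito to human and human to mosquito), $r$ for the human recovery rate and $\mu$ for the mosquito death rate. The basic reproductive number is then

$R_0 = \dfrac{m a^2 b c}{r \mu}$

and malaria can persist only if $R_0 > 1$. Since $R_0$ is proportional to $m$, pushing the mosquito density below the finite critical value $m_{\rm crit} = r\mu/(a^2 b c)$, i.e. removing a fixed fraction $1 - m_{\rm crit}/m$ of the mosquitoes, eliminates the disease rather than merely suppressing it. This is the conclusion recorded in the next paragraph.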
Ross, a man of many talents, set up a simple mathematical model and used it to show that elimination is possible; he was even able to estimate the percentage necessary. This provided him with a powerful argument which he could use against the many people who were sceptical about the idea.

### Lisa Eckhart and her novel Omama

October 2, 2020

Let me finally come to the novel itself. It struck me as a curate's egg. Parts of it are very good. There are passages where I appreciate the humour and I find the author's use of language impressive. On a more global level I do not find the text attractive. It is the story of the narrator's grandmother. (Here is a marginal note for the mathematical reader. Walter Rudin, known for his analysis textbooks, was born in Austria. In a biographical text about him I read that one of his grandmothers was referred to as 'Omama'.) The expressions are often very crude, with a large dose of excrement and other unpleasant aspects of the human body, and many elements of the story seem to me pointless. There is no single character in the novel whom I find attractive. This is in contrast to the novel of Banine which I previously wrote about, where I find the narrator attractive. That novel also contains plenty of crude expressions, but there are more than enough positive things to make up for it. I would like to emphasize that just because I find a novel unpleasant to read, it does not mean I judge it negatively. A book which I found very unpleasant was 'Alexis ou le Traité du vain combat' by Marguerite Yourcenar, but in that case my conclusion was that it could only be so unpleasant because it was so well written. I do not have the same feeling about Omama. As to the insight which I hoped I might get into Eckhart's stage performances, I have not seen it yet, but maybe I will notice a benefit the next time I experience a stage performance by her.

### The plague priest of Annaberg

March 26, 2020

I find accounts of epidemics, whether documentary or fictional, fascinating. I appreciated texts of this kind by Camus (La Peste), Defoe (Journal of the Plague Year) and Giono (Le Hussard sur le Toit). This interest is reflected in a number of posts in this blog, for instance this one on the influenza pandemic of 1918. At the moment we all have the opportunity to experience what a pandemic is like, some of us more than others. In such a situation there are two basic points of view, depending on whether you see the events as concerning other people or whether you feel that you are yourself one of the potential victims. The choice of one of these points of view probably does not depend mainly on the external circumstances, except in extreme cases, and is more dependent on individual psychology. I do feel that the present COVID-19 pandemic concerns me personally. This is because Germany, where I live, is one of the countries with the most total cases at the moment, after China, Italy, the USA and Spain. Every evening I study the new data in the Situation Reports of the WHO. The numbers to be found on the Internet are sometimes quite inconsistent. This can be explained by the time delays in reporting, the differences in the definitions of classes of infected individuals used by different people or organizations and, unfortunately in some cases, by politically motivated lies.
My strategy for extracting real information from this data is to stick to one source I believe to be competent and trustworthy (the WHO) and to concentrate on the relative differences between one day and the next, and between one country and another, in order to be able to see trends. I find interesting the extent to which diagrams coming from mathematical models have found their way into the media reporting of this subject. Prediction is a high priority for many people at the moment.

Motivated by this background I started to read a historical novel by Gertrud Busch called 'Der Pestpfarrer von Annaberg' [the plague priest of Annaberg] which I got from my wife. The main character in the book is a person who really existed, but many of the events reported there are fictional. Annaberg is a town in Germany, in the area called 'Erzgebirge', the literal English translation of whose name is 'Ore Mountains'. This mountain range lies on the border between Germany and the Czech Republic. People were attracted there by the discovery of valuable mineral deposits. In particular, starting in the late fifteenth century, there was a kind of gold rush there (Berggeschrey), with the difference that the metal which caused it was silver rather than gold. My wife was born and grew up in that area, and for this reason I have spent some time in Annaberg and other places close to there.

The narrator of the book is Wolfgang Uhle, a priest in the Erzgebirge in the sixteenth century, active in Annaberg during the outbreak of plague there. In fact in the end only a small part of the book concerns the plague itself, but I am glad I read it. The author has created a striking picture of the point of view of the narrator, at a great distance from the modern world. During Uhle's first period as a priest there was a fire in a neighbouring village which destroyed many houses. He saved the life of a young girl, in fact a small child, who was playing in a burning house. Much to the amusement of the adults, the girl said she would marry him when she was old enough. In fact she meant it very seriously, and when she was old enough it did happen that, after some difficulties, she got engaged to him. The tragedy of Wolfgang Uhle is that he had a temper which was sometimes uncontrollable. Before the marriage took place he once got into a rage due to the disgraceful behaviour of the judge in his village. Unfortunately at that moment he was holding a large hammer in his hand. A young girl had asked him if a stone she had brought him was valuable. He had some knowledge of geology and he intended to use the hammer to break open the stone and find out more about its composition. In his sudden rage he hit the judge on the head with the hammer and killed him. He went home in a state of shock without any plan, but his housekeeper persuaded him to flee over the border into Bohemia. He was sentenced to death in absentia and hid in the woods for five years. The girl to whom he was engaged repudiated him, stamped on his engagement ring and quickly married another man. He lived partly off what he could find in nature, living at first in a cave. Later he started working together with a charcoal burner. I learned something about what that industry was like when I visited those woods myself a few years ago. Eventually he revealed his identity and had to leave. In the woods he met a man who had got lost and asked him the way. The man wanted to go to Bärenstein, which is the town where my wife spent her childhood. He agreed to show him the way.
The man told him that the plague had broken out in Annaberg and that the town was desperately searching for a priest to tend to the spiritual needs of the sick. Uhle decided that he should volunteer, despite the danger. He saw this as God giving him a chance to make amends for his crime. He wrote letters to the local prince and the authorities of the town. The prince agreed to grant him a pardon in return for his service as priest for the people infected with the plague. He then went to Annaberg and tended to the sick, without regard to the danger he was putting himself in. There is not much description of the plague itself in the book. There is a key scene where he meets his former love on her deathbed and it turns out that she had continued to love him and felt guilty for having abandoned him. Uhle survives the plague, gets a new position as a priest, marries and has children. This book was different from what I expected when I started reading it. Actually the fact that it was so different from things I otherwise encounter made it worthwhile for me to read it. ### Cedric Villani’s autobiography November 1, 2019 I have just read Cédric Villani’s autobiographical book ‘Théorème Vivant’. I gave the German translation of the book to Eva as a present. I thought it might give her some more insight into what it is like to be a mathematician and give her some fortitude in putting up with a mathematician as a husband. Since I had not read the book before I decided to read it in parallel. I preferred to read the original and so got myself that. With hindsight I do not think it made so much difference that I read it in French instead of German. I think that the book is useful for giving non-experts a picture of the life of a mathematician (and not just that of a mathematician who is as famous as Villani has become). For this I believe that it is useful that the book contains some pieces of mathematical text which are incomprehensible for the lay person and some raw TeX source-code. I think that they convey information even in the absence of an understanding of the content. On the other hand, this does require a high level of tolerance on the part of the reader. Fortunately Eva was able to show this tolerance and I think she did enjoy the book and learn something more about mathematics and mathematicians. For me the experience was of course different. The central theme of the book is a proof of Villani and Clement Mouhot of the existence of Landau damping, a phenomenon in plasma physics. I have not tried to enter into the details of that proof but it is a subject which is relatively close to things I used to work on in the past and I was familiar with the concept of Landau damping a long time ago. I even invested quite a lot of time into the related phenomenon of the Jeans instability in astrophysics, unfortunately without significant results. Thus I had some relation to the mathematics. It is also the case that I know many of the people mentioned in the book personally. Sometimes when Villani mentions a person without revealing their name I know who is meant. As far as I remember the first time I met Villani was at a conference in the village of Anogia in Crete in the summer of 2001. At that time he struck me as the number one climber of peaks of technical difficulty in the study of the Boltzmann equation. I do not know if at that time he already dressed in the eccentric way he does today. I do not remember anything like that. 
For me the book was pleasant to read and entertaining, and I can recommend it to mathematicians and non-mathematicians. If I ask myself what I really learned from the book in the end, then I am not sure. One thing it has made me think of is how far I have got away from mainstream mathematics. A key element of the book is that the work described there got Villani a Fields medal, the most prestigious of mathematical prizes. These days the work of most Fields medallists is on things to which I do not have the slightest relation. Villani was the last exception to that rule. Of course this is a result of the general fact that communication between different mathematical specialities is so hard. The Fields medal is awarded at the International Congress of Mathematicians, which takes place every four years. That conference used to be very attractive for me, but now I have not been to one since the one in 2006 in Madrid, and I imagine that I will not go to another. That one was marked by the special excitement surrounding Perelman's refusal of the Fields medal which he was offered for his work on the Poincaré conjecture. Another sign of the change in my orientation is that I am no longer even a member of the American Mathematical Society, probably the most important such society in the world. I will continue to follow my dreams, whatever they may be. Villani is also following his dreams. I knew that he had gone into politics, becoming a member of parliament. I was surprised to learn that he has recently become a candidate for the next election to become mayor of Paris.

### Science as a literary pursuit

August 24, 2019

I found something in a footnote in the book of Oliver Sacks I mentioned in the previous post which attracted my attention. There is a citation from a letter of Jonathan Miller to Sacks with the idea of a love of science which is purely literary. Sacks suggests that his own love of science was of this type and that that is the reason that he had no success as a laboratory scientist. I feel that my own love for science has a strong literary component, or at least a strong component which is under the control of language. In molecular biology there are many things which have to be named, and people have demonstrated a lot of originality in inventing those names. I find the language of molecular biology very attractive in a way which has a considerable independence from the actual meaning of the words. I expect that there are other people for whom this jungle of terminology acts as a barrier to entering a certain subject. In my case it draws me in. In my basic field, mathematics, the terminology and language are also a source of pleasure for me. I find it stimulating that everyday words are often used with a quite different meaning in mathematics. This bane of many starting students is a charm of the subject for me. Personal taste plays a strong role in these things. String theory is another area where there is a considerable need for inventing names. There too a lot of originality has been invested, but in that case the result is not at all to my taste. I emphasize that when I say that, I am not talking about the content, but about the form. The idea of using the same words with different meanings has a systematic development in mathematics in the context of topos theory. I learned about this through a lecture of Ioan James which I heard many years ago with the title 'topology over a base'. What is the idea?
For topological spaces $X$ there are many definitions and many statements which can be formulated using them, true or false. Suppose now we have two topological spaces $X$ and $B$ and a suitable continuous mapping from $X$ to $B$. Given a definition for a topological space $X$ (a topological space is called (A) if it has the property (1)) we may think of a corresponding property for topological spaces over a base. A topological space $X$ over a base $B$ is called (A) if it has property (2). Suppose now that I formulate a true sentence for topological spaces and suppose that each property which is used in the sentence has an analogue for topological spaces over a base. If I now interpret the sentence as relating to topological spaces over a base under what circumstances is it still true? If we have a large supply of statements where the truth of the statement is preserved then this provides a powerful machine for proving new theorems with no extra effort. A similar example which is better known and where it is easier (at least for me) to guess good definitions is where each property is replaced by one including equivariance under the action of a certain group. Different mathematicians have different channels by which they make contact with their subject. There is an algebraic channel which means starting to calculate, to manipulate symbols, as a route to understanding. There is a geometric channel which means using schematic pictures to aid understanding. There is a combinatoric channel which means arranging the mathematical objects to be studied in a certain way. There is a linguistic channel, where the names of the objects play an important role. There is a logical channel, where formal implications are the centre of the process. There may be many more possibilities. For me the linguistic channel is very important. The intriguing name of a mathematical object can be enough to provide me with a strong motivation to understand what it means. The geometric channel is also very important. In my work schematic pictures which may be purely mental are of key importance for formulating conjectures or carrying out proofs. By contrast the other channels are less accessible to me. The algebraic channel is problematic because I tend to make many mistakes when calculating. I find it difficult enough just to transfer a formula correctly from one piece of paper to another. As a child I was good in mental arithmetic but somehow that and related abilities got lost quite early. The combinatoric channel is one where I have a psychological problem. Sometimes I see myself surrounded by a large number of mathematical objects which should be arranged in a clever way and this leads to a feeling of helplessness. Of course I use the logical channel but that is usually on a relatively concrete level and not the level of building abstract constructs. Does all this lead to any conclusion? It would make sense for me to think more about my motivations in doing (and teaching) mathematics in one way or another. This might allow me to do better mathematics on the one hand and to have more pleasure in doing so on the other hand. ### Encounter with an aardvark August 21, 2019 When I was a schoolboy we did not have many books at home. As a result I spent a lot of time reading those which were available to me. One of them was a middle-sized dictionary. It is perhaps not surprising that I attached a special significance to the first word which was defined in that dictionary. 
At that time it was usual, and I see it as reasonable, that articles did not belong to the list of words which the dictionary was responsible for defining. For this reason ‘a’ was not the first word on the list and instead it was ‘aardvark’. From the dictionary I learned that an aardvark is an animal and roughly what kind of animal it is. I also learned something about its etymology (it was an etymological dictionary) and that it originates from Dutch words meaning ‘earth’ and ‘pig’. Later in life I saw pictures of aardvarks in books and saw them in TV programmes, but without paying special attention to them. The aardvark remained more of an intriguing abstraction for me than an animal. Yesterday, in Saarbrücken zoo, I walked into a room and saw an aardvark in front of me. Suddenly the abstraction turned into a very concrete animal pacing methodically around its enclosure. I had a certain feeling of unreality. I do not know if aardvarks always walk like that or whether it was just a habit which this individual had acquired by being confined to a limited space. Each time it returned (reappearing after having disappeared into a region not visible to me) the impression of unreality was heightened. I was reminded of the films of dinosaurs which sometimes come on TV, where the computer-reconstructed movements of the animals look very unrealistic to me. Seeing the aardvark I asked myself, ‘if mankind only knew this animal from fossil remains would it ever have been possible to reconstruct the gait I now see before me?’ Another animal I encountered in the Saarbrücken zoo is a species whose existence I did not know of before. This is Pallas’s cat. This is a wild cat with a very unusual and engaging look. The name Pallas has a special meaning for me for the following reason. When I was young and a keen birdwatcher some of the birds which were most exciting for me were rare vagrants from Siberia which had been brought to Europe by unusual weather conditions. A number of these are named after Pallas. I knew almost nothing about the man Pallas. Now I have filled in some background. In particular I learned that he was a German born in Berlin who was sent on expeditions to Siberia by Catherine the Great. ### Light and lighthouses June 3, 2019 I recently had the idea that I should improve my university web pages. The most important thing was to give a new presentation of my research. At the same time I had the idea that the picture of me on the main page was not very appropriate for attracting people’s attention and I decided to replace it with a different one. Now I have a picture of me in front of the lighthouse ‘Les Éclaireurs’ in the Beagle Channel, taken by my wife. I always felt a special attachment to lighthouses. This was related to the fact that as a child I very much liked the adventure of visiting uninhabited or sparsely inhabited small islands and these islands usually had lighthouses on them. This was in particular true in the case of Auskerry, an island which I visited during several summers to ring birds, especially storm petrels. I wrote some more about this in my very first post on this blog. For me the lighthouse is a symbol of adventure and of things which are far away and not so easy to reach. In this sense it is an appropriate symbol for how I feel about research. There too the goals are far away and hard to reach. 
In this context I am reminded of a text of Marcel Proust which is quoted by Mikhail Gromov in the preface to his book ‘Metric structures for Riemannian and non-Riemannian spaces’: ‘Même ceux qui furent favorables à ma perception des vérités que je voulais ensuite graver dans le temple, me félicitèrent de les avoir découvertes au microscope, quand je m’étais au contraire servi d’un télescope pour apercevoir des choses, très petites en effet, mais parce qu’elles étaient situées à une grande distance, et qui étaient chacune un monde’ [Even those who were favourable to my perception of the truths which I wanted to engrave in the temple, congratulated me on having discovered them with a microscope, when on the contrary I used a telescope to perceive things, in fact very small, but because they were situated at a great distance, and each of which was a world in itself.] I feel absolutely in harmony with that text. Returning to lighthouses, I think they are also embedded in my unconscious. Years ago, I was fascinated by lucid dreams. A lucid dream usually includes a key moment, where lucidity begins, i.e. where the dreamer becomes conscious of being in a dream. In one example I experienced this moment was brought about by the fact of simultaneously seeing three lighthouses, those of Copinsay, Auskerry and the Brough of Birsay. Since I knew that in reality it is impossible to see all three at the same time this made it clear to me that I must be dreaming. The function of a lighthouse is to use light to convey information and to allow people (seafarers) to recognise things which are important for them. Thus a lighthouse is a natural symbol for such concepts as truth, reason, reliability, learning and science. These concepts are of course also associated with the idea of light itself, that which allows us to see things. These are the elements which characterize the phase of history called the enlightenment. Sometimes I fear that we are now entering a phase which is just the opposite of that. Perhaps it could be called the age of obscurity. It is characterized by an increasing amount of lies, deceit, ignorance and superstition. Science continues its progress but sometimes it seems to me like a thin ray among gathering darkness. A future historian might describe the arch leading from the eighteenth to the twenty-first century. I recently watched a video of the Commencement speech of Angela Merkel in Harvard. In a way many of the things she said were commonplaces, nothing new, but listening to her speech and seeing the reactions of the audience it became clear to me that it is important these days to repeat these simple truths. Those of us who have not forgotten them should propagate them. And with some luck, the age of obscurity may yet be averted. ### Banine’s ‘Jours caucasiens’ April 11, 2019 I have just read the novel ‘Jours caucasiens’ by Banine. This is an autobiographical account of the author’s childhood in Baku. I find it difficult to judge how much of what she writes there is true and how much is a product of her vivid imagination. I do not find that so important. In any case I found it very interesting to read. It is not for readers who are easily shocked. Banine is the pen name of Umm-El-Banine Assadoulaeff. She was born in Baku into a family of oil magnates and multimillionaires. In fact she herself was in principle a multimillionaire for a few days after the death of her grandfather, until her fortune was destroyed when Azerbaijan was invaded by the Soviet Union. 
In later years she lived in Paris and wrote in French. To my taste she writes very beautifully in French. I first heard of her through the diaries of Ernst Jünger. While he was an officer in the German army occupying Paris during the Second World War he got to know Banine and visited her regularly. It was not entirely unproblematic for her during the occupation when she was visited at her apartment by a German army officer in uniform. She seemed to regard this with humour. The two had a close but platonic relationship.

The society in which Banine grew up was the result of the discovery of oil. Her ancestors had been poor farmers who suddenly became very rich because oil wells were built on their land. She presents her family as being very uncivilised. They were Muslims but had already been strongly affected by Western culture. I found an article in the magazine 'Der Spiegel' from 1947 where 'Jours caucasiens' is described by the words 'gehören zu den skandalösesten Neuerscheinungen in Paris' [is one of the most scandalous new publications in Paris]. It also says that her family was very unhappy about the way they were presented in the book, and I can well understand that. It seems that she had a low opinion of her family and their friends and the culture they belonged to, although she herself did not seem to mind being part of it. She was attracted by Western culture, and Paris was the place of her dreams. As a child she had a German governess. Her mother died when she was very young, and after her father had remarried she had a French and an English teacher for those languages. She quickly fell in love with French. On the other hand, she saw having to learn English as a bit of a nuisance. Her impression was that the English had just taken the words from German and French and changed them in a strange way.

After the Russian invasion Banine's father, who had been a government minister in the short-lived Azerbaijan Republic, was imprisoned. He was released due to the efforts of a man whose motivation for doing so was the desire to marry Banine. She was very much against this. Perhaps the strongest reason was that he had red hair. There was a superstition that red-haired people, who were not very common in that region, had evil supernatural powers. Banine's grandmother told her a story about an alchemist who discovered the secret of red-haired people. According to him they should be treated in the following way. He cut off their head, boiled it in a pot and put the head on a pedestal. If this was done correctly then the heads would start to speak and make prophecies which were always true. Banine could not help associating her potential husband with this horrible myth. Unfortunately she was under a lot of social pressure and, after hesitating a bit, agreed to the marriage. Apart from being a sign of gratitude for her father's release, this was also a way of persuading her suitor to use his influence to get a visa for her father to allow him to leave Russia. In the end she accepted this arrangement instead of running away with the man she loved. At this time she was fifteen years old. Her father got the visa and left the country. Later she also got a visa and was able to leave. The last stage of her journey was on the Orient Express from Constantinople to Paris. The book ends as the train is approaching Paris and a new life is starting for her.

### The probability space as a fiction

February 12, 2019

Looking at this book also led to progress of a different kind.
I started thinking about the question of why I found probability theory so difficult. One superficial view of the subject is that it is just measure theory, except that the known objects are called by different names. Since I do understand measure theory, and I have a strong affinity for language, if that were the only problem I should have been able to overcome it. Then I noticed a more serious difficulty, which had previously only been hovering on the edge of my consciousness. In elementary probability the concept of a probability space is clear - it is a measure space with total measure one. In more sophisticated probability theory it seems to vanish almost completely from the discussion. My impression in reading texts or listening to talks on the subject is that there is a probability space around in the background but that you never get your hands on it. You begin to wonder if it exists at all, and this is the reason for the title of this post. I began to wonder if it is like the embedding into Euclidean space which any manifold in principle has, but which plays no role in large parts of differential geometry.

An internet search starting from this suspicion led me to an enlightening blog post of Terry Tao called 'Notes 0: A review of probability theory'. There he reviews 'foundational aspects of probability theory'. Fairly early in this text he compares the situation with that in differential geometry. He compares the role of the probability space to that of a coordinate system in differential geometry, probably a better variant of my thought about the embeddings. He talks about a 'probabilistic way of thinking' as an analogue of the 'geometric way of thinking'. So I believe that I have now discovered the basic thing I did not understand in this context - I have not yet understood the probabilistic way of thinking. When I consider how important it is, when doing differential geometry, to understand (or not understand) the geometric way of thinking, I see what an enormous problem this is. It is the key to understanding the questions of 'what things are' and 'where things live'. For instance, to take an example from Tao's notes, Poisson distributions are probability measures ('distribution' is the probabilistic translation of the word 'measure') on the natural numbers, the latter being thought of as a potential codomain of a random variable. Tao writes 'With this probabilistic viewpoint, we shall soon see the sample space essentially disappear from view altogether …' Why am I thinking about the Cheshire cat?

In a sequel to the blog post just mentioned, Tao continues by discussing free probability. This is a kind of non-commutative extension of ordinary probability. It is a subject I do not feel I have to learn at this moment, but I do think that it would be useful to have an idea how it reduces to ordinary probability in the commutative case. There is an analogy between this and non-commutative geometry. The latter subject is one which fascinated me sufficiently, at the time I was at IHES, to motivate me to attend a lecture course of Alain Connes at the Collège de France. The common idea is to first replace a space (in some sense) by the algebra of (suitably regular) functions on that space with pointwise operations. In practice this is usually done in the context of complex functions, so that we have a * operation defined by complex conjugation. This then means that the continuous functions on a compact topological space define a commutative $C^*$-algebra. The space can be reconstructed from the algebra.
This leads to the idea that a $C^*$-algebra can be thought of as a non-commutative topological space. I came into contact with these things as an undergraduate through my honours project, supervised by Ian Craw. Non-commutative geometry has to do with extending this to replace the topological space by a manifold. Coming back to the original subject, this procedure has an analogue for probability theory. Here we replace the continuous functions by $L^\infty$ functions, which also form an algebra under pointwise operations. In fact, as discussed in Tao's notes, it may be necessary to replace this by a restricted class of $L^\infty$ functions which are in particular in $L^1$. The reason for this is that a key structure on the algebra of functions (random variables) is the expectation; concretely, $E(f)=\int_\Omega f\,dP$, which is defined whenever $f$ is integrable. In this case the * operation is also important. The non-commutative analogue of a probability space is then a $W^*$-algebra (von Neumann algebra). Comparing with the start of this discussion, the connection here is that while the probability space fades into the background, the random variables (elements of the algebra) become central.

### Broken foot

November 17, 2018

Last Saturday morning, while getting up from my desk at home, I tripped over something. I fell on the ground, but my foot was stuck under the desk and could not accompany me properly in my fall. After that my foot was rather painful. Nevertheless the pain was not extreme and I did not take the matter very seriously. I limped around and even gave my lectures at the blackboard as normal on Tuesday and Thursday morning. On Wednesday I went to my GP to get his opinion on a non-acute matter. Actually I did not even really know him, since I had not been to the doctor for five years apart from getting a tetanus booster. The pain in my foot had been getting less from day to day and I imagined I was on the way to recovery. At the same time the foot looked a bit funny, with some strange bruises. For this reason I showed the doctor my foot. He said that these bruises could be a sign of an injury deeper in the foot and suspected it was broken. He sent me to get it X-rayed to make sure. This is the first time in my life I have broken a bone, and so I might be excused for not thinking of the possibility. When I got to the place where the X-ray was to be done, it turned out that their machine was broken and would only be working again the next day. Having already wasted so much time in attending to the foot, I decided to look for another radiologist. By this time it was lunchtime and the first radiologist I tried was closed for lunch. The next one I tried did not accept me as a patient, for reasons I did not quite understand. The next one was closed the whole afternoon for a training course. In the late afternoon the fifth attempt was finally successful. The X-ray revealed that my foot was broken. Technically it is what is called a Jones fracture, which is a certain kind of break of the fifth metatarsal bone, a bone of whose existence I had known nothing up to that point in my life. It has the reputation of healing rather badly due to the poor blood supply in that area. I was told that I should see a surgeon as soon as possible. Since it was already very late in the afternoon, that had to wait for the next day. On Thursday I went back to my GP to discuss the further strategy. His practice was closed in the morning for a training course (where did I hear that phrase recently?). It opened again at three in the afternoon.
Once I managed to see the doctor, he called up the practice of a surgeon to check that someone could see me that day. He got a positive answer, and so I went there as quickly as possible. When I arrived, the 'friendly' lady at the desk greeted me with the sceptical phrase, 'What, you expect the surgeon to see you today?'. I just replied, 'That was the idea.' Fortunately he did agree to see me. He explained the mechanism of the fracture. The bone is attached to a tendon, and when that tendon is pulled with a large enough force the bone breaks apart. Fortunately it was a clean break and the two pieces had moved very little from the place they would normally be. I left the practice with crutches and wearing a surgical boot which looks very futuristic, like the first installment towards a space suit. I took a taxi home.

Now I have learned something about what it is like to have limited mobility. It just happens that Eva is away for a week, so she was not there to help me. (On Monday she was still there and took me to the university and back by car.) It is an interesting mental training. You realize how often you needlessly go from one place to another under normal circumstances, even within the home. Now it is necessary to plan an optimal route and minimise the number of times I go up and down the stairs. Yesterday I went to the university by tram, as a way of reducing the necessary walking distance to one I could manage. I also had the interesting experience of giving myself an anti-thrombosis injection. Actually it was not as bad as I expected, and I suppose it will soon become routine. The professionals gave me the first one on Thursday to explain how it works. This morning I did some necessary shopping at the supermarket and took the tram one stop to get there. A neighbour saw me leaving home and was kind enough to transport me in the one direction in her car. Now I am looking forward to a weekend at home where I will have no bigger physical obstacle to overcome than occasionally climbing the stairs.
https://stats.stackexchange.com/questions/19103/how-to-statistically-compare-two-time-series
# How to statistically compare two time series?

I have two time series, shown in the plot below:

The plot shows the full detail of both time series, but I can easily reduce it to just the coincident observations if needed. My question is: What statistical methods can I use to assess the differences between the time series?

I know this is a fairly broad and vague question, but I can't seem to find much introductory material on this anywhere. As I see it, there are two distinct things to assess:

1. Are the values the same?
2. Are the trends the same?

What sort of statistical tests would you suggest looking at to assess these questions? For question 1 I can obviously assess the means of the different datasets and look for significant differences in distributions, but is there a way of doing this that takes into account the time-series nature of the data?

For question 2 - is there something like the Mann-Kendall test that looks for the similarity between two trends? I could do the Mann-Kendall test for both datasets and compare, but I don't know if that is a valid way to do things, or whether there is a better way?

I'm doing all of this in R, so if the tests you suggest have an R package then please let me know.

• The plot appears to obscure what may be a crucial difference between these series: they might be sampled at different frequencies. The black line (Aeronet) seems to be sampled only about 20 times and the red line (Visibility) hundreds of times or more. Another critical factor may be the regularity of sampling, or lack thereof: the times between Aeronet observations appear to vary a little. In general, it helps to erase the connecting lines and display only the points corresponding to actual data, so that the viewer can determine these things visually. – whuber Nov 29 '11 at 18:11
• Here is a Python library for unevenly-spaced time series analysis. – kjetil b halvorsen Nov 4 '18 at 13:12

As others have stated, you need to have a common frequency of measurement (i.e. a common time between observations). With that in place I would identify a common model that would reasonably describe each series separately. This might be an ARIMA model, a multiply-trended regression model with possible level shifts, or a composite model integrating both memory (ARIMA) and dummy variables. This common model could be estimated globally and separately for each of the two series, and one could then construct an F test to test the hypothesis of a common set of parameters.

Consider grangertest() in the lmtest library. It is a test to see if one time series is useful in forecasting another. A couple of references to get you started:

https://spia.uga.edu/faculty_pages/monogan/teaching/ts/
https://spia.uga.edu/faculty_pages/monogan/teaching/ts/Kgranger.pdf
http://en.wikipedia.org/wiki/Granger_causality

• His sample size would be too small with < 10 datapoints versus the number of parameters you need to fit in Granger. – Jase Dec 27 '12 at 6:07
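A minimal sketch of the workflow the answers suggest: put both series on a common observation frequency, then ask whether one helps forecast the other. The sketch is in Python rather than the asker's R (statsmodels' grangercausalitytests is the closest analogue of R's lmtest::grangertest); the file and column names are hypothetical.

```python
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical file names; each CSV has columns: time, value.
a = pd.read_csv("aeronet.csv", parse_dates=["time"]).set_index("time")["value"]
v = pd.read_csv("visibility.csv", parse_dates=["time"]).set_index("time")["value"]

# Step 1: a common, regular frequency. Resample both series to daily means
# and keep only the days on which both were observed.
df = pd.concat(
    {"aeronet": a.resample("D").mean(), "visibility": v.resample("D").mean()},
    axis=1,
).dropna()

# Step 2: does 'visibility' help forecast 'aeronet'? The first column is the
# predicted series, the second the candidate predictor. Beware: with only a
# handful of coincident points the test is meaningless (see Jase's comment).
if len(df) > 30:
    grangercausalitytests(df[["aeronet", "visibility"]], maxlag=3)
```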
https://electronics.stackexchange.com/questions/329302/how-to-convert-from-power-signal-received-by-antenna-to-voltage-signal
# How to convert from power signal received by antenna to voltage signal?

I am considering wireless transmission. Let us define a cosine signal with frequency 2.4 GHz, i.e. $$s(t) = A\cos(2\pi ft),$$ where $$f=2.4\times10^9\ \mathrm{Hz}.$$ Let us assume that when TX transmits the signal s(t) with power 50 W, RX receives the signal with power 0.5 W (that is, 49.5 W is lost along the way). In more detail, the transmitted and received signals are, respectively, $$s_{TX}(t)=10\cos(2\pi ft)$$ and $$s_{RX}(t) = \cos(2\pi ft).$$ Does the received signal represent a voltage across the load? I mean, if the load is a 1-ohm resistor, is the average power delivered to the resistor $$P_{Load}=\frac{1}{T}\int_{T}\frac{s_{RX}(t)^2}{1}\,dt=\frac{1}{2}=0.5\ \mathrm{W}\,?$$ If so, if I use 0.1 ohm, will the power be 5 W? (But the received power is just 0.5 W...) Please show me where my misunderstanding lies.

If your transmitting antenna has low losses and is driven at the right frequency it will emit power efficiently. A straightforward simple antenna like a dipole will emit power evenly in all directions in one plane, and it will emit zero power in other directions. I mention this to set the scene.

The electromagnetic power emitted is made from two fields: an electric field and a magnetic field, measured in volts per metre and amps per metre respectively. The power carried by these two fields is simply volt-amps per square metre, i.e. we talk about power in watts per square metre, and the square-metre part really does represent the watts that flow through the air per square metre. However, as you get further from the transmit antenna the power per square metre drops with distance squared. Think of a light bulb emitting light in all directions - your eyes have a certain area with which to capture that power, and if you move away from the light the power per square metre reduces.

So a transmit antenna transmits a fixed amount of power, and the further away you are, the less of that power you receive, because it is spreading out. It isn't attenuating; it is spreading out.

Now think about your receive antenna and, importantly, don't think of it as a wire; think of it as a collector of power with a certain area - this is called the aperture of the antenna and is measured in square metres. Up close to the transmit antenna the energy density is greater and more watts are packed into a given area. Further away there are fewer watts per square metre, so your receiver (with a fixed aperture) cannot receive the same power. If your receive antenna's aperture "collects" 0.5 watts at position A, it will collect one quarter of that power at position 2A. There is no trade-off with load resistance - you get the power that is collected by the receive antenna's aperture.

If of course your receive antenna is up really close to the transmit antenna, you are in what is called the "near field" and this is much less predictable - you get induction and electric coupling effects and you can, in the right situation, "load" the transmit antenna and produce relationships that are much more difficult to understand.

So, if you are just talking about EM radiation from an antenna, you are in what is called the far-field. This nominally begins about 1 wavelength from the transmit antenna, so at 2.4 GHz this is round about 10 cm. What I've largely said above is therefore about far-field power transfer, not near-field effects.
In the far-field, the power you get cannot be changed by impedances: the receive antenna can only "collect" what is delivered, and that is nearly the end of the story. However, if you don't present the correct impedance to the antenna, you won't extract the maximum available power. Any antenna has an impedance, and it is strongly frequency dependent, so you are limited even further. In short, in the far-field you get what power you are given, and you have to match impedances to make the most of that power.

• Very good explanation; you've given me very good intuition. Thank you so much. – God Danny Sep 19 '17 at 4:02

The antenna has a design output impedance, usually 50 ohms. It is NOT a voltage source, but it appears to be a voltage source in series with a resistor equal to its designed output impedance. All RF work is better thought of as power transfer rather than voltage transfer, as impedance is so easy to change.

For maximum power transfer you need a load that matches the output impedance of the antenna. (Graph the power in a load for a voltage source in series with a resistor driving a variable load, and you will find that maximum power transfer occurs when the source and load impedance are equal - it is seriously worth taking the time to do this.) For sources having a complex source impedance, maximum power transfer occurs when the load is the complex conjugate of the source impedance.

Now, one probably could design an antenna to match into 1 ohm, or 0.1 ohm, or 0.01 ohm (but things get VERY lossy because of secondary effects). In all cases the power delivered will be equal (apart from the losses due to poor antenna efficiency); you will simply see much less voltage and more current as the system impedance drops.

Consider an antenna with a design output impedance of 1 ohm (unlikely for all sorts of reasons, but go with it), and model it as, say, a 1.414 V voltage source in series with a 1 ohm resistor. Into a matched load (1 ohm) you get 0.707 V (voltage divider) and 0.707 A; multiplying these gives you your 0.5 W. If you load this up into 0.1 ohms you get ~0.13 V @ 1.3 A = 0.169 W - hardly an improvement.

Now, a free-space path loss of only 20 dB seems unlikely to me. With a 1 m separation and isotropic antennas, for example, you have ~40 dB of path loss at 2.4 GHz, and you can add 20 dB for every decade of path length, so you might want to look at your numbers.

• Very kind explanation, thank you. I actually want to grasp how to convert the power signal to a voltage signal, ignoring impedance matching, i.e. ignoring reflection effects and so on. – God Danny Sep 15 '17 at 1:53
• Just use a very high impedance input (good luck with that at 2.4 GHz!); you get zero power transferred but can measure the voltage. – Dan Mills Sep 19 '17 at 10:08
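For anyone who wants to check the answer's 1 ohm example numerically, here is a sketch of the simple Thevenin model it describes; the sweep confirms that delivered power peaks when the load equals the source resistance:

```python
def load_power(v_source, r_source, r_load):
    """Power delivered to r_load by a source v_source in series with
    internal resistance r_source (the Thevenin model from the answer)."""
    current = v_source / (r_source + r_load)
    return current ** 2 * r_load

v_s, r_s = 1.414, 1.0                      # the answer's 1-ohm antenna model
print(load_power(v_s, r_s, 1.0))           # matched load: ~0.50 W
print(load_power(v_s, r_s, 0.1))           # 0.1-ohm load: ~0.17 W (the "0.169 W")

# Sweep the load from 0.01 to 5 ohms; the maximum lands at r_load == r_source.
best = max((load_power(v_s, r_s, r / 100), r / 100) for r in range(1, 501))
print(best)                                # -> (~0.50 W, 1.0 ohm)
```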
2020-06-05 13:55:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6329625844955444, "perplexity": 679.4146200503355}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348500712.83/warc/CC-MAIN-20200605111910-20200605141910-00174.warc.gz"}
https://cseducators.stackexchange.com/questions/4185/opening-a-machine-learning-course-in-high-school
# Opening a machine learning course in high school

I'm a computer science teacher, and our department is thinking about opening a data analysis and machine learning course for upperclassmen at my high school. What are some topics that could be covered in a year-long timespan by students with a basic understanding of Python and basic statistics (correlation/regression, measures of central tendency, hypothesis testing)?

I'm thinking the total time commitment available to students is 3.5 hours a week of in-class time, and approximately 2 to 3 hours a week of coding, learning, or doing homework. Some topics that we were thinking might be relevant are image processing and natural language processing. Are there any projects that really hit the ground running in terms of showcasing a topic in machine learning?

• Welcome to CSEducators. I hope you get good answers to your question and come back for more in the future. Jan 17, 2018 at 17:50
• Can you translate "upperclassmen" for the international audience? I am from England and have no idea what it means. (Maybe as a footnote, with ages etc.) Jan 17, 2018 at 19:24
• You may also be interested in this question. – Ben I. Jan 17, 2018 at 22:13
• The definition is loose, but upperclassmen are generally 11th and 12th graders (16 - 18 years old), and sometimes include 10th graders. Jan 17, 2018 at 23:58

There are all sorts of things you could cover - machine learning is an extremely interesting and growing field, with many different approaches and tools you could explore. Some of the more popular tasks for machine learning algorithms include classification, regression and clustering, although not every application fits into this grouping (e.g. AlphaGo, the Go-playing program which beat Lee Se-dol).

I might suggest starting with Naive Bayes classifiers. Your motivating example here could be spam filtering; Naive Bayes classifiers were used in some of the first 'learning' spam filters, dating back to the 1990s. If your students are familiar with Bayes' theorem, the assumptions of NB classifiers shouldn't be a huge leap (you may find this derivation interesting).
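For reference (my condensation, not part of the original answer), the entire classifier fits in one line of math. The "naive" step is treating the words $w_1, \dots, w_n$ of a message as independent given the class, so a message is assigned the label

$$\hat{c} \;=\; \arg\max_{c \,\in\, \{\text{spam},\,\text{ham}\}} \; P(c) \prod_{i=1}^{n} P(w_i \mid c),$$

where $P(c)$ and the $P(w_i \mid c)$ are estimated from word counts in the training data.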
There are also various other interesting options, such as random forests (apparently used by Quora to find duplicate questions), support vector machines and so forth, which could potentially be explored.

Many of the most 'fashionable' techniques involve neural networks, and I would recommend spending a decent amount of time with the theory. Unlike some of the classifiers and regressors I mentioned earlier, neural networks tend to be a bit more involved - a Naive Bayes classifier essentially just needs to be given some data in an appropriate form and is "plug and play", so to speak.

The (free) online book Neural Networks and Deep Learning gives a reasonably accessible explanation of neural networks in their various forms (starting with perceptrons, leading towards sigmoid neurons, gradient descent, backpropagation, deep learning, etc.). As you can imagine, there is an immense amount of content even in just the field of neural networks. Likely you won't want to cover all of the book's topics, but you don't need to go too far to reach another interesting motivating example: classifying the MNIST dataset of handwritten digits. That's something which is relatively hard traditionally, but much simpler with the use of a neural network.

Note that I've discussed a lot without ever actually explaining how you could write any programs. Of course, any machine learning class would be a little incomplete if your students had never actually applied their skills. Since your students know Python, I can highly recommend scikit-learn. It's widely regarded as an exceptionally good ML library, and the API is very friendly - you can almost treat it as LEGO, and plug together the pieces needed to do your classification/regression/clustering, even without really knowing how each piece works entirely. You can get functional (though not exactly effective) solutions simply by connecting the appropriate pieces in a pipeline. For example, if you wanted to classify some texts, you'd just (see the sketch after this list):

- collect some data
- set up a Pipeline of a CountVectorizer and a classifier such as LinearSVC
- train it, and then test it.
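A minimal sketch of that pipeline, assuming scikit-learn is installed; the four training texts and their labels are invented purely for illustration:

```python
# Text classification with scikit-learn's pipeline style: plug a
# vectorizer (text -> word counts) into a linear SVM classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

texts = ["win a free prize now", "cheap pills online",
         "meeting moved to 3 pm", "lunch tomorrow?"]
labels = ["spam", "spam", "ham", "ham"]

classifier = Pipeline([("vectorize", CountVectorizer()),
                       ("classify", LinearSVC())])
classifier.fit(texts, labels)                        # train
print(classifier.predict(["free pills, act now"]))   # -> ['spam']
```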
There's an awful lot of stuff included, but then again, all the batteries you'll ever need are included, too. For playing with neural networks, Keras might be worthwhile.

That said, your students may derive some value from implementing the algorithms themselves, at first. The key is to be able to find truly motivating problems:

> Struggling with a project you care about will teach you far more than working through any number of set problems. Emotional commitment is a key to achieving mastery. (source)

It's not always easy if you're supervising many students, but it is worthwhile advice to consider.

• The course will be taught in high school. May I ask what math prerequisites would be needed for those topics you listed in your answer? Jan 18, 2018 at 9:54
• @scaaahu: It's mainly linear algebra that comes up in machine learning. A familiarity with vectors and working with them (such as the dot product, gradient of a vector) and some calculus (differentiation and partial derivatives) would be helpful. If you didn't want to get bogged down with the details, you could probably explain what you needed as you taught it, but certainly the vectors pop up everywhere, so an awareness would be helpful. Jan 18, 2018 at 15:10

> What are some topics (machine learning) that could be covered in a year-long timespan by students with a basic understanding of Python and basic statistics (correlation/regression, measures of central tendency, hypothesis testing)?

With regard to neural networks, a good first step - even before spec'ing out a course or doing the suggested reading - would be to watch the YouTube video series Neural Networks Demystified. When I first wanted to learn about neural networks, I spent a few days combing the Internet and books, and once I saw this series I stopped looking. I still get regular YouTube recommendations, but this is the best introductory one by far. It is easy to understand, has great visuals, uses Python as the example programming language, tackles a real-world problem, moves seamlessly back and forth between high-level concepts and low-level details as source code, etc.

The next topic is to use the free online book Neural Networks and Deep Learning, which is recommended even in the other answer here. You can build a few weeks of the course around just this, and it would be worth the effort. This book goes more into the math and explains why certain activation functions are used. While I am not actively writing neural networks at present, I do keep my finger on the pulse, and one of the more intriguing ideas on the street is that Rectified Linear Units (ReLU) are just as effective as the sigmoid function. The beauty of this is that ReLU doesn't require a floating point processor, so in theory it should speed up the processing without sacrificing ability. So, as a set of labs, do the book as written, but then try other activation functions and compare the results.

Another topic is to run the code using a CPU and then using a GPU and compare the speed. If done correctly, the GPU will be significantly faster than the CPU. As another option, compare those against using a cloud service such as AWS.

After the concepts of neural networks are understood at the Python source code level, abstract that away and use TensorFlow, which is what Google uses.

Lastly, one topic I am seeing pop up much too often with regard to neural network classification is how easily such networks are fooled, e.g. "When all the world's a toaster, according to tricked AI". After seeing this I wanted to get a sticker and put it on a shirt saying "I am not a toaster", but then remembered that there are already shirts saying "I am not a toaster", referring to Cylons.

• Very interesting point about adversarial examples; students might also find examples like this paper interesting, in which deep neural nets were fooled by one-pixel changes. It really does make it clear how neural nets don't really see things the way we do (and perhaps makes driverless cars and such sound far more dangerous, if such small changes can fool them!) Jan 19, 2018 at 14:02
• @Aurora0001 Thanks for the reference about the single-pixel difference. I know about it but could not find it. Jan 19, 2018 at 17:07

I think one of the easiest ML algorithms to understand is K-Nearest Neighbors (KNN). We do a KNN project in my data structures class in the first week of school so that students can refresh their skills from AP CS and demystify ML a little bit. What I like about KNN is that it's conceptually easy for high school students to understand (I showcased it at Open House to the parents), it works for both classification and regression, and it has clear advantages and disadvantages. It's one of my favorite projects we do all year. I'd be happy to chat offline if you want some specifics.
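A sketch of the kind of small KNN project described above, using scikit-learn's built-in iris dataset (my choice of dataset, purely for illustration):

```python
# K-Nearest Neighbors: classify a flower by letting its k closest
# training examples (by Euclidean distance) vote on the label.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))  # fraction of test flowers labelled correctly
```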
2022-09-27 15:00:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32217085361480713, "perplexity": 747.1238117478943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00325.warc.gz"}
http://kleinfelter.com/
# Java Hello World With Visual Studio Code and Eclipse

## Hello World in Java with Visual Studio Code

- Download and install a JDK. (Look for "Java SE Development Kit".) If you don't get an error when you type "javac -version" at a Command Prompt (Terminal window on Mac), you already have a JDK.
- Launch VSCode.
- Tell VSCode to create a new file via File > New.
- Paste the following text:

```java
public class test1 {
    public static void main(String[] args) {
        System.out.println("Hello, World");
        System.out.println("Goodbye, World");
    }
}
```

- File > Save As > test1.java
- VSCode will prompt you to install the Java extension pack. Do so. It will take a few minutes.
- When it finishes installing, it will show a "Reload" indicator. Click it.
- It might give you a "Classpath is incomplete" warning.
    - This happens every time you work with a stand-alone Java file (i.e. a Java file that is not part of a project).
    - For now, just dismiss the warning.
- View > Integrated Terminal
- cd to the directory where you saved test1.java.
- javac test1.java
    - 'javac' is the compiler. 'java' is the command used to run a compiled program.
- Do a directory listing. Observe that javac compiled test1.java to test1.class.
- java test1
    - It should display "Hello, World".

Note that you will almost never create a Java program this way. This is a special process for a single-source-file Java app. Typically, your app will be comprised of many files, and you'll have to create a 'project' to tie them all together. Most Java projects are godawful complex things which require a 'build tool' to compile and assemble them into something you can run. There are multiple build tools which Java developers use, because developers often say, "This is awful. I could build something better." We're going to use Maven. Maven is good for our purposes because VSCode works with it and so does Eclipse, in case you later decide you have to suffer Eclipse.

Maven revolves around a 'POM file' (Project Object Model), which is written in XML. We're going to abandon this stand-alone source file and create a hello-world Maven project; we'll also set you up to use the Java debugger.

- Close VSCode.
- For Mac:
    - Run "brew install maven". It will install to /usr/local/bin/mvn.
    - Run mvn --version.
    - Add this to .bash_profile, substituting the value for Maven home as reported by mvn (above):
        - export M2_HOME=/usr/local/Cellar/maven/3.5.2/libexec
        - export M2=$M2_HOME/bin
        - export PATH=$PATH:$M2_HOME/bin
- For Windows:
    - Download the Maven binary archive and unpack it. Store it in a directory all by itself. Maybe name that directory "MAVEN".
    - Set an M2_HOME environment variable to point to the MAVEN directory.
    - Set an M2 (not M2_HOME - just 'M2') environment variable to point to the bin subdirectory of the MAVEN directory.
- In your Terminal window (Command Prompt), navigate to an empty directory, where you wish to create your Java project.
- mvn archetype:generate -DgroupId=com.mycompany.app -DartifactId=my-app -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
    - This may take a long time.
- cd to my-app/src/main/java/com/mycompany/app. (If you're on Windows, your slashes lean the other way.)
- Run a directory listing and you'll see App.java. Examine the contents. You'll see that Maven created a hello-world app for you. This godawful directory structure - that's how Java apps are built.
- Launch VSCode and open the my-app directory from the File menu.
- In the left panel of VSCode, navigate to my-app/src/main/java/com/mycompany/app, and open App.java.
- Find the line with System.out.println.
- Left-click with your mouse, just to the left of the line number. You should see a red circle. You've just set a breakpoint on this line.
- Choose Debug > Start Debugging from the menu. If it asks what kind, select Java.
- It will highlight the line with your breakpoint. It executed all the code before that point, and stopped.
- Choose Debug > Step Over. It will execute your println, and you'll see the output displayed in the bottom panel.
- Choose Debug > Continue. It will 'execute' all those closing braces and terminate your program normally.

## Hello World in Java with Eclipse

### Install and Launch LiClipse

We're going to use LiClipse, which is Eclipse bundled with some other tools.

- Download and install a JDK. (Look for "Java SE Development Kit".) If you don't get an error when you type "javac -version" at a Command Prompt (Terminal window on Mac), you already have a JDK.
- Launch LiClipse (henceforth to be referred to simply as Eclipse).
- Eclipse wants to use a "workspace". That's the root folder for all projects you develop using Eclipse. Pick one that suits you. On my Mac, I use /Users/kevin/Sync/code. Unless you want it to nag you each time you launch it, select "Use this as the default and do not ask again".
- Help > Install New Software > select "Eclipse x.x Release…"
- In the "Type Filter Text" field, type "java" and press Enter.
- Select "Eclipse Java Development Tools" and press Next; then work your way through the wizard until it is installed.
- Watch the progress dialog in the lower-right corner of Eclipse. Don't proceed until it hits 100%. It may be sloooow.
- When it wants to restart Eclipse, let it.

### Create and Run a Single-file Java App

You really, really have to put your single-file Java app in a project. If you don't, you'll find yourself unable to save your file. Eclipse understands projects. It doesn't really deal with stand-alone files.

- File > New > Project > Java > Java Project. Press Next.
- Enter a project name of test1.
- Choose "Use Project folder as root for sources and class files".
- Press Finish.
- It will natter about opening a Java Perspective. Let it, and tell it to always do so. It is just going to open the panels which are relevant to Java.
- File > New > Class
    - Source folder = test1
    - Package = (empty)
    - Name = test1
    - Superclass = (empty)
    - Clear ALL checkboxes.
    - Press Finish.
- Make the test1.java file contain:

```java
public class test1 {
    public static void main(String[] args) {
        System.out.println("Hello, World");
        System.out.println("Goodbye, World");
    }
}
```

- File > Save
- Locate the toolbar icon for "Run" (one of the icons with a green circle and a triangle). Press it.
- It will natter at you about Run Configurations.
    - Double-click "Java Application" and choose test1 (under Java Application).
    - Press the Run button.
- Notice your output at the bottom of the page.

Note that you will almost never create a Java program this way. This is a special process for a single-source-file Java app. Typically, your app will be comprised of many files, and you'll have to create a 'project' to tie them all together. Most Java projects are godawful complex things which require a 'build tool' to compile and assemble them into something you can run. There are multiple build tools which Java developers use, because developers often say, "This is awful. I could build something better." We're going to use the built-in Eclipse build tool. If you ever need to do so later, Eclipse can also work with Maven projects, and it can export Eclipse projects into Maven 'POM files'.
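For reference, here is roughly what a minimal POM file looks like; the quickstart archetype used in the VSCode walkthrough generates one much like it (plus a JUnit test dependency), and the group/artifact IDs below come from that walkthrough:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
</project>
```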
Maven revolves around a POM file (Project Object Model), which is written in XML. We're going to abandon this stand-alone source file and create a new hello-world project; we'll also use the Java debugger.

- File > New > Java Project. Note that because you are already in the Java Perspective, Eclipse has hoisted 'Java Project' into the top-level menu. You can get the same effect via File > New > Project > Java > Java Project.
- Project Name = my-app2
- Use default location.
- JRE: use whatever it defaults to.
- Project Layout: Create separate folders…
- Press Finish.
- Stop and look.
    - You should now see TWO Java projects in the Package Explorer panel at the left side of Eclipse.
    - Sometimes, if one Java project is not causing you enough pain, Eclipse figures you might want to work on multiple projects at the same time. When you're getting started, one at a time is enough.
    - Select my-app2. Then right-click it and choose "Close Unrelated Projects". Tell Eclipse that you really meant to, when it asks.
    - You will still see a single folder for test1, even though it is closed. That's how Eclipse works.
- Select the src folder under my-app2.
- Right-click src and create a new package:
    - Name it com.mycompany.app and press Finish.
- Right-click com.mycompany.app and create a new class:
    - Source folder: my-app2/src
    - Package: com.mycompany.app
    - Name: App
    - Modifiers: Public
    - Superclass: java.lang.Object
    - Select "public static void main…" but leave the other checkboxes empty.
    - Press Finish.
- Take a look at the generated App.java file. It is almost a complete hello-world. Replace the TODO comment with:

```java
System.out.println("Hello, World");
System.out.println("Goodbye, World");
```

- File > Save
- Locate the toolbar icon for "Run" (one of the icons with a green circle and a triangle). Press it.
- It will natter at you about Run Configurations.
    - Double-click "Java Application" and choose App (under Java Application).
    - Press the Run button.
- Notice your output at the bottom of the page.
- Double-click the gutter, to the left of the line number, by the first line with System.out.println. It will add a small dot to indicate that you have set a breakpoint on this line.
- Locate the toolbar icon of an insect with the flyover help "Debug App". Press it.
- Eclipse will ask permission to switch to the Debug Perspective. Approve it.
- It will highlight the line with your breakpoint. It executed all the code before that point, and stopped.
- Choose Run > Step Over. It will execute your println, and you'll see the output displayed in the bottom panel.
- Choose Run > Resume. It will 'execute' all those closing braces and terminate your program normally.
- Note that if you ever find Eclipse in the 'wrong' Perspective, you can change Perspective via Window > Perspective > Open Perspective.

# Programming Languages for the Occasional Programmer

Work in progress (a 'living document'):

I'm an occasional programmer. I might code intensively for a few months, then do other things, then return to programming. I don't get to spend all day, every day, coding on any project. I need to be able to productively pick up a project after having set it aside. I need to be able to program in an environment without being intimately familiar with every nook and cranny of the environment.

I need to be able to write code for Windows, Mac, and web (client and server side), and for Linux server administration. I have an interest in data science, so the ability to work there would be helpful.
Of necessity, a discussion of languages will also include a discussion of platforms related to those languages.

Fewer languages is better than many languages. I've dabbled in many. The fewer I have to try to stay current in, the more time I can spend coding, rather than re-learning.

### Popularity

Current popular multi-platform general-purpose languages (in many rankings):

- JavaScript - way, way most popular at GitHub
- Java
- Python
- Ruby
- Much lower frequency: Go, C, TypeScript, Scala, Clojure, R, Perl, Julia, Haskell

### Clojure and ClojureScript

Conceptually, I like Lisp. In reality, I run into a problem with untyped languages. After I pass a string into a routine expecting an integer, and it bubbles down through a half-dozen layers before it blows up, and I spend a long time tracking down where the defect was injected (as opposed to where it was detected), I start writing code to check the types of actual parameters. Checking type at runtime is stupid; it is more effective to check at compile/load time. An occasional programmer really needs typing of formal parameters.

Clojure/ClojureScript solves the problem of one language for client and server. I find coding in Clojure to be pleasant and rewarding, but the moment I start trying to test/run the code, it gets real frustrating, real fast, because trivial changes result in run… damn… run… damn… run… cycles. Slow startup is a problem for some categories of scripts, e.g. if you wanted to write a utility like 'cat' or 'more'. And Clojure doc is written for the person who already knows the answer. (I'm not the only person to notice this.)

I think the Clojure ecosystem might be really appealing if I coded in it all day, every day. It appears to serve the SME really well. I did about 6 months of part-time programming in Clojure, writing an app to rebalance my complex investment portfolio.

### Ruby

Ruby is a pleasing language for small projects. I really like Matz's notion that using the language should please the programmer. In common platforms such as Rails, there is so much magic happening behind the scenes that it takes me a couple of weeks to refresh myself on what is really happening, every time I pick it up. The absence of typed formal parameters leads me back to run-time type checking. An occasional programmer really needs typing of formal parameters.

Ruby really has to be one of multiple languages. You can't really do your browser UI in Ruby. The gyrations (tools) necessary in order to effectively develop with multiple versions of Ruby on a single system are off-putting.

### Java

Someone captured the essence of Java nicely: Java is the COBOL of the 21st century. Verbose. No fun to work in. Yes, it does have typed formal parameters. Yes, you can write Java that compiles to JavaScript to run in the browser. Yes, you can write cross-platform GUI apps in Java. Slow startup is a problem for some categories of scripts, e.g. if you wanted to write a utility like 'cat' or 'more'. I don't want to write in Java, and, frankly, I don't like running Java apps. Even if you use something to give you a native look and feel, they still feel like Java apps.

### Python

I want to like Python. I don't quite. I can live with the indentation thing. Maybe I just need to do more coding with it. Python 3.6 does support optional typing of formal parameters. I like the concept of optional typing, if all of the published libraries come with typing. I want to be able to dispense with typing while hacking, but I want to see typing whenever I use someone else's code, and I want to throw a switch and require typing when I begin to production-ize my code.
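A quick illustration of what that optional typing looks like in practice; the annotations cost nothing at run time, and a checker such as mypy enforces them only when you ask it to:

```python
# Type hints are optional: this runs identically with or without them.
def repeat(word: str, times: int) -> str:
    return word * times

print(repeat("ha", 3))    # -> hahaha
# repeat("ha", "3")       # a checker like mypy flags this mismatch
#                         # before the code ever runs
```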
Pyjamas, Brython, Skulpt, PyPy, Transcrypt: these might let you write your browser-side code in Python. I need to check which of those actually works on iPhone and Android browsers.

It runs on almost all platforms. It is used in data science (behind R in popularity, of course). "Batteries included" is an effective philosophy. "There's only one right way to do it" chaps my butt. As philosophers, I like Matz and I'm not sure I like Guido. (Guido may be a fine person to know; I'm just referring to his programming philosophy.)

It doesn't really fit well with functional programming, although you can bend it to your will. I wish it supported a flag to say "make data immutable." Python has virtual-environment issues similar to Ruby's, when you need to develop with multiple different versions.

### JavaScript, TypeScript

JavaScript has the appeal of one language for client and server sides. Yes, JavaScript has good stuff, but you have to know which pieces to avoid using. TypeScript supports optional types, solving my need for typed formal parameters (except that many JavaScript libraries don't come with types).

Electron… I want to use Electron, without the footprint of Electron. File size I can live with; huge RAM use, not good; high CPU use exhausts my battery. Shucks, with the Chromium footprint, I'm reminded that I have to switch from Chrome to Safari on my Macbook whenever I go on battery power.

If you get node.js involved, slow startup is a problem for some categories of scripts, e.g. if you wanted to write a utility like 'cat' or 'more'. If you're going to use JavaScript, use a linter. I love this quote:

> The thing is, there is a mass psychosis about JS and it's like everybody is pretending that it isn't awful.

### Pascal

Yeah, other than Delphi, nobody really programs in Pascal anymore. I really loved that "train track" syntax diagram. Pascal was the last language where I really felt, "I know every iota of this environment."

### VB5

Yeah, it is a dead language, and Windows-only. It was really a spiffy tool for exploration. You could change code on the fly, half-way through a function, and continue execution. You could explore the methods of an object at run-time. Lots of support for the occasional programmer. The language itself was not real interesting. The built-in bugs were frustrating. The environment support for tinkering has never been surpassed.

# Windows Batch File Format Date as YYYY.MM.DD

I often need a variable containing a yyyy.mm.dd date in a Windows batch file. Instead of figuring it out anew each time, use this code:

```bat
REM --- Get current date into yyyy.mm.dd format.
REM
REM NOTE: You really must use the "if not defined" in order to skip the trailing blank line.

REM Clear tmpDate
set tmpDate=

REM This gets the date in a locale-independent format,
REM e.g. 20171130101642.469000-300
for /f "skip=1" %%x in ('wmic os get localdatetime') do if not defined tmpDate set tmpDate=%%x

REM Extract from tmpDate:
REM   position 0 for 4 chars
REM   position 4 for 2 chars
REM   position 6 for 2 chars
set YYYYMMDD=%tmpDate:~0,4%.%tmpDate:~4,2%.%tmpDate:~6,2%
echo YYYYMMDD is %YYYYMMDD%
```

# Cygwin ssh Daemon How-to, 2017

Enabling the Cygwin ssh daemon has changed over the years. Here's my 2017 edition of a how-to.

- Run the Cygwin setup and select openssh.
- Open Cygwin64 Terminal (run as ADMIN) and run ssh-host-config. Tell it:
    - strict modes = no
    - new local account = no
    - yes, install the service
    - CYGWIN = (empty; it is no longer needed)
    - User ID to use = your-personal-Windows-user-ID. (If you use both a non-admin and an admin user ID, enter the admin one.) If you let the config create a service account, that ID will NOT be able to access network shares. I really want to be able to access network folders when using Unison or rsync!
- Run:

```
cygrunsrv.exe --stop sshd
/usr/sbin/sshd.exe -D
```

- Solve any error messages. If the Windows firewall asks, permit the access.
- Test your connection. Run ssh from a remote machine and ensure you can connect. Then press ^C to stop the foreground daemon.
- cygrunsrv.exe --start sshd

Note: To totally start setup over, first you must:

```
cygrunsrv --stop sshd
cygrunsrv --remove sshd
```

If you have an /etc/passwd, delete any sshd or cyg_server user ID:

```
net user sshd /delete
net user cyg_server /delete
rmdir /var/empty
```

Note: Start/stop the daemon with:

```
cygrunsrv --start sshd
cygrunsrv --stop sshd
```

# Markdown Toolset

Summary: GitHub Flavored Markdown, kramdown, Marked 2, Typora

I've been using a hodgepodge of Markdown tools. I'd like to try and make sense of what I'm using and why.

I'm using Jekyll and GitHub Pages (GHP) for my blogs. Jekyll and GHP use the kramdown parser. Per GitHub: "GitHub Pages only supports kramdown as a Markdown processor" and "we've enabled kramdown's GitHub-flavored Markdown support by default." See:

- https://help.github.com/articles/configuring-jekyll/
- https://kramdown.gettalong.org/parser/gfm.html

So the question of which Markdown flavor to use comes down to either:

- GFM
- kramdown

For now, unless I encounter a compelling reason to use native kramdown, I'm using GFM because it is the default on GHP and on GitHub issues.

Other than my blogs, I am the chief consumer of my Markdown documents. I do more reading than authoring. Consequently, I'm less interested in side-by-side (source + rendered) tools than many Markdown fans. I mostly want WYSIWYG editing - a simple WordPad-like (or TextEdit-like) experience for rich text documents. For documents with lots of embedded images, or documents where I need precise page layout, I don't use Markdown.

### Mandatory Markdown Features

- Core Markdown
- Tables (with some kind of table editor - not just source editing of pipes and spaces)
- MathJax
- YAML front matter. Either ignore it, or give me some way to view/edit it.

### Summary of Current Tools

- GitHub Flavored Markdown: Rationale explained above.
- GitHub Pages - blog publishing: I moved to GHP after my dynamic web site was compromised, and I decided I wanted simple, secure blog hosting.
- Jekyll - local blog preview: Since I publish with GHP (which uses Jekyll), I preview locally with Jekyll.
- Markdown parser: kramdown. It is what Jekyll uses.
- Marked 2 - document viewer (Mac only): A first-rate Markdown renderer. It supports the use of custom Markdown parsers. Natively, it supports Discount for GFM. Someday, I'll get around to configuring it to use kramdown plus options, to make it totally GHP-compatible.
- Typora - WYSIWYG editor: I really want a single-pane GUI editor. I prefer one that works on Windows and Mac. The primary candidates are Texts.io and Typora. Texts is "based on Pandoc". Typora clearly states that it supports GFM. I prefer the non-ambiguous flavor. Texts rewrites perfectly good hand-edited Markdown; Typora less so.
    I prefer the "full GUI" approach of Texts, but the you-type-Markdown-you-get-WYSIWYG approach of Typora isn't so bad, and it still leaves me viewing a single-pane rendered document. Texts and Typora are both available for Windows and Mac; Typora also supports Linux. I use all three OSes. (Mac is my primary OS.)

- none - side-by-side editor: I think there may be cases where I really want side-by-side editing (although I haven't encountered them yet).
    - Haroopad and MacDown look feasible on the Mac, except see the Haroopad YAML problem below. Haroopad also supports Windows.
    - I've seen some non-Haroopad sites say that Haroopad supports GFM and MathJax. MacDown can be configured to support GFM per https://macdown.uranusjr.com/faq/#gfm , and MacDown supports MathJax.
    - Another candidate is Atom (the markdown-preview-kramdown plugin just doesn't work right!). I already use Atom for Clojure development.
    - This is an online option: https://kramdown.herokuapp.com/
    - IF I decide I need this, MacDown looks best.

### YAML

- In YAML front matter, if you need a comment, use space-#. If you begin a line with a #, most tools treat that as a title, even in front matter.
- Marked 2 allows you to strip front matter before rendering. [good]
- Typora puts front matter in a gray box and uses typewriter font. [best]
- MacDown has a "Detect Jekyll front-matter" option, and puts it in a table. [OK]
- Haroopad treats front matter as Markdown. [unacceptable]
- kramdown.herokuapp.com treats front matter as Markdown. [unacceptable]

### Hacks

- kramdown.herokuapp.com seems to require a blank line between a title and a bullet list. This is reportedly common. My other tools render this as desired. I need to remember to add the blank line after the title.
- Consider using lint - https://github.com/markdownlint/markdownlint/blob/master/docs/RULES.md#md013—line-length
- Write portable Markdown - http://brettterpstra.com/2015/08/24/write-better-markdown/
- Use an empty line:
    - between paragraphs
    - before/after code/verbatim blocks
- Use a space after list markers (*, -, +, 1.).
- Use a space after the header marker (# or ## or ###).
- Don't put blank lines in your lists. It is ambiguous as to whether that starts a new list.
- You can use blank lines above paragraphs within lists. Just follow the last paragraph immediately with another list item (or the end of the list). e.g.

```
* list item 1

    paragraph in list item 1
* list item 2
```

- Empty lines in block quotes are handled differently between flavors as well. The most common way to make a multi-paragraph block quote is to use a greater-than symbol on each blank line:

```
> paragraph one
>
> paragraph two
>> nested paragraph
>>
>> nested paragraph two
```

- Use ATX headers (i.e. hashmarks).
- Four-space indentation is recognized across the board; when creating nested lists, always use four spaces instead of two.
- For code blocks, use backtick fences (```) rather than tildes (~~~), because they are more universal.

# Moving Jekyll to Docker

At this point, the only Ruby thing I'm using on my Macbook is Jekyll, so instead of installing an up-to-date ruby, chruby, bundler, and ruby-build (which was how I'd previously run Jekyll), I moved Jekyll into containers. This is the story of how I migrated my Jekyll sites (kleinfelter.com and k4kpk.com) into Docker containers.

Notes:

- My local copy of my primary web site lives in the directory 'kleinfelter.github.io'.
- I launch my Jekyll via a Launch Agent, which runs the shell script runme-local.sh in the site directory.

Steps:

- Install Docker for Mac.
- Since I'm auto-starting Jekyll via a Launch Agent, I need to stop the existing one:

```
launchctl unload /Users/kevin/Library/LaunchAgents/com.kleinfelter.jekyll.kleinfelter.plist
```

- Since I'm going to use the jekyll/jekyll image, I can stop using bundler. I'm not going to uninstall bundler, since that's part of my Macbook's ruby installs, but since I'm going to be using the gems provided in the image, I don't need to be coordinating my own gems.

```
cd kleinfelter.github.io
git rm Gemfile
git rm Gemfile.lock
git rm -r vendor
rm -r vendor
```

- Create a docker-compose.yml in kleinfelter.github.io containing:

```yaml
jekyll-kleinfelter:
  build: .
  command: jekyll serve --watch --incremental
  ports:
    - 4000:4000
  volumes:
    - /Users/kevin/Sync/Sites/kleinfelter.github.io:/srv/jekyll
```

- Strictly speaking, docker-compose is about running multiple containers. However, you can use it to run a single container, and it makes the command line for that container simpler, by allowing you to put some of your options in the docker-compose file. This config says:
    - The service is named 'jekyll-kleinfelter'.
    - Build per the 'Dockerfile' in the current directory.
    - Launch the process in the container with the given 'command' line.
    - Connect host port 4000 to container port 4000.
    - Mount the given host volume onto /srv/jekyll.
- I'm using the jekyll-admin plugin. I was using the gem for it, with bundle. Now that Jekyll runs in a container, I can install the gem into the container's site-ruby. Create the file 'Dockerfile' in kleinfelter.github.io:

```dockerfile
FROM jekyll/jekyll:pages
RUN gem install jekyll-admin
```

- That says:
    - Base your image on the official Jekyll image, the version designed to work with GitHub Pages.
    - When building your image, run the given 'gem' command to add the gem.
- Test it with:

```
docker-compose build --no-cache
docker-compose up --force-recreate
```

- Edit runme-local.sh to contain:

```bash
#!/bin/bash
cd /Users/kevin/Sync/Sites/kleinfelter.github.io
/usr/local/bin/docker-compose up
```

- Once you're done testing:

```
launchctl load /Users/kevin/Library/LaunchAgents/com.kleinfelter.jekyll.kleinfelter.plist
```

- Occasionally, when Jekyll is running, I need to force a site rebuild.
    - Discover the proper container via: docker ps
    - Connect to the existing container: docker exec -it container_name_here /bin/bash
    - Run: jekyll build

# Jekyll Not Updating Front Page With Incremental

When I used '--incremental --watch', it looked like Jekyll was not updating. It turns out that it was rebuilding the page itself, but not the content which was loaded onto the landing page (a.k.a. the front page). The solution is to add this to index.html in the main site directory:

```
regenerate: true
```

Works much better now.

# Jekyll Daemon on Mac OS X

I wanted to have Jekyll always running on my Macbook, so that I could preview my posts before pushing them to my GitHub Pages site. Here's how I set it up with a Launch Agent. (The challenging parts were learning how to use Launch Agents to run shell scripts, and learning how to run Jekyll from a non-login shell.)

I'm using chruby to manage multiple versions of Ruby, bundle to manage multiple gem levels, and Homebrew as a package manager. YMMV if you use other tools. I've previously confirmed that I can manually launch Jekyll for both of my sites. I have two sites: k4kpk.com and kleinfelter.com, at k4kpk.github.io and kleinfelter.github.io respectively.
Create ~/Library/LaunchAgents/com.kleinfelter.jekyll.k4kpk.plist:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.kleinfelter.jekyll.k4kpk</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>/Users/kevin/Sync/Sites/k4kpk.github.io/runme-local.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/tmp/k4kpk.jekyll.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/k4kpk.jekyll.err</string>
</dict>
</plist>
```

Similarly, create ~/Library/LaunchAgents/com.kleinfelter.jekyll.kleinfelter.plist, with 'kleinfelter' substituted for 'k4kpk'.

/Users/kevin/Sync/Sites/k4kpk.github.io/runme-local.sh should contain:

```bash
#!/bin/bash
source /Users/kevin/.bash_profile
cd /Users/kevin/Sync/Sites/k4kpk.github.io
chruby 2.4.2
bundle exec jekyll serve --incremental
```

- Test it with: launchctl start com.kleinfelter.jekyll.k4kpk.plist
- Check output in /tmp/k4kpk.jekyll.err and /tmp/kleinfelter.jekyll.err

I had to add this to my Gemfile to prevent an "invalid byte sequence in US-ASCII" error:

```ruby
Encoding.default_external = Encoding::UTF_8
```

I also added this to my _config.yml:

```yaml
encoding: utf-8
```

# Plotting Two Kinds of Points on Google Maps

(This story is an enhanced edition of this story from my ham radio site.)

Suppose you have a set of names and addresses, and you'd like to display them on a Google Map. Further suppose that you have two kinds of items to display - perhaps you're displaying members of your family, and you'd like red markers for the girls and blue markers for the boys. (It is a contrived example - what can I say?)

The first thing you need is to create a text file with names and addresses.

- Column 1 should be Name.
- Column 2 should be Address. Enter street address, a comma, city name, a comma, state name.
- Column 3 should be Marker_Type. Girls get "large_red". Boys get "large_blue".

Separate your columns with a TAB character. I'll refer to this as a CSV file, but tab works better as a delimiter because you'll have commas in your data. You really want the Address column to contain street address, city, and state, separated by commas. Here's an example:

```
NAME        Address                     Marker_Type
Fred Smith  123 Maple St,Anytown,OH     large_blue
Mary Smith  246 Oak St,Anytown,OH       large_red
Jane Smith  100 Park Place,New York,NY  large_red
```

Save the file on your computer. (You could also store it on the web, but my example assumes it is stored locally.)

In Google Drive:

- Choose New, then More, then Google Fusion Tables.
- Tell it you want to load a file 'From this computer'.
- Select the data file you created (per the above instructions).
- Import the data. Importing goes pretty fast.
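If you'd rather generate the tab-separated file programmatically, here's one way to do it in Python; the rows are the made-up examples from the table above:

```python
# Write the marker data as a tab-separated file; using tab as the
# delimiter avoids colliding with the commas inside the Address column.
import csv

rows = [("Fred Smith", "123 Maple St,Anytown,OH", "large_blue"),
        ("Mary Smith", "246 Oak St,Anytown,OH", "large_red"),
        ("Jane Smith", "100 Park Place,New York,NY", "large_red")]

with open("markers.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["NAME", "Address", "Marker_Type"])
    writer.writerows(rows)
```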
2017-12-16 18:26:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1724810153245926, "perplexity": 7402.589414710532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588420.68/warc/CC-MAIN-20171216181940-20171216203940-00560.warc.gz"}
https://nrich.maths.org/2756/index
##### Age 11 to 18

Published May 2005, March 2007, February 2011.

Infinity is not a number, and trying to treat it as one tends to be a pretty bad idea. At best you're likely to come away with a headache, at worst with the firm belief that 1 = 0. You may have met various "paradoxes" that play on this fact (and our instinct to ignore it) before. An infinitely long line, for instance, is surely infinitely many centimetres long. It's also, equally surely, infinitely many miles long. But each centimetre is a great deal shorter than each mile, so does this mean an infinitely long line is two different lengths at once?

The answer, of course, is to assert confidently that the question is meaningless (and, if you're in the mood to be unkind, just shows how little the asker knows about proper mathematics) and then go back about your business untroubled by such silly quibbles. Or, as I will hopefully convince you over the course of this article, the answer is a simple "No."

### Counting the natural numbers

We shouldn't treat infinity as a number. We can't count up to it, and if we have an infinitely large set we will never be able to count all of the objects in it. A set is a collection of objects. They can be any sort of objects, although generally mathematicians use them to talk about mathematical objects, such as numbers. An infinitely large set is, of course, one containing infinitely many objects. It seems strange, therefore, to talk about certain infinitely large sets being "countably infinite", but that is indeed what I'm about to do.

Consider the natural numbers (the positive whole numbers: 1, 2, 3, ...). There are infinitely many natural numbers (if there were only a finite number of them, there would have to be a largest natural number - what would happen if we added 1 to that number?), so the set containing all the natural numbers must be infinitely large. We'll never be able to count them all. However, we can list them in such a way that if we counted forever, we'd be sure not to miss any out. The natural numbers are what we use for counting anyway, so we can think of them as coming ready-listed. There is an obvious starting point (1) and a sensible order (1, 2, 3, ...).

To illustrate what I mean by a sensible order, imagine what would happen if we tried to count the natural numbers randomly. However long we counted, we'd never be sure we'd counted them all - we could check for individual numbers in our random list, but we'd never know if they were all there, or when (if ever) we were going to reach a particular one. Now imagine what would happen if we tried to count the natural numbers by counting the odd numbers first and then the even ones. We'd count forever, and never start counting the even numbers.

However, if we count them in the order given, we'll know once we've reached 100 that we've counted everything between 1 and 100 once and only once. We might never reach the end (in a finite time, at least), but we'll know we've not missed anything on the way, and that it's just a matter of time before we reach any given number. That is, the only thing stopping us from counting them all is that we'll run out of breath before we run out of numbers. The proposed method of counting them all is sound, just impractical. This is what we mean by saying the natural numbers are countably infinite.
In much the same way, any infinite set of numbers that can be put in a sensible, systematic order with a clear beginning, such that we're sure to get everything if we count forever, is thought of as countable.

### Counting other sets

When we talk about comparing the sizes of two sets that contain finite numbers of objects (a set, $C$, containing some cats and a set, $M$, containing some baby mice, say), one way to do so is to match each object in one set with exactly one object in the other set and see if we have any left over. For the sets $C$ and $M$, we can instruct each cat to catch exactly one baby mouse - no sharing allowed, no hogging more than one mouse, and no letting one escape if the cat doesn't already have another one. If each cat does catch exactly one baby mouse - no more, no fewer - we know there are (or, at least, were) the same number of cats as mice. That is, the sets $C$ and $M$ were the same size. If any cats go hungry, there were more cats than mice ($C$ was larger than $M$), and if any mice go free, there were more baby mice than cats ($M$ was larger than $C$). This is called putting the objects into one-to-one correspondence, and the same can be done when comparing the sizes of infinite sets.

Another way of thinking about countably infinite sets is that they are those sets whose objects can be put in a one-to-one correspondence with the set containing the natural numbers only. That is, if for each element in the set of the natural numbers there is exactly one - no more, no fewer - element in the set we're trying to count, then the set is the same size as the set containing the natural numbers: countably infinite. Putting the elements of an infinite set in a sensible, systematic order with a clear beginning, such that it's just a matter of time before we reach any particular element, is in fact the same thing as putting those elements in one-to-one correspondence with the natural numbers. If the natural numbers are the cats from the above illustration and the elements to be counted are the baby mice, then the cats already have an order to line up in. Lining the mice up next to them so none escape and none are eaten together (putting them in a sensible order) is the same as allotting the cats their dinner (putting the elements of the two sets into one-to-one correspondence).

Are the integers (all the whole numbers: ..., -2, -1, 0, 1, 2, 3, ...), therefore, countable? At first, you might think not. First we need to find a sensible beginning, and that may not be obvious. With the natural numbers, we just started at the smallest and worked up. But the integers don't have a smallest number (just as before, when showing there was no largest natural number, think to yourself: if there were a smallest integer, what would happen when we subtracted 1 from it?), so where can we begin?

Say we start at 1 and work up, as before with the natural numbers. Then we count 1, 2, 3, ..., putting them into one-to-one correspondence as follows:

| Natural numbers | 1 | 2 | 3 | ... |
|---|---|---|---|---|
| Integers | 1 | 2 | 3 | ... |

Here, the problem is that however long we count for, we'll never start on the negative numbers. We'll never even get to zero. It is possible to do. Before reading on, try to think how. It may help to remember the problem we would have had counting the natural numbers if we had tried to count all the odd ones first.

### How to count the integers

Consider the following table:

| Natural numbers | 1 | 2 | 3 | 4 | 5 | 6 | 7 | ... |
|---|---|---|---|---|---|---|---|---|
| Integers | 0 | -1 | 1 | -2 | 2 | -3 | 3 | ... |

Will each integer be listed once and only once?
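One way to make the table's pairing explicit (my closed form, not the article's, but it reproduces the rows above): send each natural number $n$ to

$$f(n) = \begin{cases} \dfrac{n-1}{2} & \text{if } n \text{ is odd}, \\[6pt] -\dfrac{n}{2} & \text{if } n \text{ is even}, \end{cases}$$

so $f(1)=0$, $f(2)=-1$, $f(3)=1$, $f(4)=-2$, and so on; every integer is hit exactly once.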
It may help (or it may not - don't dwell on this if it just confuses you) to visualise the integers as distinct points on an infinitely long line. Think of 0 as the centre of the line, and imagine a circle, centred on 0, which increases in diameter as we count up, covering the numbers we've counted. After we've counted to the seventh integer on the list, say, the circle has a diameter of 6, and covers every number from -3 to 3.

In other words, yes. Once we reach any integer on the list ($p$, say), any integer with absolute value (that is, $+n$ if $n$ is positive and $-n$ if $n$ is negative) smaller than the absolute value of $p$ will have been counted exactly once, and any number with a modulus larger than it will be yet to be counted. Whether $-p$ has been counted obviously depends on whether $p$ is greater or less than 0.

So, while at first glance it might seem that there are more integers than natural numbers, this is not the case. This is exactly what happens in the so-called paradox I mentioned at the start of this article. You might at first think that for each natural number, $n$, there are two integers, $\pm n$, and so there are twice as many integers as naturals, which is not the case. In the same way, you might think an infinite number of miles takes you further than an infinite number of centimetres. In fact, the centimetres that go to make up the infinitely many miles can be put into one-to-one correspondence with the centimetres that go to make up the infinitely many centimetres, so both take you the same distance.

### Counting the rationals

How about the rational numbers (all the numbers that can be written as one integer divided by another: ..., -2/3, 0/1, 1/1, 1/2, ...)? In the case of the natural numbers and the integers, it was easy to check we'd counted every number in a given range. To check we'd counted all the natural numbers less than $n$, we merely needed to have counted up to $n$. To check we'd counted all the integers between $n$ and $m$, we needed to check we'd counted as far as the absolute value of $n$ or $m$, whichever is the larger. However, we can't count all the rational numbers in a given range, however small the range is. This is because between any two rationals, there is another rational. (If you're unconvinced, imagine $x$ and $y$ are two rationals between which there isn't another rational. How can $(x + y)/2$ not be rational?)

Between -2 and 4 there are seven integers (including -2 and 4) and four natural numbers. There are, however, infinitely many rationals. How, then, could we possibly count them? We can never get everything in a given range, and there are infinitely many (non-overlapping) ranges we could be given. In fact, if there are infinitely many naturals (which there are) and infinitely many integers (which, again, there are), then surely there must be infinity-squared many rationals, because to get all the rationals you take each integer in turn and divide it by each natural number in turn, as in the following table:

| $\frac{1}{1}$ | $\frac{1}{2}$ | $\frac{1}{3}$ | $\frac{1}{4}$ | $\frac{1}{5}$ | $\frac{1}{6}$ | ... |
|---|---|---|---|---|---|---|
| $\frac{2}{1}$ | $\frac{2}{2}$ | $\frac{2}{3}$ | $\frac{2}{4}$ | $\frac{2}{5}$ | $\frac{2}{6}$ | ... |
| $\frac{3}{1}$ | $\frac{3}{2}$ | $\frac{3}{3}$ | $\frac{3}{4}$ | $\frac{3}{5}$ | $\frac{3}{6}$ | ... |
| $\frac{4}{1}$ | $\frac{4}{2}$ | $\frac{4}{3}$ | $\frac{4}{4}$ | $\frac{4}{5}$ | $\frac{4}{6}$ | ... |
| $\frac{5}{1}$ | $\frac{5}{2}$ | $\frac{5}{3}$ | $\frac{5}{4}$ | $\frac{5}{5}$ | $\frac{5}{6}$ | ... |
| $\frac{6}{1}$ | $\frac{6}{2}$ | $\frac{6}{3}$ | $\frac{6}{4}$ | $\frac{6}{5}$ | $\frac{6}{6}$ | ... |
| ... | ... | ... | ... | ... | ... | ... |

How can infinity-squared possibly be countable, and therefore the same as infinity? The first thing to do is to take a deep breath and remember that infinity is not a number. In fact, the rationals are countable.
To prove this, consider the above table. Is the table a list of all the positive rationals? Consider the general term for a positive rational, $p/q$, where both $p$ and $q$ are positive. If it's in the table, then we know we've got all the positive rationals there. (If some positive rational were missing from the table, there would be a positive rational that can't be represented as $p/q$, which is not the case.) Is it there? Yes: it's in the $p^{th}$ row, in the $q^{th}$ column. So, we have a complete list.

However, trying to count along each row or column in turn gives problems - each row and column is infinitely long, after all, and so we'll never reach the end of the first to start on the second. Is there, then, a way of counting them without missing any out? Yes: just follow the red line in the image below.

[Figure: the table of positive rationals, with a red line snaking along successive diagonals: $\frac{1}{1}$, $\frac{2}{1}$, $\frac{1}{2}$, $\frac{1}{3}$, $\frac{2}{2}$, $\frac{3}{1}$, $\frac{4}{1}$, $\frac{3}{2}$, $\frac{2}{3}$, $\frac{1}{4}$, ...]

So we know we've counted every rational at least once. Have we counted them at most once? Obviously, we haven't: 1/1 = 2/2 = 3/3 = ... means 1 is the first, fifth and thirteenth number in the list. Why does this matter? It's a good habit to get into, certainly, but more than that, it's useful here to convince ourselves that these sets of numbers are all the same size. If we don't check we're not counting the rationals too many times, what's to stop there being more natural numbers than there are rational ones? Intuitively, this seems to be a fairly silly fear (after all, the natural numbers are just the first column of the entire table of rationals), but if you're not doubting your intuition by this stage, I haven't explained what we're doing well enough.

We could just check each number we reach against all the previous numbers, making sure it's not equal to any of them. That would work, but it ignores the fact that the order in which we're counting these numbers means the first time we meet each one, it's in its simplest possible form. Consider our trusty $p/q$, in row $p$, column $q$. Assume this is the number in its simplest form - $p$ and $q$ have no common factors greater than 1. The next time we meet the same number, it'll be $2p/2q$, which is in row $2p$, column $2q$, and therefore on a diagonal further from our start point. $(n+1)p/(n+1)q$ is always further along than $np/nq$ (if you need convincing, find a formula to tell you which integer is the leftmost point on the diagonal passing through $y/x$). So every time we reach a rational not in its simplest form, ignore it.

Hence the positive rationals can be put into one-to-one correspondence with the naturals, and so are countably infinite. Multiply by $-1$ and we get a similar list of the negative rationals. Then, just as we did with the integers, start at 0 and interleave the two lists:

| Natural numbers | 1 | 2 | 3 | 4 | 5 | 6 | 7 | ... |
|---|---|---|---|---|---|---|---|---|
| Rationals | 0 | $\frac{1}{1}$ | $-\frac{1}{1}$ | $\frac{1}{2}$ | $-\frac{1}{2}$ | $\frac{2}{1}$ | $-\frac{2}{1}$ | ... |

We know we will have every positive and every negative rational, and zero, once and only once. Thus the natural numbers are countable, the integers are countable and the rationals are countable. It seems as if everything is countable, and therefore all the infinite sets of numbers you care to mention - even ones our intuition tells us contain more objects than there are natural numbers - are the same size. This is not the case.

### Counting the reals: Cantor's Diagonal Proof

Are the real numbers countable? (The real numbers are all the irrationals - those numbers that cannot be written as one integer divided by another: $\pi$, $\sqrt{2}$, $e$, ... - and the rationals together: 1, 4/5, $\pi$, ...)
Every other set of numbers we've met so far has been countable. Each new set of numbers that feels as if it should be larger than the set of the natural numbers has been put into one-to-one correspondence with the natural numbers - all we needed was to work out how to list the numbers sensibly. The real numbers are made up of the rationals and the irrationals. The rationals are countable, so if the irrationals are countable then the reals must be countable too - just interleave our two systematic lists and we'll get another systematic list. For the same reason, if the reals aren't countable, we'll know that the problem comes with the irrationals.

You've probably met rational numbers in at least two guises - one is that they can be written as one integer divided by another, and the other is that they can be written as decimal expansions that eventually become repeating patterns.

1/3 = 0.3333333333... - the threes repeat forever.

3/8 = 0.375000000... - the zeros repeat forever.

22/7 = 3.142857 142857 142857... - 142857 repeats forever.

In fact, all real numbers can be represented as infinitely long decimal expansions. The rationals are the ones that eventually repeat and the irrationals are the ones that don't.

$\pi$ = 3.14159 26535 89793 23846... - at no point does a finitely long string of digits start repeating forever.

Now, suppose the real numbers were countable. Then we could write a systematic list of all the real numbers. What is more, we could do it as a list of decimal expansions. (In the following list, $a_{(n,m)}$ is the $m^{th}$ digit along in the $n^{th}$ number down the list.)

$a_{(0,0)}.a_{(0,1)}a_{(0,2)}a_{(0,3)}a_{(0,4)}a_{(0,5)}\ldots$

$a_{(1,0)}.a_{(1,1)}a_{(1,2)}a_{(1,3)}a_{(1,4)}a_{(1,5)}\ldots$

$a_{(2,0)}.a_{(2,1)}a_{(2,2)}a_{(2,3)}a_{(2,4)}a_{(2,5)}\ldots$

$a_{(3,0)}.a_{(3,1)}a_{(3,2)}a_{(3,3)}a_{(3,4)}a_{(3,5)}\ldots$

$a_{(4,0)}.a_{(4,1)}a_{(4,2)}a_{(4,3)}a_{(4,4)}a_{(4,5)}\ldots$

We might run into trouble with repeating a number without realising it (0.9 recurring = 1, for instance), but we can just check every number in our list off against those that came before. It might be that the system we use to list the numbers lends itself to a neater method, as we found with the rationals, but even if it doesn't, this method will work.

But can we be certain we haven't missed any out? One way of answering this question is to assume our list is, in fact, infinitely long. Then is it possible for it to contain every possible real number? The answer is that it isn't. Not all infinities are, we finally see, the same size. The problem, however - one raised and answered by Georg Cantor - is how to show this. How can we write down or describe a number that we know won't be on the list? Before reading on, take a moment to think about this yourself. It may help to think back to the title of this section: Cantor's diagonal proof.

Consider the number $A$, where $A = a_{(0,0)}.a_{(1,1)}a_{(2,2)}\ldots a_{(n,n)}\ldots$ (If the list above had started 1.111..., 2.222..., 4.379..., then $A$ would start 1.27..., as 1 is the noughth digit of the noughth number, 2 is the first digit of the first number, and 7 is the second digit of the second number.)

Now, letting $m$ take the value of all the natural numbers (and zero) in turn, do the following: If $a_{(m,m)} \neq 5$, let $b_{(m,m)} = 5$. If $a_{(m,m)} = 5$, let $b_{(m,m)} = 7$.

Then we have a new number, $B = b_{(0,0)}.b_{(1,1)}b_{(2,2)}\ldots b_{(n,n)}\ldots$ Is $B$ on our list?
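As an aside before answering, here is a minimal computational sketch of the diagonal construction (an added illustration, not from the original article; it assumes each listed real is supplied as a function returning its digits, with digit 0 being the integer part, as in the article's indexing):

```python
def diagonal_number(list_of_reals, n_digits=10):
    """Return the first n_digits digits of Cantor's diagonal number B.

    list_of_reals[m] is a function digit(k) giving the k-th digit of the
    m-th listed real. By construction B differs from the m-th listed
    number in its m-th digit, so B cannot appear anywhere on the list.
    """
    digits = []
    for m in range(n_digits):
        a_mm = list_of_reals[m](m)            # digit a_(m,m) on the diagonal
        digits.append(7 if a_mm == 5 else 5)  # the 5/7 rule: avoids 0s and 9s
    return digits

# A toy "list" whose m-th number has every digit equal to m % 10:
reals = [lambda k, m=m: m % 10 for m in range(10)]
print(diagonal_number(reals))   # [5, 5, 5, 5, 5, 7, 5, 5, 5, 5]
```

The 5/7 rule matters below: it keeps $B$ free of 0s and 9s, dodging the 0.999... = 1 ambiguity.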
Well, it's not the first number, because the first digits don't agree (if the first digit of the first number isn't 5, then the first digit of $B$ is; and if the first digit of the first number is 5, then the first digit of $B$ isn't). And it's not the second number, because the second digits don't agree. And it's not the $n^{th}$ number, because the $n^{th}$ digits don't agree... So $B$ can't be the first number, and it can't be the second, and it can't be the $n^{th}$, for any value of $n$, so it isn't on our list.

We could still run into the problem mentioned earlier, that two decimal expansions with different digits can represent the same number. However, this only occurs when the digits involved are 0s and 9s, and the digits making up $B$ were specially chosen so that this would not happen. Therefore, we know that $B$ really is a number missing from the list. This means the list isn't complete, and can never be complete. However cunning our system, even if the list is infinitely long it won't contain every real number. This means that we can't count all the real numbers - there are uncountably infinitely many.

At the time of writing, Katherine Korner was a second year undergraduate studying Mathematics at Balliol College, Oxford.
2019-10-19 18:21:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7064139246940613, "perplexity": 251.05833889837052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986697439.41/warc/CC-MAIN-20191019164943-20191019192443-00555.warc.gz"}
http://mathhelpforum.com/calculus/215264-nth-term-geometric-series-print.html
# Nth term in a geometric series

• March 21st 2013, 11:37 PM - bikerboy2442

Nth term in a geometric series

Attachment 27637. So for this I have calculated what I believe to be the correct few terms of Q and P: $Q_0=75$, $Q_1=84$, $Q_2=94.08$, always increasing by 12% of what the previous term was. I can't understand why my formula didn't work. I also tried using $Q_n = Q_0 \cdot R^{n-1}$, but that provided an answer for $n=0$ that was smaller than 75.

*Edit: I understand that I can't use $n-1$ because we're starting from $n=0$ and not $n=1$, so shouldn't it just be $Q_n = Q_0 \cdot R^n$?

• March 22nd 2013, 12:14 AM

$Q_0 = 75$, $Q_n = (0.12)Q_{n-1} + 75$ - in other words, what remains in the system from the previous day plus the new dose of 75 mg per day.
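A quick numerical sketch of the recurrence from the reply (an added illustration, not part of the thread): iterating $Q_n = 0.12\,Q_{n-1} + 75$ shows why the sequence is not geometric - it converges to the fixed point $75/(1-0.12) \approx 85.23$ rather than growing by 12% each step.

```python
def dose_sequence(q0=75.0, retention=0.12, dose=75.0, n=6):
    """Iterate Q_n = retention * Q_{n-1} + dose, starting from Q_0 = q0."""
    values = [q0]
    for _ in range(n):
        values.append(retention * values[-1] + dose)
    return values

print(dose_sequence())
# [75.0, 84.0, 85.08, 85.2096, 85.225152, ...]
# -> approaches the fixed point 75 / (1 - 0.12) = 85.2272...
```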
2014-11-28 01:30:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7855923175811768, "perplexity": 981.0641896763904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009515.14/warc/CC-MAIN-20141125155649-00123-ip-10-235-23-156.ec2.internal.warc.gz"}
https://forum.gitlab.com/t/gitlab-runner-docker-machine-concurency-and-request-concurency/16534
# Gitlab-runner (docker-machine) concurrency and request-concurrency?

Can anyone tell me how to set the following parameters on gitlab-runner (docker-machine):

--limit
--request-concurrency
--machine-idle-nodes
concurrent (cannot be set from the CLI)?

Is --request-concurrency the same as the concurrent parameter, but just for the docker-machine executor?

I would like to have 2 idle nodes, 3 parallel jobs per node and a maximum limit of 10 nodes. I am getting this WARN message:

WARNING: Specified limit (10) larger then current concurrent limit (1). Concurrent limit will not be enlarged.

Thanks

EDIT: Should concurrency be the number of cores + 1? And is concurrency the same as request-concurrency?
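For reference, a sketch of where these settings live in `config.toml` (a reading of the GitLab Runner autoscaling documentation, not a confirmed answer from the thread; treat the values as placeholders). `concurrent` is a global, top-level option, which is why it has no per-runner CLI flag, while `limit` and `request_concurrency` belong to a `[[runners]]` entry and the idle-machine settings to its `[runners.machine]` section:

```toml
# Global cap on jobs running concurrently across all runners in this file.
# The warning above appears because limit (10) exceeds concurrent (default 1).
concurrent = 10

[[runners]]
  name = "autoscale-runner"
  executor = "docker+machine"
  limit = 10                # per-runner cap on concurrent jobs (and machines)
  request_concurrency = 3   # concurrent job *requests* made to the GitLab server

  [runners.machine]
    IdleCount = 2           # machines kept idle, waiting for new jobs
    IdleTime = 1800         # seconds before an idle machine is torn down
```

On that reading, `request_concurrency` is not a per-executor version of `concurrent`: it only controls how many pending job requests the runner keeps open against the server, and the warning goes away once `concurrent` is at least as large as `limit`.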
2020-04-09 05:05:37
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8764636516571045, "perplexity": 9912.581155288903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371829677.89/warc/CC-MAIN-20200409024535-20200409055035-00506.warc.gz"}
https://www.nature.com/articles/s41586-018-0861-0?error=cookies_not_supported&code=90eb4b0e-6475-4dcb-a8a8-f09007bee42b
Letter

# Real-time vibrations of a carbon nanotube

## Abstract

The field of miniature mechanical oscillators is rapidly evolving, with emerging applications including signal processing, biological detection [1] and fundamental tests of quantum mechanics [2]. As the dimensions of a mechanical oscillator shrink to the molecular scale, such as in a carbon nanotube resonator [3,4,5,6,7], their vibrations become increasingly coupled and strongly interacting [8,9], until even weak thermal fluctuations could make the oscillator nonlinear [10,11,12,13]. The mechanics at this scale possesses rich dynamics, unexplored because an efficient way of detecting the motion in real time is lacking. Here we directly measure the thermal vibrations of a carbon nanotube in real time, using a high-finesse micrometre-scale silicon nitride optical cavity as a sensitive photonic microscope. With the high displacement sensitivity of 700 fm Hz$^{-1/2}$ and the fine time resolution of this technique, we were able to discover a realm of dynamics undetected by previous time-averaged measurements and a room-temperature coherence that is nearly three orders of magnitude longer than previously reported. We find that the discrepancy in the coherence stems from long-time non-equilibrium dynamics, analogous to the Fermi–Pasta–Ulam–Tsingou recurrence seen in nonlinear systems [14]. Our data unveil the emergence of a weakly chaotic mechanical breather [15], in which vibrational energy is recurrently shared among several resonance modes—dynamics that we are able to reproduce using a simple numerical model. These experiments open up the study of nonlinear mechanical systems in the Brownian limit (that is, when a system is driven solely by thermal fluctuations) and present an integrated, sensitive, high-bandwidth nanophotonic interface for carbon nanotube resonators.

## Data availability

The data that support the findings of this study are available from the corresponding author on reasonable request.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Arlett, J. L., Myers, E. B. & Roukes, M. L. Comparative advantages of mechanical biosensors. Nat. Nanotechnol. 6, 203–215 (2011).
2. Poot, M. & van der Zant, H. S. J. Mechanical systems in the quantum regime. Phys. Rep. 511, 273–335 (2012).
3. Sazonova, V. et al. A tunable carbon nanotube electromechanical oscillator. Nature 431, 284–287 (2004).
4. Jensen, K., Kim, K. & Zettl, A. An atomic-resolution nanomechanical mass sensor. Nat. Nanotechnol. 3, 533–537 (2008).
5. Lassagne, B., Tarakanov, Y., Kinaret, J., Garcia-Sanchez, D. & Bachtold, A. Coupling mechanics to charge transport in carbon nanotube mechanical resonators. Science 325, 1107–1110 (2009).
6. Poncharal, P., Wang, Z. L., Ugarte, D. & de Heer, W. A. Electrostatic deflections and electromechanical resonances of carbon nanotubes. Science 283, 1513–1516 (1999).
7. Chaste, J. et al. A nanomechanical mass sensor with yoctogram resolution. Nat. Nanotechnol. 7, 301–304 (2012).
8. Karabalin, R. B., Cross, M. C. & Roukes, M. L. Nonlinear dynamics and chaos in two coupled nanomechanical resonators. Phys. Rev. B 79, 165309 (2009).
9. Castellanos-Gomez, A., Meerwaldt, H. B., Venstra, W. J., van der Zant, H. S. J. & Steele, G. A. Strong and tunable mode coupling in carbon nanotube resonators. Phys. Rev. B 86, 041402 (2012).
10. Barnard, A. W., Sazonova, V., van der Zande, A. M. & McEuen, P. L. Fluctuation broadening in carbon nanotube resonators. Proc. Natl Acad. Sci. USA 109, 19093–19096 (2012).
11. Greaney, P. A., Lani, G., Cicero, G. & Grossman, J. C. Anomalous dissipation in single-walled carbon nanotube resonators. Nano Lett. 9, 3699–3703 (2009).
12. Eichler, A., Moser, J., Dykman, M. I. & Bachtold, A. Symmetry breaking in a mechanical resonator made from a carbon nanotube. Nat. Commun. 4, 2843 (2013).
13. Maillet, O. et al. Nonlinear frequency transduction of nanomechanical Brownian motion. Phys. Rev. B 96, 165434 (2017).
14. Fermi, E., Pasta, J. & Ulam, S. Studies of nonlinear problems. Report LA-1940 (Los Alamos Scientific Laboratory, 1955).
15. Flach, S., Ivanchenko, M. V. & Kanakov, O. I. q-breathers and the Fermi–Pasta–Ulam problem. Phys. Rev. Lett. 95, 064102 (2005).
16. Tsioutsios, I., Tavernarakis, A., Osmond, J., Verlot, P. & Bachtold, A. Real-time measurement of nanotube resonator fluctuations in an electron microscope. Nano Lett. 17, 1748–1755 (2017).
17. Meerwaldt, H. B., Johnston, S. R., van der Zant, H. S. J. & Steele, G. A. Submicrosecond-timescale readout of carbon nanotube mechanical motion. Appl. Phys. Lett. 103, 053121 (2013).
18. Moser, J., Eichler, A., Güttinger, J., Dykman, M. I. & Bachtold, A. Nanotube mechanical resonators with quality factors of up to 5 million. Nat. Nanotechnol. 9, 1007–1011 (2014).
19. Stapfner, S. et al. Cavity-enhanced optical detection of carbon nanotube Brownian motion. Appl. Phys. Lett. 102, 151910 (2013).
20. Schneider, B. H., Etaki, S., van der Zant, H. S. J. & Steele, G. A. Coupling carbon nanotube mechanics to a superconducting circuit. Sci. Rep. 2, 599 (2012).
21. Miura, R. et al. Ultralow mode-volume photonic crystal nanobeam cavities for high-efficiency coupling to individual carbon nanotube emitters. Nat. Commun. 5, 5580 (2014).
22. Almaqwashi, A. A., Kevek, J. W., Burton, R. M., DeBorde, T. & Minot, E. D. Variable-force microscopy for advanced characterization of horizontally aligned carbon nanotubes. Nanotechnology 22, 275717 (2011).
23. Barclay, P. E., Srinivasan, K., Painter, O., Lev, B. & Mabuchi, H. Integration of fiber-coupled high-Q SiNx microdisks with atom chips. Appl. Phys. Lett. 89, 131108 (2006).
24. Anetsberger, G. et al. Near-field cavity optomechanics with nanomechanical oscillators. Nat. Phys. 5, 909–914 (2009).
25. Zhu, J. et al. On-chip single nanoparticle detection and sizing by mode splitting in an ultrahigh-Q microresonator. Nat. Photon. 4, 46–49 (2010).
26. Joh, D. Y. et al. Single-walled carbon nanotubes as excitonic optical wires. Nat. Nanotechnol. 6, 51–56 (2011).
27. Liu, K. et al. High-throughput optical imaging and spectroscopy of individual carbon nanotubes in devices. Nat. Nanotechnol. 8, 917–922 (2013).
28. Liu, K. et al. Systematic determination of absolute absorption cross-section of individual carbon nanotubes. Proc. Natl Acad. Sci. USA 111, 7564–7569 (2014).
29. Munteanu, L. & Donescu, S. in Introduction to Soliton Theory: Applications to Mechanics (ed. Van der Werwe, A.) 149–172 (Springer, Dordrecht, 2005).
30. Onorato, M., Vozella, L., Proment, D. & Lvov, Y. V. Route to thermalization in the α-Fermi–Pasta–Ulam system. Proc. Natl Acad. Sci. USA 112, 4208–4213 (2015).
31. Güttinger, J. et al. Energy-dependent path of dissipation in nanomechanical resonators. Nat. Nanotechnol. 12, 631–636 (2017).
32. Cleland, A. N. & Roukes, M. L. Noise processes in nanomechanical resonators. J. Appl. Phys. 92, 2758–2769 (2002).
33. Goupillaud, P., Grossmann, A. & Morlet, J. Seismic signal analysis and discrimination. III. Cycle-octave and related transforms in seismic signal analysis. Geoexploration 23, 85–102 (1984).

## Acknowledgements

We thank A. Bachtold for discussions. This work was supported in part by the National Science Foundation under grant number 0928552. It was also supported by the Cornell Center for Materials Research with funding from the NSF MRSEC programme (DMR-1719875) and funding from IGERT (DGE-0654193). This work was performed in part at the Cornell NanoScale Facility, a member of the National Nanotechnology Coordinated Infrastructure (NNCI), which is supported by the National Science Foundation (Grant ECCS-1542081). G.S.W. acknowledges FAPESP (grant 2012/17765-7) and CNPq for financial support in Brazil.

## Author information

Author notes:
- Arthur W. Barnard. Present address: Physics Department, Stanford University, Stanford, CA, USA
- Mian Zhang. Present address: John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Michal Lipson. Present address: Electrical Engineering, Columbia University, New York, NY, USA
- These authors contributed equally: Arthur W. Barnard, Mian Zhang

Affiliations:
1. School of Applied and Engineering Physics, Cornell University, Ithaca, NY, USA: Arthur W. Barnard & Mian Zhang
2. Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca, NY, USA: Arthur W. Barnard & Paul L. McEuen
3. School of Electrical and Computer Engineering, Cornell University, Ithaca, NY, USA: Mian Zhang, Gustavo S. Wiederhecker & Michal Lipson
4. Gleb Wataghin Physics Institute, University of Campinas, Campinas, Brazil: Gustavo S. Wiederhecker
5. Kavli Institute at Cornell for Nanoscale Science, Cornell University, Ithaca, NY, USA: Michal Lipson & Paul L. McEuen

Contributions: A.W.B. and M.Z. conceived the experiment, designed and fabricated the devices and performed the measurements. A.W.B., M.Z. and G.S.W. performed data analysis. All authors contributed to the writing of the manuscript. M.L. and P.L.M. supervised the project.

Competing interests: The authors declare no competing interests.

Corresponding authors: Correspondence to Michal Lipson or Paul L. McEuen.

## Extended data figures and tables

Extended Data Fig. 1: Tweezer scattering. a, Spatial dependence of optical transmission at a fixed laser detuning as tweezers are scanned across the optical mode. Both tines (blue and yellow dots) are distinguishable and the gap region (orange dot) shows negligible perturbation. b, Cavity transmission spectra at select points in a, along with a far-field spectrum. The dashed line corresponds to the fixed detuning used in a. The small differences in the gap spectrum and the far-field spectrum are attributed to slow thermal drifts between measurements.

Extended Data Fig. 2: Photocurrent mapping of the optical modes. a, Principle of the near-field photocurrent mapping of the cavity field. The suspended CNT is positioned close to the optical cavity and slowly scanned across the perimeter of the optical cavity. Modes 1 and 2 are two standing waves spatially π out of phase. b, Photocurrent signal (Iphoto) from the CNT as the laser wavelength is rapidly swept across the two optical standing wave modes. The alternating photocurrent strength corresponds to the spatial geometry of the two optical standing waves.

Extended Data Fig. 3: Optical polarizability measurements. a, Schematic of the measurement. The CNT is touched to the surface of the cavity (dotted line) and then the tweezers are moved downward in the plane of the cavity, moving the CNT over several optical nodes. b, The resulting transmission data as a function of displacement (plotted from black to red). Resonance spectra are plotted as the difference between the off-resonance power (P0) and the transmitted power (Ptrans) and are displaced for clarity. c, Relationship between shifts in damping γ1 and frequency f1 (referenced to their respective far-field quantities γ01 and f01) for the higher-frequency mode. d, Relationship between the damping rates of both cavities. Vertical and horizontal lines denote the far-field damping rates γ01 and γ02 respectively, and the blue spectra correspond to the maximum damping condition for each mode. The linear fit specifies the maximum damping rate γ ≈ 450 MHz and is due to the orthogonality of the two spatial modes.

Extended Data Fig. 4: Broadband analysis of pseudo-periodic resonances. a, Spectrogram of 145 ms of continuous data, revealing correlations of amplitude and frequency variation among several resonance modes. The power spectrum is plotted on the right with modes labelled based on analysis in Supplementary section 5. b, Correlations between resonance mode amplitudes A and frequency shifts Δf. The frequency shift of $f_{3z}$ is plotted above (blue), with dotted lines corresponding to local minima. The (normalized) amplitudes of five modes are plotted below.

## Supplementary information

Supplementary Information: This file contains Supplementary Discussions 1–16 and Supplementary Figures 1–17. The discussions detail: (1) photocurrent mapping of cavity fields, (2) simulations of the optical cavity field, (3) measurements of CNT polarizability, (4) electrostatic tuning of CNT resonances, (5) optical tomography of mechanical resonances, (6) simulations of mechanical resonances, (7) spectrographic analysis of CNT time traces, (8) measurements of nonlinearity in thermally driven resonances, (9) theory of optical mode perturbation, (10) calculation of optical scattering length, (11) calculation of optomechanical displacement sensitivity, (12) calibration of displacement signal, (13) evidence that spectral diffusion is intrinsic to CNT motion, (14) analysis of spectral diffusion, (15) additional simulation data and (16) analysis of CNT thermal statistics.

Supplementary Video 1: Audio representation of thermally driven spectral diffusion in a CNT. The measured real-time time trace (a segment of Fig. 3 data) is slowed down 1,300 times to form an audible signal.

DOI: https://doi.org/10.1038/s41586-018-0861-0
2019-02-21 10:44:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5934849381446838, "perplexity": 8280.863500449583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247503844.68/warc/CC-MAIN-20190221091728-20190221113728-00240.warc.gz"}
http://math.stackexchange.com/tags/statistics/new
# Tag Info

## New answers tagged statistics

**0** Your sum is not correct. Your $E[S]$ includes the $\frac14$ component applied recursively, which doesn't make sense. EDIT: David's answer (which appeared during this one) is correct. I'll address your second question instead. The $E[Y]$ for the four numbers which end the drawing is obviously $\frac14$. :) Hopefully you can figure out why. For the ...

**1** We have $$E(S)=\sum_{k=1}^{13}E(S\mid\text{first draw is } k)\,P(\text{first draw is } k)\ .$$ But if the first draw is $k=1,\ldots,9$ then the expected sum is $k$ plus the overall expected sum; if the first draw is $k=10,\ldots,13$ then the expected sum is just $k$. So $$E(S)=\frac{1}{13}\sum_{k=1}^9(k+E(S))+\frac{1}{13}\sum_{k=10}^{13}k\ ,\tag{*}$$ and ...

**0** One of the most classical examples might be the following: assume that $(X_i)_{1\leqslant i\leqslant n}$ is i.i.d. exponential with parameter $\lambda$; then the likelihood is $$\ell(x_1,\ldots,x_n)=\lambda^n\mathrm e^{-\lambda(x_1+\cdots+x_n)},$$ hence the MLE for $\lambda$ is $$\hat\lambda=\frac{n}{X_1+\cdots+X_n}.$$ For every $i$, $E(X_i)=1/\lambda$, ...

**1** But if I sum up the $\tilde{f}$ the error would be really high, $n\epsilon$... No. Actually, using the fact that every $x$ is nonnegative, one gets: $$(1-\varepsilon)f(\cdot)\leqslant\bar f(\cdot)\leqslant(1+\varepsilon)f(\cdot)\implies\frac1{1+\varepsilon}\sum_x x\bar f(x)\leqslant\sum_x xf(x)\leqslant\frac1{1-\varepsilon}\sum_x x\bar f(x)$$

**0** Your sum equals: $$\sum_{j=0}^{+\infty}\frac{(j+5)^3}{32\cdot 2^j}\binom{j+4}{j}.\tag{1}$$ Since: $$\sum_{j=0}^{+\infty}\binom{j+4}{j}x^j = \frac{1}{(1-x)^5},\tag{2}$$ by differentiating both sides of $(2)$ three times, you get the sums $\sum_{j\geq 0}\binom{j+4}{4}j^k\,x^j$ with $k\in\{0,1,2,3\}$, hence recombining these pieces you get: ...

**1** Since you are exploring a categorical variable (success-failure), the appropriate test is a chi-square test. You have to perform separate analyses for the four tasks. For each task, build a $2\times8$ contingency table where the 8 columns represent the different methods (single or combined techniques) and the 2 rows represent successes and failures. Then fill ...

**0** For 1), if you've simulated the excess infections under the null then you have numerically estimated the sampling distribution. Just calculate the fraction of the simulation results that are $\geq 3.3$ or $\leq -3.3$. This is the two-sided p-value for your observed placebo excess. For 2), you are not using your simulation, but the $\mathcal{N}(0,12.3)$ ...

**0** If the old data are $x_1$ to $x_N$, then the old variance is given by $$\sigma^2_{\text{old}}=\frac{1}{N}\sum_1^N x_i^2 -\overline{X}_{\text{old}}^2\tag{1}$$ and the new variance is given by $$\sigma^2_{\text{new}}=\frac{1}{N+1}\sum_1^{N+1} x_i^2 -\overline{X}_{\text{new}}^2.\tag{2}$$ You can see that $\sum_1^N x_i^2$ can be recovered from ...

**0** The degrees of freedom used in each case are different, and they also depend on the specific type of test you are doing. Without a description of the meaning of the data and how you collected it, it is difficult to determine whether your calculations are valid. If your observations for the two groups are not paired, even though the per-group sample sizes ...

**0** You can exploit the formulas $$\overline x=\frac1n\sum x_i=\frac{S_1}n,$$ $$\sigma^2=\frac1n\sum(x_i-\overline x)^2=\frac1n\sum x_i^2-\overline x^2=\frac{S_2}n-\overline x^2.$$ Every time you get a new value, update $n$, the sum of the $x_i$ ($S_1$) and the sum of the $x_i^2$ ($S_2$).
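A minimal sketch of this running update in Python (an illustration added here, not part of the quoted answer; it assumes a stream of numeric values):

```python
class RunningStats:
    """Track n, S1 = sum(x_i) and S2 = sum(x_i^2) incrementally."""
    def __init__(self):
        self.n, self.s1, self.s2 = 0, 0.0, 0.0

    def add(self, x):
        self.n += 1
        self.s1 += x
        self.s2 += x * x

    def mean(self):
        return self.s1 / self.n

    def variance(self):
        m = self.mean()
        return self.s2 / self.n - m * m   # population variance, as in the answer

stats = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.add(x)
print(stats.mean(), stats.variance())   # 5.0 4.0
```

For very long streams, Welford's algorithm is numerically safer, but the above matches the formulas in the answer exactly.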
From these you can compute the current mean, variance and standard deviation. ...

**0** You can use a truncated maximum likelihood estimate, assuming the 800 observed values $x_i$ are the top $p\%$ of the population: let $N=\lceil \frac{800}{p\%}\rceil$, $x^-=\min\{x_i\}$, and $$L(\mu,\sigma)=\left\{\Phi\left(\frac{x^--\mu}{\sigma}\right)^{N-801}\right\}\prod\limits_{i=1}^{800}\phi\left(\frac{x_i-\mu}{\sigma}\right),$$ where ...

**1** $$\Gamma\left(\frac{m+1}{2}\right) = \frac{m-1}{2}\cdot\frac{m-3}{2}\cdot\dotsc\cdot\frac{1}{2}\cdot\Gamma\left(\frac{1}{2}\right),$$ and $\Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}$. So it cancelled against $\Gamma\left(\frac{1}{2}\right)$.

**1** Cluster $r$ is a set of points $\{x_1,x_2,\ldots, x_{n_r}\}$. To compute $D_r$, calculate the square of the Euclidean distance between every possible pair of points of cluster $r$, and add up all these numbers. This may look more familiar to you: $$D_r = \sum_{i=1}^{n_r} \sum_{i'=1}^{n_r}\|x_i-x_{i'}\|^2.$$ Then, we claim that the following identity holds ...

**0** I assume you mean that $E[\hat{X}]=E[X\mid D]$. So first, note that if $E[XY\mid D]<\infty$ and $Y$ is measurable with respect to $D$, then $E[XY\mid D]=Y\,E[X\mid D]$. Now, by definition: ...

**1** Since $f_{x}=\frac{a-2y}{(a-x-y)^2}$ and $f_{y}=\frac{2x-a}{(a-x-y)^2}$, $f$ does not have any critical points in the interior of the closed region bounded by the lines $x=0$, $y=x$, and $y=\frac{a}{2}$. On the boundary of this region, $f(x,y)=0$ on the line $y=x$; $f(x,y)=-1$ on the line $y=\frac{a}{2}$; and on the line $x=0$, ...

**1** If $x < y \le a/2$, then $x - y$ is always negative but $a - x - y$ is always positive. That means $f(x,y)$ must be a negative number. So the best we can do is $0 - \epsilon$. Notice that the closer you get to $x = y$, the closer the numerator approaches zero while the denominator stays positive. So the max of this function does not exist (its supremum is ...

**1** One way of modelling what is going on here is via the General Linear Model. In your case, if we label the objects $1$ through to $5$, then supposing the $i$-th object $(i = 1, \ldots, 5)$ is given a score $a_i$, $b_i$, $c_i$ and $d_i$ in each of the four characteristics you have, and the total/overall score of the object is $Y_i$, we model the ...

**1** Since $0\le x<y\le a/2$, we have that $x-y<0$ and $a-x-y>0$. Hence $f(x,y)<0$ everywhere on the given domain. However, for $x+y<a$, we have $\lim\limits_{x\rightarrow y}f(x,y)=0$. This means that $\sup f(x,y)=0$, but this supremum is never achieved.

**1** That's what Wolfram Alpha says: http://www.wolframalpha.com/input/?i=maximize+%28x-y%29%2F%28a-x-y%29+on+0%3C%3Dx%3Cy%3C%3Da%2F2 So the maximum is basically where $$x-y\rightarrow 0$$

**0** This is a heuristic, and as such it comes with advantages, disadvantages, and assumptions. Let's go over the idea, see its assumptions, and see the advantages and disadvantages, and one possible (relatively easy) improvement. Start simple: for the moment, let's forget that there are two ways of talking about how far through a cycle we are. Instead of both ...

**1** Directly using the distribution function: $$P(X \geq 2\alpha \beta)= 1 - P(X < 2\alpha \beta)=1 - F(2\alpha \beta)=1-\frac{1}{\Gamma(\alpha)\beta^\alpha}\int_0^{2\alpha\beta}s^{\alpha-1}e^{-\frac{s}{\beta}}\, ds.$$ Let $u = s/\beta$. Then $$P(X \geq 2\alpha \beta)= 1- \frac{1}{\Gamma(\alpha)}\int_0^{2\alpha}u^{\alpha-1}e^{-u}\, du>0.$$ So ...
**2** If the sample consists of just one person, then the number of smokers is $0$ or $1$, with respective probabilities $1-p$ and $p$, so you have a random variable whose expected value is $p$ and whose variance is $p(1-p)$. If the sample size is $n$, then the number of smokers is the sum of $n$ random variables with that distribution, so the expected number is ...

**0** Wouldn't GRR reflect the fecundity of a population, while NRR reflects the difference between fecundity and mortality?

**1** You are confusing a probability distribution with the distribution of a sample. The sample is what it is; you do not weight them. If, in fact, a certain point is more probable than another, then it should, on average, come up more often in a sample. For a simple example, consider a die where the sides are $\{1,1,1,2,3,4\}$. If I rolled the die $n$ times, I'd ...

**1** I believe the best strategy for a problem of this kind would be to proceed in two steps: Fit a continuous-time Markov chain model to the data by estimating the (infinitesimal) generator $Q$. Using the estimated generator and the Kolmogorov backward equations, find the probability that a Markov chain following the fitted model transitions from state $i$ to ...

**0** Random trials/Monte Carlo simulations are notoriously slow to converge, with an expected error inversely proportional to the square root of the number of trials. In this case it is not hard (given a programming language that provides big integers) to do an exact count of cases. Effectively the outcomes are partitions of the 15 balls into some number of ...

**4** All in terms of binomial coefficients, where $\binom{n}{k}$ is interpreted as the number of combinations of $k$ items from a pool of $n$: ...

**2** $$\mathbb E \bar X = \mathbb E\,\frac{X_1+\cdots+X_9}{9} = \frac 1 9\left(\mathbb E(X_1)+\cdots+\mathbb E(X_9)\right) = \frac 1 9 (5+\cdots+5) = 5.$$ $$\operatorname{var}\bar X = \operatorname{var}\frac{X_1+\cdots+X_9}{9} = \frac{1}{81}\left(\operatorname{var}(X_1)+\cdots+\operatorname{var}(X_9)\right) = \frac{1}{81}(9 + \cdots+9) = 1.$$ Therefore ...

**1** In order to know that the Rao–Blackwell theorem is applicable, you have to know that $\max$ is sufficient, i.e. the conditional distribution of $X_1,\ldots,X_n$ given $\max$ does not depend on $\theta$. The conditional distribution of anything given $\max$ is a function of $\max$. Since $\mathbb E(2X_1)= \theta$, the law of total expectation implies ...

**2** The answer is: $$\frac{365!}{320!\,365^{60}}\cdot\frac{60!}{32!\,3!\,3!\,2!^{11}}\cdot\frac{1}{2!\,11!}$$

**0** Whether to use a $t$-test or $z$-test depends on whether the standard deviation of the population from which the sample is drawn is known. In your case, you are testing whether the average lifespan $\mu_d$ of doctors is less than the average lifespan $\mu_g$ of the general population; i.e., your hypothesis is $H_0 : \mu_d = \mu_g$, vs. ...

**2** If $\max(X_i) = t$ then one of the $X_i$ is equal to $t$ and the other $n-1$ are all less than $t$. The chance that $X_1$ is that one variable is $\tfrac 1 n$. If it is, the conditional expected value is $t$. If it isn't, the value is uniformly distributed on $[0, t)$, and the conditional expected value of that is $t/2$. Double the answer ...

**1** The confidence interval/hypothesis test relationship goes that if 0 is not in the C% confidence interval for a difference, then there is a significant difference at the (100-C)% significance level. So if 0 is not in your 85% confidence interval, then there is a significant difference at the 15% significance level.
I suspect the person who wrote the question ...

**1** I think this should work: let's first calculate the probability that the people enter the room one at a time to form a sequence $T_1, T_2, D_1, D_2, \ldots, D_{11}, S_1, S_2, \ldots, S_{32}$, where $T_i$ is the $i$th triplet of birthdays, $D_i$ is the $i$th double, and $S_i$ is the $i$th singleton. For $T_1, T_2$, we'll have $(\frac{365}{365} \cdot$ ...

**4** The best choice might depend on which type of book you would prefer. In my opinion: if you want to privilege clarity, I would suggest "Models for Probability and Statistical Inference: Theory and Applications" by James H. Stapleton: this is a relatively short but clear and comprehensive book on probability and statistical inference, with a lot of ...

**0** 1) There are two common definitions of the Wilcoxon rank-sum statistic, that have been around since the start (each appears in one of the first two papers that relate to the Wilcoxon tests). One of those is the sum of the ranks in the smallest sample, which sounds like the one you're used to. The Wilcoxon rank-sum statistic and the Mann–Whitney U statistic ...

**5** Proof by induction: For every complete graph $K_{2n}$ with $2n$ vertices, there is a labeling of the edges with $\{-1,+1\}$ such that every vertex has $n$ edges labeled $+1$ and $n-1$ edges labeled $-1$. The claim is trivially true for the null graph since there are no vertices. For the graph $K_2$, label the single edge $+1$. Take a graph $K_{2n}$ labeled ...

**3** Solution of Question 1: This is an occupancy problem with $n=30$ boxes and $k=15$ balls. Let's first consider the expected number of empty boxes. That is much easier to obtain. The exact answer is $30(1-1/30)^{15}=18.04$. This is approximately $30/\sqrt e$. See the answer by Mr.Spot to this question: Making 400k random choices from 400k samples seems to ...

**1** Clearly, for $u<0$, $F_U(u) = \mathbb{P}(\max\{0,X\} \leq u)= 0$. Now, for $u\geq 0$, $$\begin{aligned} F_U(u) &= \mathbb{P}(\max\{0,X\} \leq u) \\ &=\mathbb{P}(X<0)\,\mathbb{P}(\max\{0,X\} \leq u\mid X<0) + \mathbb{P}(X\geq 0)\,\mathbb{P}(\max\{0,X\} \leq u\mid X\geq0)\\ &= \mathbb{P}(X<0) + \mathbb{P}(X\geq 0)\, \mathbb{P}(X \leq u \mid X\geq0) \end{aligned}$$ ...

**2** $$F_{U}(u)=P\{\max\{X,0\}\leq u\}=P\{\omega\in\Omega\mid X(\omega)\leq u\wedge 0\leq u\}$$ and $$F_{X}(u)=P\{X\leq u\}=P\{\omega\in\Omega\mid X(\omega)\leq u\}.$$ If $u<0$ then $\{\omega\in\Omega\mid X(\omega)\leq u\wedge 0\leq$ ...

**0** Let $\mathcal{F}$ be the class of Lipschitz densities with Lipschitz constant bounded by $C>0$. Topologically, we regard $\mathcal{F}$ as a subspace of $L^{1}[0,1]$. We first show that $\mathcal{F}$ is closed in $L^{1}$. Suppose $(f_{n})_{n=1}^{\infty}\subset\mathcal{F}$ converges to $f\in L^{1}$. Passing to a subsequence if necessary, we may assume by ...

**0** For the first inequality, you could (prove and) use the identity $$E(\mathcal O^n)=\int_{\mathbb R}(\mathbf 1_{x\gt0}-\Phi(x)^n)\,\mathrm dx,$$ and the fact that, for every $x$, the sequence $(\Phi(x)^n)$ is decreasing. (Hint: Start from the identity in your post and integrate by parts, using the functions $u=x$ and $v=\Phi(x)^n$.) The second inequality ...

**0** The presence of $\sigma^2$ in your formulae is correct. I suspect a normalization $\sigma = 1$: this is a point to check in the reference (unfortunately I have no access to it). In any case I would define some context. Let $Y=X\theta+\epsilon$ be a regression line with $$E[\epsilon]=0,~~\operatorname{Cov}(\epsilon)=E[\epsilon\epsilon^T]:=\sigma^2 I_n,$$ ...
**3** Pretend there is a fifth student that sits on the empty sofa. Now there are two possibilities. The fifth student can sit in the same place; then you are asking for a derangement of the four students. There are (the closest integer to) $\frac{4!}{e}=9$ of these. Otherwise, the fifth student can move, and you need a derangement of five. There are (the closest integer ...

**1** Some context: a density means a nonnegative integrable function on $[0,1]$ with integral equal to $1$. Total boundedness is understood in the $L^1$ norm. Let $\mathcal F_C$ be the set of $C$-Lipschitz densities. I'll prove that it is totally bounded in the uniform norm $\sup|f|$, which will imply total boundedness in the $L^1$ norm (since the $L^1$ norm ...

**0** I'll work straight from the definition. $T_1$ is sufficient for $\theta_1$ if $\theta_2$ is known. I take that to mean that the conditional distribution of the data given $T_1$ does not change when $\theta_1$ changes but $\theta_2$ remains fixed. Similarly, the conditional distribution of the data given $T_2$ does not change when $\theta_2$ changes but ...

**0** This question is similar to "Motivation behind standard deviation?", but since you specifically ask for a parallel between standard deviation and distance, here goes. The Euclidean distance, unlike for example Manhattan distance, is compatible with an inner product. The inner product of two vectors $x,y$ is $x\cdot y = \sum x_i y_i$ (also known as the dot ...

**0** A simple solution, ignoring the effect on larger numbers, is to take your two numbers $x,y$ and compute $(e^x - e^y)/e^x$ instead of $(x-y)/x$. If the numbers are small this will return a value somewhere close to the range $[-1,1]$ instead of giving wild answers.

**0** A relatively simple way to adjust for the number of reviews is to divide all rankings for all locations by 5, so your scores are normalized rankings $r_i$; then for each location $i$ calculate the average normalized ranking $\bar r_i$ and assign each location the following score: $y_i=\frac{\bar r_i-b}{\nu_i n_i+2}$ - this is a modified version ...

**0** I will assume the balls and boxes are indistinguishable. The first problem is: If I distribute $15$ balls among $30$ boxes, what is the probability that at most $10$ boxes contain a ball? First, the fact that there are $30$ boxes does not matter, since they are indistinguishable. So we only need to consider the problem as if there were $15$ boxes. Second, ...

Top 50 recent answers are included
2014-07-30 01:03:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9593156576156616, "perplexity": 342.81700828567136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268363.15/warc/CC-MAIN-20140728011748-00011-ip-10-146-231-18.ec2.internal.warc.gz"}
https://indico.sissa.it/event/6/timetable/?view=standard
# MHPC Workshop on High Performance Computing (timezone: Europe/Rome)

SISSA, International School for Advanced Studies, Via Bonomea 265, 34136 Trieste, Italy

Description: The Master in High Performance Computing (MHPC) program is organizing its first workshop to highlight the two foundation pillars of the program: training and science. The workshop, which will run from 24 to 26 February 2016, will be a unique and international event that will highlight state-of-the-art High Performance Computing applied to computational science and engineering. The workshop will be held in Trieste (Italy) at SISSA, the International School for Advanced Studies, and it's a great opportunity for leading scientists, students and vendors. The best MHPC thesis prize will be awarded during this event, and the official graduation ceremony will take place. Participation is totally free.

Partners: INAF

Participants: Alberto Branchesi • Alberto Ferrari • Alberto Morgante • Alberto Sartori • Alberto Venturato • Alessandro Candolini • Alessandro Laio • Alessandro Marassi • Alessandro Renzi • Alessia Andò • Alessio Ansuini • Alessio Berti • Alex Rodriguez • Andrea Bressan • Andrea Magrin • Andrea Marangon • Andrea Mola • Anirban Roy • Anna Somma • Antonio Lanza • Arjuna Scagnetto • Aurora Maurizio • Bojan Zunkovic • Claudia Parma • Clement Onime • Cristiano De Nobili • Célia Laurent • César Ernesto González González • Damas Makweba • Daniele Bertolini • Daniele Ceravolo • Daniele Tavagnacco • Daniele Tolmelli • Davide Cingolani • Davide Gei • Donatella Lucchesi • Edmondo Orlotti • Edoardo Milotti • Edvin Močibob • Eric Pascolo • Erik Romelli • Estelle Maeva Inack • Evelina Parisi • Ezio Corso • Fabio Affinito • Fabio Gallo • Fabio Pasian • Fabio Pichierri • Fatema Mohamed • Federico Carminati • Filip Skrinjar • Filippo Salmoiraghi • Filippo Spiga • Francesco Ballarin • Francesco De Giorgi • Francesco Longo • Francesco Schiumerini • Franco Vaccari • Gianfranco Gallizia • Gianluca Coidessa • Gianluca Gustin • Gianluca Orlando • Gianluigi Rozza • Giorgia Del Bianco • Giorgio Bolzon • Giorgio Pastore • Giovanni Alzetta • Giovanni Corsi • Giovanni Grilli di Cortona • Giulia Matilde Ferrante • Giuliano Taffoni • Giuseppe Chechile • Giuseppe Murante • Giuseppe Piero Brandino • Giuseppe Pitton • Giuseppe Puglisi • Guido Bortolami • Guido Lupieri • Ivan Girardi • Ivan Girotto • Jack Dongarra • Jacopo Surace • Jimmy Aguilar Mena • Juan Carlos Vasquez Carmona • Juan Manuel Carmona Loaiza • Jure Pečar • Katy Alazo-Cuartas • Kevin Bianco • Klaus Zimmermann • Laura Bertolini • Leonardo Belpassi • Leonardo Romor • Loris Ercole • Luca Degano • Luca Della Mora • Luca Donatini • Luca Heltai • Luca Tornatore • Luka Živulović • Mahbube Rustaee • Marc Saint Georges • Marco Borelli • Marco Briscolini • Marco Buttu • Marco De Pasquale • Marco Pividori • Marco Raveri • Marco Reale • Marco Tezzele • Maria Berti • Maria d'Errico • Maria Peressi • Maria Verina • Mariami Rusishvili • Marina Cobal • Marko Kobal • Marlon Brenes • Massimo Masera • Massimo Tormen • Matteo Cerminara • Matteo Nori • Matteo Rinaldi • Matteo Sandri • Matteo Simone • Mauro Bardelloni • Michele Vidotto • Miguel Carvajal • Mila Bottegal • Milena Valentini • Minase Tekleab • Moreno Baricevic • Najmeh Foroozani • Nicola Bassan • Nicola Cavallini • Nicola Demo • Nicola Giuliani • Nicola Marzari • Noe Caruso • Ornela Maloku • Ornela Mulita • Paolo F.sco Lenti • Paolo Giannozzi • Peter Klin • Peter Labus •
Philippe Cance • Pierluigi Di Cerbo • Piero Colli Franzone • Rajesh Babu Muda • Ralph Gebauer • Riccarda Bonsignori • Riccardo Di Meo • Riccardo Pigazzini • Rita Carbone • Roberto Siagri • Romain Murenzi • Rossella Aversa • Sabrina Visintin • Sandro Scandolo • Sanzio Bassini • Sebastiano Saccani • Seher Karakuzu • Seyed Ehsan Nedaaee Oskoee • Seyyedmaalek Momeni • Shima Talehy Moineddin • Silvano Simula • Simone Brazzale • Simone Economo • Simone Martini • Simone Peirone • Simone Piccinin • Simone Scacchi • Stefano Alberto Russo • Stefano Borgani • Stefano Cozzini • Stefano Cristiani • Stefano de Gironcoli • Stefano Piani • Stefano Piano • Stefano Salon • Stella Valentina Paronuzzi Ticco • Thomas Gasparetto • Thomas Puzzera • Tomaso Esposti Ongaro • Tommaso Bianucci • Tommaso Gorni • Ulrich Singe • Valentino Pizzone • Vedran Skrinjar • Veronica Biffi • Virginia Carnevali • Vittorio Sciortino • Volker Springel • Wolfgang Bangerth • Zakia Zainib

## Wednesday, 24 February

• 13:00–14:00 Registration (Room 128)
• 14:00–14:15 Welcome and Introduction (Room 128)
• 14:15–15:45 Tutorial: HPC aspects of the Quantum ESPRESSO package, Part 1 (Room 128). Conveners: Fabio Affinito, Filippo Spiga (QE Foundation / University of Cambridge)
  • 14:15 QE, main strategies of parallelization and levels of parallelism (45m). Speaker: Fabio Affinito
  • 15:00 QE, methodologies to develop, maintain and test toward code modernization and code sustainability (45m). Speaker: Filippo Spiga (QE Foundation / University of Cambridge)
• 15:45–16:15 Coffee (30m)
• 16:15–17:45 Tutorial: HPC aspects of the Quantum ESPRESSO package, Part 2 (Room 128)
  • 16:15 QE and many-core architectures (45m). Speaker: Fabio Affinito
  • 17:00 QE and heterogeneous architectures (45m). Speaker: Filippo Spiga (QE Foundation / University of Cambridge)

## Thursday, 25 February

• 08:30–09:00 Registration (Aula Magna Paolo Budinich)
• 09:00–09:25 Welcome and general introduction (Aula Magna Paolo Budinich)
  • 09:00 Welcome address by the SISSA Director (5m)
  • 09:05 Welcome address by the ICTP Director (5m)
  • 09:10 Welcome address by Governmental Institutions (15m)
• 10:30–11:00 Coffee Break (Lobby, Aula Magna Paolo Budinich)
• 11:00–12:15 HPC projects in FVG, Italy, and Europe (Aula Magna Paolo Budinich). Convener: Stefano Ruffo
  • 11:00 Perspectives of HPC in FVG (15m). Speaker: Sandro Scandolo (ICTP)
  • 11:15 The Quantum ESPRESSO Project (15m). Speaker: Paolo Giannozzi (UNIUD)
  • 11:30 The MAX Center of Excellence (15m). Speaker: Elisa Molinari (CNR NANO)
  • 11:45 The EXANEST project (15m). Speaker: Giuliano Taffoni (OATS - INAF)
  • 12:00 The INFN vision for the future of HPC and HTC, from regional areas to Europe (15m). Speaker: Donatella Lucchesi (INFN - University of Padova)
• 12:15–12:45 Graduation and Best Thesis Award Ceremony (Aula Magna Paolo Budinich). The 2015 class will be awarded the MHPC diploma. The best thesis prize will be awarded as well.
• 12:45–14:00 Lunch (Lobby, Aula Magna Paolo Budinich)
• 14:00–15:15 HPC in industry: some regional examples (Aula Magna Paolo Budinich). Convener: Stefano Cozzini
  • 14:00 The Personal Computer is dead. Long life the Personal HPC. (30m) The exponential growth of computation is very close to an evolutionary step in the way we use HPC, extending and expanding the class of problems it can address. The ongoing digital transformation and software containerization are enabling the use of HPC in most fields of human activity. The new digital, hyperconnected world needs HPC scientists, and not just Data Scientists. Speaker: Roberto Siagri (Eurotech spa)
  • 14:30 Big Data and HPC @ Generali (15m) Generali is one of the most established insurance companies in Europe, looking ahead to innovative product development and new markets across the world. In order to better serve business lines, as well as to identify products valuable to customers, Generali created the Group Chief Data Office function, whose mission is to define and implement strategies and methods to acquire, analyze and govern data. Evaluating and adopting several modern data analysis techniques, the GCDO is addressing the Big Data challenge and will leverage HPC in dedicated research beyond traditional insurance modeling analysis. Speaker: Dr Alberto Branchesi (Generali spa)
  • 14:45 HPC solutions for Ship Designing @ Fincantieri (15m) Fincantieri is one of the leading ship design companies in the world. In 2014 we deployed an HPC cluster that is mainly used to perform Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEM) calculations. We equipped our cluster with a Citrix virtualization solution that allows our users to perform their pre-processing activities directly on a few reserved nodes on the cluster. CFD calculations are usually performed to optimize the hull, the appendages and the propellers. FEM calculations are performed to assess the behaviour of the ship's internal structure and its response to vibrations. Speakers: Dr Gianluca Gustin (Fincantieri), Dr Giuseppe Chechile (Fincantieri)
  • 15:00 MHPC Thesis: A computational ecosystem for near real-time processing of satellite data (15m) The aim of this work is the development of a computational ecosystem for nearly real-time inversion of high spectral resolution infrared data coming from meteorological satellites.
The ecosystem has been developed as a nearly real-time demonstration project to elaborate the level 2 products derived from MTG-IRS. Speaker: Stefano Piani (MHPC - eXact-lab srl)
• 15:15–15:45 Coffee Break (Lobby, Aula Magna Paolo Budinich)
• 15:45–17:45 HPC in Mathematics (Aula Magna Paolo Budinich). Conveners: Luca Heltai, Nicola Cavallini
  • 15:45 Finite Element Methods at Realistic Complexities (40m) Solving realistic, applied problems with the most modern numerical methods introduces many levels of complexity. In particular, one has to think about not just a single method, but a whole collection of algorithms: a single code may utilize fully adaptive, unstructured meshes; nonlinear, globalized solvers; algebraic multigrid and block preconditioners; and do all this on 1,000 processors or more with realistic material models. Codes at this level of complexity can no longer be written from scratch. However, over the past decade, many high-quality libraries have been developed that make writing advanced computational software simpler. In this talk, I will briefly introduce the deal.II finite element library (http://www.dealii.org), whose development I lead, and show how it has enabled us to develop the ASPECT code (http://aspect.dealii.org) for the simulation of thermal convection. The project also builds on a variety of other libraries (e.g., p4est, Threading Building Blocks, and Trilinos) that provide parallelism at various levels. I will discuss some of the results obtained with this code and comment on the lessons learned from developing this massively parallel code for the solution of a complex problem. Speaker: Wolfgang Bangerth (Texas A&M)
  • 16:25 MHPC Thesis: Hybrid Parallelisation Strategies for Boundary Element Methods (20m) Whenever a mathematical problem admits a boundary integral representation, it can be straightforwardly discretised by Boundary Element Methods (BEM). In this work, we present an efficient hybrid parallel solver for FSI problems based on collocation BEM. The major bottlenecks of a serial BEM implementation are the computational cost and the memory requirements needed, respectively, to assemble and store the full BEM matrices. Both memory storage and assembly CPU time scale with the square of the number of degrees of freedom. We present two different strategies to parallelise BEM implementations. The first uses an MPI strategy, in which we distribute both the assembly workload and the storage requirement among different processors, maintaining the classical BEM structure (and algorithm complexity). This approach leads to optimal strong and weak scalability for the matrix assembly cycles and the matrix-vector multiplication, although the overall algorithm remains of order $O(N^2)$. In the second strategy, we employ a Fast Multipole Method (FMM) to reduce the computational cost and memory allocation of the BEM problem resolution to $O(N)$, and we use a hybrid MPI and multi-threaded parallelization strategy. This implementation combines direct BEM close-range interactions with FMM long-range couplings, and represents the state of the art in parallel BEM solvers. The BEM-FMM algorithm calls for a hybrid solution, since the algorithm inherently requires a lot of communication among different processors.
We address the main parallelisation techniques to be used in a hybrid parallel BEM-FMM implementation, for which we used the Intel Threading Building Blocks paradigm to handle multicore platforms, and MPI for the communication between different processors. We present strong and weak scalability results, together with an optimality result concerning how to properly set the hierarchical FMM space subdivision. Speaker: Nicola Giuliani (MHPC - SISSA)
  • 16:45 High performance computing for computational electrocardiology. Part I: motivation and mathematical models (30m) Life sciences could benefit immensely from the massive growth of HPC processing power that has occurred in the last ten years. Indeed, complex biological systems are described by sophisticated mathematical models, whose solution requires highly scalable solvers. In particular, as far as cardiac electrophysiology is concerned, the simulation of the electrical excitation of the heart muscle, and the subsequent contraction-relaxation process, represents a challenging computational task. In the present talk, we will describe the main mathematical model of the cardiac electrical and mechanical interactions, the so-called cardiac electro-mechanical coupling model. This model consists of a system of non-linear partial differential equations (PDEs), constituted by four sub-models: the quasi-static anisotropic finite elasticity equations, describing the macroscopic deformation of the cardiac tissue; the active tension model, i.e. a system of ordinary differential equations (ODEs) describing the intracellular calcium dynamics and the consequent generation of the cellular force; the anisotropic Bidomain model, i.e. a system of degenerate parabolic reaction-diffusion PDEs describing the electrical current flow through the tissue; and the membrane model, i.e. a stiff system of ODEs describing the bioelectrical activity of the membrane of cardiac cells. We will finally present the results of three-dimensional simulations of the full cardiac excitation-contraction process. Speaker: Piero Colli Franzone (UNIPV)
  • 17:15 High performance computing for computational electrocardiology. Part II: scalable solvers (30m) The complex interaction between the cardiac bioelectrical and mechanical phenomena is modeled by a system of non-linear partial differential equations (PDEs), known as the cardiac electro-mechanical coupling (EMC) model. Due to the extremely different spatial and temporal scales of the physical phenomena occurring during a single heartbeat, the discretization of the EMC model with finite elements in space and finite differences in time yields the solution of thousands of large-scale linear systems, with $O(10^6-10^8)$ degrees of freedom each. The effective solution of such linear systems requires the use of hundreds or thousands of processors and, consequently, highly scalable preconditioners. In this presentation, we will first introduce two classes of Domain Decomposition preconditioners, the Multilevel Additive Schwarz (MAS) and the Balancing Domain Decomposition by Constraints (BDDC) preconditioners, in the simple setting of a scalar elliptic PDE. Then, we will extend such preconditioners to the solution of the reaction-diffusion PDEs and of the non-linear elasticity system constituting the EMC model. Finally, the results of three-dimensional parallel simulations will demonstrate the effectiveness of the resulting algorithms.
Speaker: Simone Scacchi (UNIMI) • Friday, 26 February • 08:45 10:15 HPC in science: Condensed Matter Room 128 SISSA, International School for Advanced Studies Via Bonomea 265, 34136 Trieste, Italy Convener: Ivan Girotto • 08:45 Here and now: the intersection of computational science and computer science 40m Quantum-mechanical simulations have become dominant and widely used tools for scientific discovery and technological advancement; since they are performed without any experimental input or parameter, they can streamline, accelerate, or replace actual physical experiments. This is a far-reaching paradigm shift, substituting the cost- and time-scaling of brick-and-mortar facilities, equipment, and personnel with those, very different, of computing engines. Nevertheless, computational science remains anchored to a renaissance model of individual artisans gathered in a workshop, under the guidance of an established practitioner. Great benefits could follow from rethinking such a model, while adopting concepts and tools from computer science for the automation, management, preservation, analytics, and dissemination of these computational efforts. I will offer my perspective on the current state of the art in the field, its power and limitations, and the role and opportunities of high-throughput computing (HTC, rather than HPC), of open-source codes and workflows, and of big data available on demand. Speaker: Nicola Marzari (EPFL) • 09:25 High Performance Computing and Materials Science: How atomistic simulations can pave the way for clean and sustainable energy 20m The availability of cheap and abundant energy was one of the main drivers of the industrial revolution. To this day, energy remains an essential ingredient for many aspects of human activity. It is recognized that a major challenge of our times is the transition towards sustainable energy conversion, moving away from carbon-based fossil fuels. Developing more efficient and cheaper ways to convert wind or solar radiation into electricity, or to store electric energy, are important steps in this transition. Computer simulations at the atomic scale can lead to a detailed understanding of the fundamental steps during energy conversion. In this presentation, I will illustrate a few cases where such a "computational microscope" can be used by materials scientists to develop better solar cells or to more efficiently use solar light to split water into hydrogen and oxygen. In these cases, high performance computing allows for a screening of potential materials, before they have even been synthesized in a laboratory. Speaker: Ralph Gebauer (ICTP) • 09:45 MHPC thesis: High-performance implementation of the Density Peak clustering algorithm 15m We developed a parallel implementation of the “Density Peak” clustering algorithm, exploiting C++11, OpenMP and the FLANN library for k-nearest-neighbour search. The modified algorithm is approximately 50 times faster than the original version on datasets with half a million points, and scales almost linearly with the dataset size. Thanks to improvements in the density estimation and assignation procedure, the algorithm is also unsupervised and non-parametric. Speaker: Marco Borelli (MHPC - SISSA) • 10:00 Improving Performance of Basis-set-free Hartree-Fock Calculations Through Grid-based Massively Parallel Techniques 15m A multicenter numerical integration scheme for polyatomic molecules has been implemented as an initial step toward developing a complete basis-set-free Hartree-Fock (HF) software package. 
The validation of the integration scheme includes the integration of the total density and the calculation of Coulomb potentials for several diatomic molecules. A finite difference method is used to solve Poisson's equation for the Coulomb potential on numerical orbitals expanded on the interlocking multicenter quadrature grid. The implementation, which relies on OpenMP and CUDA, shows a speedup of up to 30x. • 10:15 10:45 Coffee 30m • 10:45 13:00 HPC in science: Astrophysics and Earth Science Room 128 SISSA, International School for Advanced Studies Via Bonomea 265, 34136 Trieste, Italy Convener: Giuliano Taffoni • 10:45 Simulating Cosmic Structure Formation 30m Numerical simulations on supercomputers play an ever more important role in astrophysics. They have become the tool of choice to predict the non-linear outcome of the initial conditions left behind by the Big Bang, providing crucial tests of cosmological theories. However, the problem of galaxy and star formation confronts us with a staggering multi-physics complexity and an enormous dynamic range that severely challenges existing numerical methods. In my talk, I review current strategies to address these problems, focusing on recent developments in the field such as hierarchical time integration schemes and improved particle- and mesh-based hydrodynamical solvers. I will also discuss a selection of current results and highlight some challenges for the future. Speaker: Volker Springel (The Heidelberg Institute for Theoretical Studies) • 11:15 Numerical simulations of galaxies and galaxy clusters at the Trieste Observatory 20m Speaker: Giuseppe Murante (OATS-INAF) • 11:35 HPC for Earth Sciences: training opportunities and research challenges 20m Speaker: Stefano Salon (OGS) • 11:55 MHPC Thesis: Shyfem Parallelization: An innovative task approach for coastal environment FEM software 20m SHYFEM is a finite element hydrodynamic code written by Georg Umgiesser in the 1980s to model the Venice lagoon for his master's thesis; its development has been continued by the CNR-ISMAR group. It is one of the few open-source codes for coastal areas that use a finite element approach. SHYFEM is a very important resource because it is focused on coastal areas and can be coupled with other software in order to increase the simulation accuracy in such areas. Coastal areas are strategic because many human activities are concentrated there. This means that software that produces an accurate representation of coastal areas may also benefit socio-economic activities. SHYFEM has already been successfully applied to several coastal and lagoon environments; for example, it is used to produce tidal forecasts in the Venice lagoon and other lagoons in the Mediterranean Sea. It is also used in the Danube Delta, to estimate its effects on the Black Sea, and in Malta to produce coastal forecasts. The main goal of this work is to obtain a new version of SHYFEM that is faster, parallel, capable of using modern hardware efficiently, and easily coupled with other software. 
Speaker: Eric Pascolo (MHPC - OGS) • 12:15 HPC in Europe Prace 30m Speaker: Sanzio Bassini (CINECA) • 13:00 14:15 Lunch 1h 15m • 14:15 16:15 HPC in science: High Energy Physics Room 128 SISSA, International School for Advanced Studies Via Bonomea 265, 34136 Trieste, Italy Convener: Andrea Bressan • 14:15 The path toward High Performance Computing in High Energy Physics 40m Speaker: Federico Carminati (CERN) • 14:55 Studies of Flavor and Hadron Physics using Lattice QCD simulations with modern HPC hardware 20m Speaker: Silvano Simula (INFN - Roma3) • 15:15 High Performance Computing in the ALICE experiment 20m Speaker: Massimo Masera (Università di Torino) • 15:35 MHPC Thesis: Analysis of Hybrid Parallelization strategies: Simulation of Anderson Localization 20m This thesis presents two experiences of hybrid programming applied to condensed matter and high energy physics. The two projects differ in various aspects, but both of them aim to analyse the benefits of using accelerated hardware to speed up the calculations in current science-research scenarios. The first project enables massive parallelism in a simulation of the Anderson localisation phenomenon in a disordered quantum system. The code represents a Hamiltonian in momentum space, then executes a diagonalization of the corresponding matrix using linear algebra libraries, and finally analyses the energy-level spacing statistics averaged over several realisations of the disorder. The implementation combines different parallelization approaches in a hybrid scheme. The averaging over the ensemble of disorder realisations exploits massive parallelism with a master-slave configuration based on both multi-threading and the message passing interface (MPI). This framework is designed and implemented to easily interface with similar applications commonly adopted in scientific research, for example in Monte Carlo simulations. The diagonalization uses multi-core and GPU hardware, interfacing with the MAGMA, PLASMA or MKL libraries. Access to the libraries is modular, to guarantee portability, maintainability and extensibility in the near future. The second project is the development of a Kalman Filter, including the porting to GPU architectures and autovectorization for online LHCb triggers. The developed codes provide information about the viability and advantages of applying GPU technologies in the first triggering step for the Large Hadron Collider beauty experiment (LHCb). The optimisation introduced in both codes for CPU and GPU delivered a relevant speedup of the Kalman Filter. The two GPU versions, in CUDA and OpenCL, have similar performance and are adequate to be considered in the upgrade and in the corresponding implementations of the Gaudi framework. In both projects we implement optimisation techniques in the CPU code. This report presents extensive benchmark analyses of the correctness and performance of both projects. Speaker: Jimmy Aguilar Mena (MHPC - ICTP) • 16:15 16:45 Coffee 30m
2021-09-21 02:48:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35304373502731323, "perplexity": 10645.094134175855}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057131.88/warc/CC-MAIN-20210921011047-20210921041047-00427.warc.gz"}
http://mathhelpforum.com/calculus/127770-volume-solid-hemisphere.html
# Math Help - volume of solid hemisphere 1. ## volume of solid hemisphere Estimate the volume of a solid hemisphere of radius 3, imagining the axis of symmetry to be the x-axis. Partition the interval [0,3] into six subintervals of equal length and approximate the solid with cylinders based on the circular cross sections of the hemisphere perpendicular to the x-axis at the subintervals' left endpoints. How would I start this problem? Any help is appreciated! 2. Originally Posted by live_laugh_luv27 Estimate the volume of a solid hemisphere of radius 3, imagining the axis of symmetry to be the x-axis. Partition the interval [0,3] into six subintervals of equal length and approximate the solid with cylinders based on the circular cross sections of the hemisphere perpendicular to the x-axis at the subintervals' left endpoints. How would I start this problem? Any help is appreciated! So we're using Left Hand Sums. First find the height of each cylinder. $\Delta x = \frac{b-a}{n} = \frac{3 - 0}{6} = 0.5$ It's six subintervals, so we'll have six cylinders. Note that the radius function is the upper half of the circle of radius 3: $y=\sqrt{9-x^2}$ The radii for the six cylinders are the y-values at the left endpoints of the subintervals: [0, 1/2] [1/2, 2/2] [2/2, 3/2] [3/2, 4/2] [4/2, 5/2] [5/2, 6/2] The volume formula for each cylinder is $V_i = \pi r^2 h = \pi y^2 \Delta x$ For example: $V_1 = \pi (f(0))^2 \cdot 0.5 = \pi \cdot 9 \cdot 0.5 = \frac{9 \pi}{2}$ $V_2 = \pi (f(1/2))^2 \cdot 0.5 = \pi \cdot \frac{35}{4} \cdot 0.5 = \frac{35 \pi}{8}$ .... Compute the remaining cylinders and add them up to get the approximation. Good luck!!
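For anyone who wants to check the arithmetic, here is a minimal Python sketch of the same left-endpoint cylinder sum; the variable names are illustrative choices, not part of the original problem.

```python
import math

r, n = 3.0, 6                       # hemisphere radius and number of subintervals
dx = (r - 0.0) / n                  # width of each subinterval (0.5)

def f(x):
    """Radius of the circular cross section at x: upper half of x^2 + y^2 = 9."""
    return math.sqrt(r**2 - x**2)

# Left-endpoint sum: each cylinder has radius f(x_i) and height dx.
left_endpoints = [i * dx for i in range(n)]
approx = sum(math.pi * f(x)**2 * dx for x in left_endpoints)

exact = (2.0 / 3.0) * math.pi * r**3    # true volume of a hemisphere of radius 3
print(approx, exact)                    # about 63.22 vs about 56.55
```

As expected, since the radius function is decreasing on [0,3], the left-endpoint sum (20.125π ≈ 63.22) overestimates the exact volume 18π ≈ 56.55.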
2015-07-31 22:17:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8806604146957397, "perplexity": 739.8259404903208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988312.76/warc/CC-MAIN-20150728002308-00176-ip-10-236-191-2.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/251539/properties-of-the-white-noise-process/251730
# Properties of the white noise process The stochastic process $\{u_t\}$ is a white noise process if and only if 1. $Eu_t=0$ for all integers $t$; and 2. $E(u_t u_{t+k})=\sigma^2\textbf{1}\{k=0\}$ for all integers $t$ and $k$, where $\sigma>0$ and $\textbf{1}\{k=0\}$ is equal to $1$ if $k=0$ and equal to $0$ if $k\neq 0$. I have heard from my lecturer that a white noise process satisfies $E_tu_{t+1}=0$, where $E_t$ is the expectation conditional on information about $\{u_k\}_{k\leq t}$. Question. Is it true that a white noise process $\{u_t\}$ satisfies $E_tu_{t+1}=0$? If 'yes', then why is it true and how do I derive that conclusion? If 'not', then what is a counterexample, and is it true under some reasonable assumption (e.g., assuming that the random variables in the stochastic process $\{u_t\}$ are independent)? Attempt 1. By (2), $E(u_tu_{t+1})=0$. From this it follows by the law of total expectation that $E(E_t(u_tu_{t+1}))=0$. Since we are conditioning on information about the white noise process for the time periods $k\leq t$, it follows that $E(u_tE_tu_{t+1})=0$. Now, $E_tu_{t+1}=0$ is consistent with the last expression, but I do not see how it follows deductively (if it does). (Since I view $E_tu_{t+1}$ as a given real number, I think the last expression simplifies to $Eu_t\cdot E_tu_{t+1}=0$, which is satisfied whether or not $E_tu_{t+1}=0$ because $Eu_t=0$ by (1).) Attempt 2. After considering Alexey's comment to my question, I tried to write an answer. To begin with, if we assume independence in the sense that $u_t$ is a stochastic variable independent of its history before time period $t$, then the distribution of $u_{t+1}|I_t$ coincides with the distribution of $u_{t+1}$, where $I_t$ is the information set up to time period $t$. Thus, in this case we have $E_tu_{t+1}=Eu_{t+1}=0$. After this I tried to find a counterexample to the conclusion that $E_tu_{t+1}=0$ for any white noise process $\{u_t\}$. I found a dependent white noise process, but not one that satisfied $E_tu_{t+1}\neq 0$. The example is the following. Let $\{v_t\}$ be an i.i.d. process such that $P(v_t=-1)=P(v_t=1)=1/2$ for all integers $t$. Define a new stochastic process by $$u_t=v_t(1-v_{t-1}).$$ First, let me check that it is a white noise process. Firstly,\begin{align}Eu_t&=E(v_t(1-v_{t-1}))\\ &=Ev_tE(1-v_{t-1})\\ &=0\cdot 1 =0\end{align} where the second equality follows by independence and the third equality from the fact that $Ev_t=1/2-1/2=0$. Secondly, for any integer $t$ and for $k\notin\{-1,0\}$, \begin{align}Eu_tu_{t+k}&=E(v_t(1-v_{t-1})v_{t+k}(1-v_{t+k-1}))\\ &=E(v_t(1-v_{t-1})(1-v_{t+k-1}))E(v_{t+k})\\ &=E(v_t(1-v_{t-1})(1-v_{t+k-1}))\cdot 0\\ &=0\end{align} (for $k=-1$ the variable $v_{t+k}=v_{t-1}$ is not independent of the remaining factors, but one can instead factor out $E(v_t)=0$ and reach the same conclusion), and, if $k=0$, \begin{align}Eu_t^2 &=Ev_t^2\cdot E(1-v_{t-1})^2\\ &=((-1)^2/2+1^2/2)\cdot(2^2/2+0^2/2)\\ &=2\end{align} which is finite. Thus, $\{u_t\}$ is a white noise process. Is the process dependent? Yes, since e.g. $u_t=2$ implies $v_t=1$ and $v_{t-1}=-1$ and thus $u_{t+1}=v_{t+1}(1-v_t)=0$. This means that $$P(u_t=2,u_{t+1}=2)=0.$$ However, $$P(u_t=2)P(u_{t+1}=2)=1/4\cdot 1/4=1/16,$$ and hence $$P(u_t=2,u_{t+1}=2)\neq P(u_t=2)P(u_{t+1}=2).$$ From here on, I have tried to construct an information set $I_t$ such that $E_tu_{t+1}\neq 0$, but without success. I have also tried to somehow change the definition of $v_t$ or $u_t$. Maybe it would work if $u_t$ were a product of two distinct stochastic processes. • Under the introduced definition it is not true. You should think of a counterexample; in this case it is not hard to construct one. 
You should take into account that there is no independence assumption (actually, the common definition of the white noise process is different from yours; see the wiki for an example) Dec 14, 2016 at 12:49 • Thanks for the input, I will try to come up with a counterexample. I was using lectures.quantecon.org/jl/arma.html, and the definition given there seems to be the same as the definition given in en.wikipedia.org/wiki/White_noise#White_noise_vector. I.i.d. noise is a special case of a white noise process; does the conclusion follow if the process is an i.i.d. noise process? Dec 14, 2016 at 12:54 • As you can see from the wiki, usually we need an independence property for $u_t$ (also an alternative "weak" definition is possible). For an i.i.d. process your conclusion follows from the fact that the conditional distribution coincides with the unconditional distribution, as the values of a realization of the process at different times are i.i.d. Dec 14, 2016 at 13:07 • Yes, I see that. Hmm, okay, in essence, as I understand you, independence implies that the distribution of $u_{t+1}|I_t$ is equal to the distribution of $u_{t+1}$, where $I_t$ is the information set up until period $t$? Dec 14, 2016 at 13:21 • Yes, it implies. Dec 14, 2016 at 13:23 Consider the i.i.d. process $\{x_t\}$ where $P(x_t=0)=P(x_t=2)=1/2$ for each integer $t$. This sequence satisfies $E(x_t)=1$ for each integer $t$. Now, construct the process $\{u_t\}=\{x_t^2(1-x_{t-1})\}$. For each integer $t$, then, we have \begin{align}E(u_t)&=E(x^2_t(1-x_{t-1}))\\ &=E(x_t^2)(1-E(x_{t-1}))\\ &=E(x_t^2)(1-1)\\ &=0\end{align} where the second equality follows from independence and the linearity of expectation. Furthermore, \begin{align}Eu_t^2&=Ex_t^4E(1-x_{t-1})^2\\ &=2^4/2\cdot [1^2/2+(-1)^2/2]\\ &=8,\end{align} which is finite and independent of $t$. For $k\neq -1$ and $k\neq 0$ (the case $k=0$ was computed above) we also have \begin{align}Eu_tu_{t+k}&=E(x_t^2(1-x_{t-1})x_{t+k}^2(1-x_{t+k-1}))\\ &=E(1-x_{t-1})E(x_t^2x_{t+k}^2(1-x_{t+k-1}))\\ &= 0\cdot E(x_t^2x_{t+k}^2(1-x_{t+k-1}))\\ &=0, \end{align} and if $k=-1$ we may do as follows: \begin{align}Eu_tu_{t-1}&=E(x_t^2(1-x_{t-1})x_{t-1}^2(1-x_{t-2}))\\ &=E(1-x_{t-2})E(x_t^2(1-x_{t-1})x_{t-1}^2)\\ &= 0\cdot E(x_t^2(1-x_{t-1})x_{t-1}^2)\\ &=0. \end{align} In other words, $\{u_t\}$ is a white noise process. To show that $E_tu_{t+1}\neq 0$, consider the information set $I_t$ up until time period $t$ which says that $u_t=0$. Then $x_t^2(1-x_{t-1})=0$. By construction this can only be the case if $x_t=0$. Hence, $u_{t+1}=x_{t+1}^2(1-0)=x_{t+1}^2$ if $u_t=0$ is given. (Note that this suggests that $u_{t+1}$ depends on information in time periods $k\leq t$.) Hence, as the information set $I_t$ says nothing about the value of $x_{t+1}$, the distribution of $u_{t+1}|I_t$ is the same as the distribution of $x_{t+1}^2$. Thus, \begin{align}E_tu_{t+1} &=Ex_{t+1}^2\\ &=0^2/2+2^2/2\\ &=2\\ &\neq 0.\end{align} Thus, we have found a white noise process not satisfying $E_tu_{t+1}= 0$!
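A quick simulation makes the counterexample concrete. The sketch below (the variable names are mine, not the poster's) estimates the unconditional moments and $E[u_{t+1}\mid u_t=0]$ by Monte Carlo; the last value should come out near 2, not 0.

```python
import random

random.seed(0)
N = 10**6
x = [random.choice((0, 2)) for _ in range(N)]         # i.i.d., P(0)=P(2)=1/2
u = [x[t]**2 * (1 - x[t - 1]) for t in range(1, N)]   # u_t = x_t^2 (1 - x_{t-1})

# Unconditional moments: E[u_t] ~ 0 and E[u_t u_{t+1}] ~ 0, as in the proof.
mean_u = sum(u) / len(u)
lag1 = sum(a * b for a, b in zip(u, u[1:])) / (len(u) - 1)

# Conditional mean: E[u_{t+1} | u_t = 0] ~ 2, since u_t = 0 forces x_t = 0.
cond = [u[t + 1] for t in range(len(u) - 1) if u[t] == 0]
print(mean_u, lag1, sum(cond) / len(cond))
```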
2022-10-06 19:38:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 8, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9669171571731567, "perplexity": 160.5959952064101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00485.warc.gz"}
https://math.stackexchange.com/questions/1915999/integrating-a-l2-function-in-two-variables/1919696
# Integrating a $L^{2}$ function in two variables Consider for $T\in\mathbb{R}_{>0}$ the mapping $$G:\begin{cases} L^{2}((0,T)\times(0,1);\mathbb{R})&\rightarrow X\\ f&\mapsto \int_{0}^{t} f(s,h(s,t,x))\;\mathrm{d} s \end{cases}$$ for a given function $h:[0,T]\times[0,T]\times [0,1]\rightarrow [0,1]$ as smooth as you want. Obviously, we have $$X\subset L^{2}((0,T)\times (0,1);\mathbb{R}),$$ but due to the smoothing effect of the integral we would hope for a more regular $G(f)$. Can we have a set $X$ to which this $G(f)$ belongs for every $f\in L^{2}((0,T)\times(0,1);\mathbb{R})$? Since the integration is "simultaneous" in the two components, the question does not seem to be straightforward. This brings us to the related question: How smooth must $f$ be so that we obtain $G(f)\in L^{2}((0,T)\times(0,1))$? And of course we could just pick $f\in L^{2}$ as well, but we would like to get the most general setting... Any help or hint is highly appreciated. Thank you very much for reading, Alex EDIT: Thanks to zhw.'s comment, it seems I was too optimistic about how to present my question. Let us make another attempt. Consider the mapping $$\mathcal{G}:\begin{cases} L^{1}((0,T)\times(0,1))&\rightarrow L^{1}((0,T)\times (0,1))\\ f&\mapsto \begin{cases} \int_{0}^{t}f(y,y-t+x)\ \mathrm{d} y & \text{for }\ x\geq t\\ \int_{t-x}^{t}f(y,y-t+x)\ \mathrm{d} y& \text{for }\ x\leq t \end{cases} \end{cases}$$ Then $\mathcal{G}$ should be well defined. I would like to know if $\mathcal{G}(f)$, for a given $f$, is more regular than just $L^{1}$. Can we expect some weak differentiability, maybe at least some fractional weak derivative in one component? • I don't understand the question. First, you have $X$ at the beginning with no definition. What is it - the range of $G?$ And what is this "Can we have a set" business? – zhw. Sep 8 '16 at 20:18 Why is $Gf$ even well defined? Take $T=1.$ Let $h(s,t,x) = s^4.$ Define $f(s,t) = (st)^{-1/4}.$ Then $f\in L^2([0,1]^2).$ But for any $t \in (0,1),$ $$Gf(t,x) = \int_0^t f(s,h(s,t,x))\,ds = \int_0^t f(s,s^4)\,ds = \int_0^t s^{-5/4}\,ds = \infty.$$ • Thank you so much, this is true; I was too optimistic concerning what I need as conditions on $h$. Sorry for that inaccuracy, I will edit the question accordingly.
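The blow-up in zhw.'s counterexample is easy to see numerically as well. This is only an illustrative sketch (the cutoff $\varepsilon$ and the names are mine): the $L^2$ norm of $f$ is finite ($\|f\|_{L^2}^2 = (\int_0^1 s^{-1/2}\,ds)^2 = 4$), while the truncated inner integral $\int_\varepsilon^1 s^{-5/4}\,ds$ grows without bound as $\varepsilon\to 0$.

```python
# With h(s,t,x) = s^4 the integrand becomes f(s, s^4) = s^(-5/4),
# whose integral over (0, 1) diverges.  Antiderivative: -4 s^(-1/4).
for eps in (1e-2, 1e-4, 1e-6, 1e-8):
    truncated = 4.0 * (eps**-0.25 - 1.0)   # exact value of the integral over [eps, 1]
    print(f"eps = {eps:.0e}:  integral = {truncated:.1f}")
# The printed values grow like 4 eps^(-1/4): 8.6, 36.0, 122.5, 396.0 -> infinity.
```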
2021-09-19 06:59:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.903344988822937, "perplexity": 122.43649682122584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056752.16/warc/CC-MAIN-20210919065755-20210919095755-00038.warc.gz"}
https://math.stackexchange.com/questions/3198059/how-to-prove-that-x-cdot-y-neq-0-when-x-neq-0-and-y-neq0-via-field-axioms
# How to prove that $x\cdot y\neq 0$ when $x\neq 0$ and $y\neq0$ via field axioms? How can one prove that $x\cdot y\neq 0$ when $x\neq 0$ and $y\neq 0$, using only the field axioms? Among the field axioms is the commutativity of multiplication, $a\cdot b=b\cdot a$. Is that enough to disprove $x\cdot y=0$, hence proving that $x\cdot y\neq 0$? • Thank you for your response. To what "definition" are you referring? To the aforementioned commutativity? – Analysis Apr 23 at 10:00 • Sorry, in my definition of a field it's so that $F\setminus \{0\}$ is an Abelian group with respect to multiplication. You must have some other one. – Jakobian Apr 23 at 10:03 Suppose that $x\cdot y=0$ and $x\neq 0$. Then $\frac1x$ exists and $\frac1x\cdot x\cdot y=\frac1x\cdot 0$, which is equivalent to $y=0$. • Yeah, but since it is $\Longleftrightarrow$ I also have to prove that $(x=0)\vee (y=0) \Longrightarrow x\cdot y=0$. – Analysis Apr 23 at 10:12 • @Analysis ok, I understand... Just use the above. I had proven it in my answer, I just skipped the part that $\frac1x\cdot 0=0$ – Masacroso Apr 23 at 10:14 • I want to start like that: Suppose that $x\cdot y=0$ with $y\neq 0$ and $x\neq 0$ ..... and then disprove that statement – Analysis Apr 23 at 10:16 • @Analysis suppose you start like you want... Then from my answer you get the contradiction that $y=0$, so it is not possible that $x\cdot y=0$ and $x,y\neq 0$ hold at once; hence if $x\cdot y=0$ then "$x,y\neq 0$" must be false. – Masacroso Apr 23 at 10:17 If $x\neq0$, then it has an inverse. So$$y=1.y=(x^{-1}.x).y=x^{-1}.(x.y)=x^{-1}.0=0.$$Now, it remains to be proved from the field axioms that you always have $x.0=0$. • I have already proved $x\cdot 0=0$ and $-x=(-1)x$. Now I have to prove that $x\cdot y=0 \quad \Longleftrightarrow \quad (x=0)\vee (y=0)$. Proof by cases: I have proven the cases $x\neq 0$ and $y=0$ and vice versa. Now the third case: $x\neq 0$ and $y\neq 0$ – Analysis Apr 23 at 10:03 • Since you have already proved that you always have $x.0=0$, there is nothing else that you need to prove. The fact that you always have $x.0=0$ was all that was needed to complete my proof of the fact that $x\neq0\implies y=0$. – José Carlos Santos Apr 23 at 10:06
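For what it's worth, this argument can be formalized. Below is a minimal Lean 4 sketch, assuming a recent Mathlib is available; the lemma `mul_ne_zero` packages exactly the contrapositive argument given in the answers (a field has no zero divisors).

```lean
import Mathlib.Algebra.Field.Basic

-- In any field, the product of two nonzero elements is nonzero.
example {F : Type*} [Field F] {x y : F} (hx : x ≠ 0) (hy : y ≠ 0) :
    x * y ≠ 0 :=
  mul_ne_zero hx hy
```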
2019-08-24 11:39:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9032538533210754, "perplexity": 227.98933347152595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027320734.85/warc/CC-MAIN-20190824105853-20190824131853-00559.warc.gz"}
https://www.cuemath.com/probability-a-given-b-formula/
P(A/B) Formula P(A/B) is known as conditional probability and it means the probability of event A given that another event B has occurred. It is also read as "the probability of A given B". The P(A/B) formula is used to find this conditional probability quickly. What is P(A/B) Formula? The conditional probabilities P(A/B) and P(B/A) are informative mainly in the case of dependent events; for independent events they reduce to the unconditional probabilities (see the note below). • The P(A/B) formula is: P(A/B) = P(A∩B) / P(B), assuming P(B) > 0 • Similarly, the P(B/A) formula is: P(B/A) = P(A∩B) / P(A), assuming P(A) > 0 Here, P(A∩B) is the probability that both A and B happen. From these two formulas, we can derive the product formulas of probability. • P(A∩B) = P(A/B) × P(B) • P(A∩B) = P(B/A) × P(A) Note: If A and B are independent events, then P(A/B) = P(A) and P(B/A) = P(B). Solved Examples Using P(A/B) Formula When a fair die is rolled, what is the probability of A given B, where A is the event of getting an odd number and B is the event of getting a number less than or equal to 3? Solution: To find: P(A/B) using the given information. When a die is rolled, the sample space = {1, 2, 3, 4, 5, 6}. A is the event of getting an odd number. So A = {1, 3, 5}. B is the event of getting a number less than or equal to 3. So B = {1, 2, 3}. Then A∩B = {1, 3}. Using the P(A/B) formula: P(A/B) = P(A∩B) / P(B) $$P(A/B) = \dfrac{2/6}{3/6} = \dfrac 2 3$$ Two cards are drawn from a deck of 52 cards, where the first card is NOT replaced before drawing the second card. What is the probability that both cards are kings? Solution: To find: The probability that both cards are kings. P(card 1 is a king) = 4 / 52 (as there are 4 kings among the 52 cards). P(card 2 is a king / card 1 is a king) = 3 / 51 (as the first king is not replaced, there are 3 kings left among the remaining 51 cards). By the formula of conditional probability, P(card 1 is a king ∩ card 2 is a king) = P(card 2 is a king / card 1 is a king) × P(card 1 is a king) P(card 1 is a king ∩ card 2 is a king) = 3 / 51 × 4 / 52 = 1 / 221
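Both solved examples can be checked by brute-force enumeration. A short Python sketch (the set names are illustrative, not part of the original page):

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}            # one roll of a fair die
A = {s for s in sample_space if s % 2 == 1}  # odd number: {1, 3, 5}
B = {s for s in sample_space if s <= 3}      # at most 3:  {1, 2, 3}

P = lambda E: Fraction(len(E), len(sample_space))
print(P(A & B) / P(B))                       # P(A|B) = P(A∩B)/P(B) = 2/3

# Two kings without replacement: P(K1 ∩ K2) = P(K2|K1) * P(K1)
print(Fraction(3, 51) * Fraction(4, 52))     # 1/221
```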
2021-05-12 14:19:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6029915809631348, "perplexity": 571.0096423354896}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990929.24/warc/CC-MAIN-20210512131604-20210512161604-00464.warc.gz"}
https://developmentalsystem.wordpress.com/2019/12/09/methods/
# methods Method 1 Consider N currently identified alleles associated with the trait. Assume there are P alleles associated with the trait across the entire genome. If the PGS difference computed from those N alleles is $PGS_N$, then the extrapolated difference is $PGS_N \cdot \frac{P}{N}$ Benefits: • Simple to do • Finding numbers of associated alleles across the entire genome is not difficult Downsides: • Highly dependent on your choice of original alleles. • Depends significantly on the distribution of effect sizes in your original set of N alleles versus the undiscovered alleles $P \setminus N$ (e.g., it assumes that the mean effect size for the alleles in $P \setminus N$ is the same as the mean effect size for the alleles in N) Method 2 Consider N currently identified alleles associated with the trait. Assume these N alleles explain X% of the variance, but it has been estimated that Y% of the variance in the trait is explained by alleles across the entire genome. If the PGS difference computed from these N alleles is $PGS_N$, then we should extrapolate the total PGS difference from genes across the entire genome to be $PGS_N \cdot \frac{Y}{X}$. Benefits: • Doesn’t require mean effect sizes to be equal between discovered and undiscovered alleles Downsides: • Requires accurate estimation of total heritability and current explained heritability. Method 3 Consider K currently identified alleles with effect sizes following $\beta_K \sim \mathcal{N}(\mu, \sigma^2)$. Call the frequency difference between two populations (consistently ordered, e.g. always African frequency minus European frequency) at each allele $i$, $\digamma_i$. Then the contribution of each allele $i$ to the mean genotypic difference is $\digamma_i\beta_i$. In R, fit a GAM model for $\digamma_i\beta_i \sim \beta_i$. This should give you a good nonlinear fit of the expected contribution to the genotypic gap for an allele of a given effect size $\beta$. Call this predicted contribution $G(\beta)$. Then, compare your current distribution of effect sizes $\beta_K$ to the distribution of effect sizes you would actually expect once the whole genome is sequenced, $\beta_M$. Sum over the whole-genome distribution as such: $\sum_{\text{genome}} G(\beta)$ Benefits: • Doesn’t require assumptions about the distribution of effect sizes in current GWAS or in the actual set of associated alleles Downsides: • If there are a large number of large-effect alleles in your expected actual distribution, then the out-of-sample prediction can create wonky estimates, especially because the frequencies are going to be way off. • If you have a non-random subset of alleles with respect to any one given effect size, it can bias the estimates for the rest of the genome.
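To make Methods 1 and 2 concrete, here is a hedged Python sketch; every number below is a made-up placeholder, not a real GWAS value.

```python
# Hypothetical inputs -- placeholders only, chosen for illustration.
pgs_N = 0.30   # PGS difference computed from the N discovered alleles
N = 1_000      # number of currently identified trait-associated alleles
P = 10_000     # estimated trait-associated alleles genome-wide
X = 0.05       # fraction of variance explained by the N alleles
Y = 0.50       # estimated total variance explained genome-wide

method1 = pgs_N * (P / N)   # Method 1: scale by allele counts
method2 = pgs_N * (Y / X)   # Method 2: scale by explained variance
print(method1, method2)
```

Method 3 would additionally require fitting a smooth curve $G(\beta)$ to the per-allele contributions $\digamma_i\beta_i$ (the post suggests a GAM in R) and then summing $G$ over the expected genome-wide effect-size distribution; that step is omitted here.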
2020-05-30 08:33:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 17, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6323619484901428, "perplexity": 1640.507953778378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347407667.28/warc/CC-MAIN-20200530071741-20200530101741-00258.warc.gz"}
https://mail.oceanopticsbook.info/view/light-and-radiometry/the-nature-of-light
Page updated: July 7, 2022 Author: Curtis Mobley # The Nature of Light Writing this page is a no-win effort. No matter what I say about the “nature of light” or the question “What is a photon?”, there will be those who tell me (quite correctly!) that I am wrong. However, the question “What is light?” is perfectly legitimate and deserves discussion, even if, as will be seen, there is no answer in everyday, human, classical physics terms. This page is structured as a chronological “history of light” organized around the debate about whether light is a particle or a wave. Some of the milestones in our understanding of light warrant just a sentence or two; others will be discussed in some detail. ancient Egypt Light is “ocular fire” from the eye of the sun god Ra. ancient Greece Democritus (c. 500 BCE) speculated that everything, light and the soul included, is made of particles, which he called atoms. Other Greeks thought that light was rays that emanate from the eyes and return with information. I haven’t seen any explanation of how they explained why everyone’s eyes quit emanating rays when the sun went down, or when a person entered a dark cave. Somewhat later, Aristotle thought that “light is the activity of what is transparent.” In his view, light is a “form,” not a “substance.” I don’t know if he thought that things became nontransparent when it got dark at night. c. 1000 CE Ibn al-Haytham (965-1039; Latinized as Alhazen): Light is rays that travel in straight lines. Little known today in the West, Ibn al-Haytham was something of an Arab Isaac Newton. He wrote a seven-volume Book of Optics (as well as many other works on astronomy, mathematics, medicine, philosophy, and theology). He clearly understood the “scientific method” and he based his conclusions on observation and clever experiments, rather than on abstract reasoning. He disproved the Greek idea that light emanates from eyes, and he showed that light travels in straight lines. late 1600s Newton: Light is particles. Isaac Newton conducted a series of experiments in the late 1600’s which, among other things, showed that white light is a mixture of all colors. This directly contradicted Aristotle, who claimed that “pure” light (like the light from the Sun) is fundamentally colorless. Because he was able to separate white light into colors with a prism, and because light did not seem to travel around corners (as do sound waves), Newton concluded that light must be made of particles, which he called “corpuscles.” He published his results in his treatise Opticks in 1704. Newton’s particle explanation of refraction required light to travel faster in water than in air, and his explanation of “Newton’s rings” (easily explained by wave interference) was rather incoherent (pun intended). In spite of a few errors like these, Newton was a pretty good scientist and Opticks is one of the seminal books of science. 1676 Ole Rømer measured the speed of light by timing eclipses of Jupiter’s moon Io. 1678 Christiaan Huygens published the first credible wave theory of light. Part of his theory says that at each moment each point of an advancing wave front serves as a point source of secondary spherical waves emanating from that point. The position of the wave front at a later time is then the tangent surface of the secondary waves from each of the point sources. This is known as Huygens’ principle. 1803 Young: Light is waves. 
In 1803 Thomas Young conducted a classic experiment (published in 1807) in which coherent light was incident onto two narrow parallel slits in an opaque screen as illustrated in Fig. 1. According to Huygen’s principle, each slit is the source of secondary waves, which then interfere with each other as they propagate further. The light that passed through the slits formed an interference pattern on a viewing screen, just as do water waves passing through holes in a board. This is easily explained by assuming that light is a wave phenomenon. Young’s double-slit experiment was taken as conclusive proof that light is a wave and that Newton was wrong. As will be seen below, this conceptually simple double-slit experiment will reveal one of the most profound mysteries of light. 1819 Poisson, Fresnel, and Arago: Light is waves. In 1818, Augustin-Jean Fresnel presented a new theory of diffraction. Siméon Poisson, who favored Newton’s corpuscular theory of light, analyzed Fresnel’s equations and concluded that if they were correct, then the shadow of a sphere illuminated by a point light source would show a spot of light at the center of the shadow. Poisson considered this an absurd result, thereby disproving Fresnel’s assumption of the wave nature of light. However, when Dominique Arago did the experiment, the spot was there just as predicted, as seen in Fig. 2. Poisson conceded. This point of light is now known as Fresnel’s spot, Arago’s spot, or—no doubt to his chagrin—Poisson’s spot. Again, as in Young’s experiment, light must be understood as a wave. 1865 Maxwell: Light is electric and magnetic fields propagating as a wave. In 1864 James Clerk Maxwell published A Dynamical Theory of the Electromagnetic Field in which he tied together electric and magnetic fields via his famous equations. He then showed that each component of the electric and magnetic fields obeys a wave equation with a speed of propagation numerically equal to that of light. He concluded “This velocity is so nearly that of light that it seems we have strong reason to conclude that light itself (including radiant heat and other radiations) is an electromagnetic disturbance in the form of waves propagated through the electromagnetic field according to electromagnetic laws.” (This has to be one of the greatest sentences ever written.) Maxwell’s equations are discussed beginning at Maxwell’s Equations in Vacuo. late 1880s Hertz: Discovery of radio waves. Between 1886 and 1889 Heinrich Hertz conducted a series of experiments designed to test Maxwell’s predictions of propagating electromagnetic waves. In these experiments Hertz discovered what are now called radio waves, and he also discovered the photoelectric effect. Young’s double-slit experiment, Arago’s confirmation of the Fresnel diffraction predictions, and Hertz’s confirmation of Maxwell’s predictions of propagating electromagnetic waves were sufficient to convince everyone that light is a wave. Newton was clearly wrong, and the matter seemed settled once and for all. late 1800s Blackbody radiation. One of the final problems of late nineteenth century physics was to explain the spectral distribution of energy emitted by a “blackbody.” Attempts to do this using Maxwell’s concept of electromagnetic radiation led to the prediction that a blackbody would emit an infinite amount of energy as the frequency increased. This unphysical result was called “the ultraviolet catastrophe.” 1901 Planck: The idea that light is quantized. 
Max Planck was able to derive the formula for the blackbody radiation spectrum, but only if he assumed that light comes in discrete packages, or “quanta.” He had to assume that the energy $E$ of each light quantum is proportional to its frequency $\nu$ (equivalently, inversely proportional to its wavelength $\lambda$) according to $$E=h\nu =\frac{hc}{\lambda}\,.$$ The proportionality constant $h$, which occurs both in this equation and in Planck’s formula for the energy distribution of blackbody radiation, was a free parameter that was adjusted so that Planck’s blackbody spectrum would fit the measurements. The parameter $h$ is now called Planck’s constant and is one of the fundamental physical constants. Planck himself could not say why the radiation in the blackbody cavity should come in discrete pieces and, at the time, he thought that this assumption was perhaps just “a mathematical artifice.” Planck received the 1918 Nobel Prize in Physics for this work; the Nobel citation credits him with the “discovery of energy quanta.” His discovery was the beginning of modern physics. 1905 Einstein: showed that light really is quantized and is absorbed in discrete amounts. The photoelectric effect discovered by Hertz cannot be explained if light comes in continuous waves as proposed by Maxwell. Albert Einstein was able to explain the photoelectric effect by assuming that light does indeed come in discrete packages as hypothesized by Planck and that these quanta are absorbed (or emitted) “all at once,” rather than being “soaked up” bit by bit as a continuous light wave arrives at the surface of the photoelectric material. In other words, Einstein claimed that energy quanta were real physical quantities, and not just a mathematical trick. As he worded it in his 1905 paper, “Energy, during the propagation of a ray of light, is not continuously distributed over steadily increasing spaces, but it consists of a finite number of energy quanta localized at points in space, moving without dividing and capable of being absorbed or generated only as entities.” Einstein’s claim that light comes in discrete packets was not well received at the time because the wave theory of light was so well established and had been so successful in most applications—blackbody radiation and the photoelectric effect being the exceptions. In the same year, Einstein also published his famous paper presenting the special theory of relativity; an explanation of Brownian motion, which helped establish the reality of atoms at a time when many scientists still did not accept their existence; and a paper presenting the equivalence of mass and energy via his most famous equation, $E=mc^2$. His claims of light as particles, the mixing of time and space, matter as particles, and the equivalence of matter and energy would have relegated Einstein to the realm of crackpots had his radical ideas not been so successful in explaining so many physical phenomena. When he received the Nobel Prize in 1921, special relativity was still so controversial that the award was given to him for “his discovery of the law of the photoelectric effect.” 1923 Compton scattering: light is a particle. In 1923 Arthur Compton did an experiment in which he bombarded electrons with x-rays. Figure 3 shows the basic idea. 
Compton found that the incident x-ray wavelength $\lambda$ and the wavelength $\lambda^{\prime}$ of the scattered x-ray were related to the mass of the electron $m$ and the angle $\theta$ of the scattered x-ray from the initial direction by the formula $$\lambda^{\prime}-\lambda=\frac{h}{mc}\left(1-\cos\theta\right),$$ as seen in the figure. This formula is derived by assuming that the x-ray is a particle of zero rest mass and energy given by Planck’s formula $E=h\nu$, and then applying the equations for relativistic conservation of energy and momentum. Compton scattering cannot be explained by a wave theory of light. This result therefore was taken to be direct experimental evidence that light is a particle. Compton received the 1929 Nobel Prize in Physics for this work. 1926 The name “photon” was coined by the chemist G. N. Lewis to describe a hypothetical particle that transmitted energy from one atom to another. The word caught on as a name for Einstein’s quantum of energy, although that is not what Lewis intended. The 1910s to 1930s The development of quantum mechanics. These decades were a time of great excitement (and confusion!) in physics. On the one hand, there were convincing experiments showing that light is a wave: Young’s double-slit experiment, Fresnel’s spot, and Hertz’s discovery of electromagnetic waves just as predicted by Maxwell’s wave theory of light. On the other hand, there were equally convincing experiments showing that light must be a particle of some kind: the photoelectric effect, Compton scattering, and the need to assume light quantization in order to explain blackbody radiation. The physics vocabulary now began to include phrases such as “the wave-particle duality of light” (and, indeed, of all matter). The idea is that light has both wave and particle properties and that you detect one or the other depending on the type of measurement being made. That is, if you set up an experiment that is designed to detect wave properties (e.g., a double-slit apparatus), then you will detect light as a wave. But if you set up an experiment that absorbs or emits light (e.g., the photoelectric effect), then you will detect it as discrete quanta or particles. You will also see statements such as • Light behaves as a wave at macroscopic scales (e.g., in a laboratory interference experiment), but it behaves as a particle at atomic scales (e.g., in Compton scattering). • Light behaves as a wave at low energies (no radio engineer ever talks about radio photons, just radio waves), but it behaves as a particle at high energies (those working with gamma rays always talk about gamma-ray photons, never gamma-ray waves). • Light propagates as a wave (according to Maxwell’s equations), but it interacts with matter as a particle (e.g, in the photoelectric effect or in Compton scattering). There is an element of truth to all of these statements, but they also all oversimplify the true nature of light by forcing it to fit into classical categories of wave or particle. During these decades the great physicists Bohr, Schrödinger, Heisenberg, Pauli, Dirac and many others developed an entirely new kind of physics—quantum mechanics—to describe the internal workings of atoms. This quantum mechanics is a theory of how matter behaves at the atomic scale. Energy levels in atoms and molecules are quantized, and atoms and molecules therefore absorb and emit energy only at specific frequencies determined by differences in the quantized energy levels of each kind of atom or molecule. 
Electromagnetic radiation was thus absorbed or emitted only at the discrete frequencies determined by the quantized energy levels of matter, but the radiation itself did not need to be treated as inherently quantized. A good layman’s history of this era is Thirty Years that Shook Physics by George Gamow. 1946 An unexpected result in the Hydrogen spectrum. Willis Lamb and Robert Retherford measured an extremely small difference in the energies of the $2S_{1/2}$ and $2P_{1/2}$ states of the Hydrogen atom (see the page on the Physics of Absorption for a discussion of energy levels and this notation). This energy difference corresponds to a wavelength of about 30 cm, which is in the microwave region of the electromagnetic spectrum. Now known as “the Lamb shift”, this difference in energy levels could not be explained either by the non-relativistic quantum mechanics of Schrödinger and Heisenberg, or by the relativistic quantum mechanics developed by Dirac. This experiment was one of the driving forces behind the development of quantum electrodynamics. Lamb received the Nobel Prize in 1955 “for his discoveries concerning the fine structure of the hydrogen spectrum.” late 1940s The development of Quantum Electrodynamics (QED). In QED, light is particles, but very strange particles they are. In part to explain the Lamb shift, Richard Feynman, Julian Schwinger, Shinichiro Tomonaga, and several others developed what is now known as Quantum Electrodynamics or QED. Feynman, Schwinger, and Tomonaga shared the 1965 Nobel Prize in Physics for their development of QED. In this theory, the electromagnetic field itself is quantized. Moreover, in QED the electric field of an electron, for example, is caused by the electron emitting and reabsorbing enormous numbers of energy quanta, which are called virtual photons. These photons are called virtual photons because they are associated with undetectable energy states of the electron-photon system. The Heisenberg uncertainty principle can be written as $\Delta t\,\Delta E\ge h/(4\pi)$. To detect an energy change of size $\Delta E$, you must observe the system for a time $\Delta t$ that is greater than $h/(4\pi \Delta E)$. The emission of a virtual photon of energy $\Delta E$ by an electron violates conservation of energy, but this is allowed by the Heisenberg uncertainty principle so long as the virtual photon is reabsorbed within a time of $\Delta t \le h/(4\pi \Delta E)$. That is, you can violate conservation of energy so long as you don’t do it long enough to get caught. Or in another view, you are not really violating conservation of energy if the violation isn’t observable. A photon can travel a distance $c\,\Delta t$ before it is reabsorbed by the electron. Thus low energy virtual photons with a small $\Delta E$ can live longer and “reach out” farther from the electron before reabsorption than higher energy virtual photons, which exist for a shorter time. This gives rise to the $1/\mathrm{distance}^2$ strength of the classical electric field as seen in Coulomb’s law. Similarly, a photon can turn into an electron-positron pair so long as the electron and positron reunite before the time limit imposed by Heisenberg’s relation. Note that the emission of virtual photons is a different process than the emission of photons when an atom changes energy levels and emits a photon with energy equal to the difference of the atomic energy levels. 
In that case, there is no energy violation, Heisenberg’s relation does not come into play, and the emitted photon can live forever (or until it is absorbed by a different atom somewhere else). In QED, a charged particle is surrounded by a cloud of virtual photons, which are constantly flickering in to and out of existence. When two charged particles approach each other, some of these virtual photons are exchanged between the particles, which is what gives the repulsive or attractive force between the particles. The electric force is said to be “mediated” (i.e., transmitted) by the exchange of virtual photons. (In modern physics, all “action at a distance” forces (except perhaps gravity) are mediated by some type of particle. For electric forces the particles are photons.) Indeed, even “empty space” is seething with virtual particles that come into existence and then promptly disappear. QED is able to explain the Lamb shift because the $2S_{1/2}$ and $2P_{1/2}$ states interact slightly differently with the cloud of virtual photons surrounding the Hydrogen nucleus and electron. This quantitative explanation of the Lamb shift was one of the first tests of QED. There have been many more since, and some of QED’s predictions agree with experiment to within about one part in $10^{12}$. QED is therefore considered the most successful and well tested theory in physics, and it is the starting point for all of elementary particle physics. late 20th century The Fundamental Mystery: single-photon interference in a double-slit experiment. The interference patterns seen in Young’s double-slit experiment can be understood in terms of classical wave theory when the incident light is a coherent wave. But what happens if only one photon at a time is incident onto the double-slit screen? Amazingly, you still get the same interference pattern! It takes longer to build up the pattern one photon at a time, but after many photons have been detected (e.g., on a CCD array), the pattern becomes obvious. There is an excellent video of a 1981 verification of this performed at Hamamatsu Photonics K.K., a company that makes optical sensors and instruments. This video is on the Hamamatsu web site and also on Youtube at https://www.youtube.com/watch?v=I9Ab8BLW3kA. This video is well worth ten minutes of your time. It also shows how the experiment was actually conducted. (I don’t know who first did an experiment like the one on the Hamamatsu web site. However, in 1909 G. I. Taylor conducted an experiment in which he showed that a very faint light source, equivalent to “a candle burning at a distance slightly exceeding a mile,” gave interference fringes.) Figure 4 shows four frames from the Hamamatsu video. In Fig. 4(a), photons have been collected one at a time for 3 minutes. Only about two dozen photons have been detected, and the pattern of dots, showing where each photon was detected on the detector screen (as illustrated in Fig. 1), appears to be random. A few minutes later (panel b) more photons have been detected, but no pattern is yet obvious. However, after 25 minutes (panel c) enough photons have been detected that an interference pattern is taking shape. After 6 hours, the pattern of individual photon detection locations clearly shows exactly the same interference pattern as is obtained for a bright source of coherent monochromatic light. 
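The photon-by-photon buildup seen in Fig. 4 is easy to mimic numerically. The sketch below is only a toy model: it assumes an ideal far-field two-slit fringe pattern $I(x)\propto\cos^2(\pi d x/(\lambda L))$ and simply draws independent detection positions from that distribution; the wavelength, slit separation, and screen distance are arbitrary choices of mine, not the Hamamatsu values.

```python
import random, math

random.seed(1)
lam, d, L = 633e-9, 50e-6, 1.0     # wavelength (m), slit separation (m), screen distance (m)
half_width = 50e-3                 # look at +/- 5 cm on the detector

def intensity(x):                  # ideal two-slit fringes (diffraction envelope ignored)
    return math.cos(math.pi * d * x / (lam * L)) ** 2

def detect_one_photon():
    """Rejection sampling: each 'photon' lands with probability proportional to intensity(x)."""
    while True:
        x = random.uniform(-half_width, half_width)
        if random.random() < intensity(x):
            return x

hits = [detect_one_photon() for _ in range(20000)]

# Crude text histogram: the fringes emerge only after many photons are collected.
bins = 60
counts = [0] * bins
for x in hits:
    counts[int((x + half_width) / (2 * half_width) * bins) % bins] += 1
for i, c in enumerate(counts):
    print(f"{(i / bins - 0.5) * 2 * half_width * 1e3:+6.1f} mm  " + "#" * (c // 20))
```

With a small number of samples the positions look random, just as in Fig. 4(a); the cos² fringes (spacing λL/d ≈ 12.7 mm for these made-up numbers) only become visible after thousands of detections.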
In classical wave theory (such as in Young’s original experiment), part of the incident wave passes through each slit, and each slit then becomes a point source for waves that propagate further (Huygen’s principle) and interfere with each other as illustrated in Fig. 1. The fact that single photons also show interference patterns after enough are collected implies that the individual photons must also simultaneously pass through both slits and then interfere with themselves! Indeed, if you modify the experiment so that the photon can pass through just one slit, or that you can in some way even know which slit it went through, then the interference pattern disappears. The fact that single photons show an interference pattern is so surprising and incomprehensible from the viewpoint of classical physics, that the great physicist and teacher Richard Feynman calls this “The Fundamental Mystery” (of quantum mechanics). There is no explanation for this other than to say “this is just how photons behave.” There are two utterly profound consequences of single-photon interference: • We are forced to abandon the idea that photons are localized particles in the classical sense of having a well-defined (even small) size. A localized particle could not pass through both slits at the same time and then interfere with itself. • We are forced to abandon the idea that photons take a particular path from one point to another. In QED calculations (using so-called Feynman path integrals), a photon simultaneously takes all possible paths from one point to another. Only after all of the calculations are done for all possible paths and the results for the different paths are combined does the final result look like the classical idea of a light ray traveling from one point to the next by a single path. For a photon, concepts like size, position, and path are undefined and meaningless. All you can say is that a photon was created at point A (e.g., at a spot on the surface of a tungsten filament in a light bulb) and it was detected at point B (e.g., at a particular pixel of a CCD array). You can say nothing about the path it took from A to B. (In quantum mechanics, there is no position operator for photons, as there is for material particles like electrons. Instead, photons have creation and annihilation operators which create and destroy them.) To make matters even more mysterious, material particles such as electrons and atoms also display the same interference behavior as light in a double slit apparatus. Single electron interference was first demonstrated in 1989 (Tonomura et al. (1989)). The interference patterns in their experiment look exactly like the ones in Fig. 4, except that the points show where the individual electrons were detected rather than where individual photons were detected. This experiment has since been repeated with molecules of more than 800 atoms and molecular weights greater than $1{0}^{4}$ amu (Eibenberger et al. (2013)). These experiments are strong verifications of the correctness of quantum mechanics as currently formulated. Feynman wrote a delightful and highly recommended book, QED: The Strange Story of Light and Matter (Feynman (1985)). This book explains, as only Feynman can, the fundamental ideas of QED without the math. He clearly considers light to be particles. 
Feynman wrote a delightful and highly recommended book, QED: The Strange Theory of Light and Matter (Feynman (1985)). This book explains, as only Feynman can, the fundamental ideas of QED without the math. He clearly considers light to be particles. For example, (on page 15 of my edition) he states "It is very important to know that light behaves like particles, especially for those of you who have gone to school, where you were probably told something about light behaving like waves. I'm telling you the way it does behave—like particles." However, he also shows that these mysterious particles actually do take all possible paths from one point to another. Thus a photon goes through both slits because each slit is a possible path from the point of the photon's creation to the point where it is detected. (If you want to see the mathematical horrors of how QED calculations are performed, the best book I've found is Introduction to Elementary Particles by David Griffiths (Griffiths (2008)). However, that book presumes you have spent some serious years in physics and math classes.) In 1979 Feynman also delivered a series of non-technical lectures on QED at the University of Auckland, which were the origin of his QED book. These are well worth viewing and are on-line in various places, e.g., at http://www.vega.org.uk/video/programme/45.

Most physicists today seem quite happy talking about photons as one member of the pantheon of "elementary particles." In their language, photons are zero-rest-mass, stable, spin-one bosons, which always travel at the speed of light and have energy $E = hc/\lambda$, momentum of magnitude $p = h/\lambda$, and angular momentum of magnitude $\ell = h/(2\pi)$.
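A quick numerical illustration of those formulas (the constants are the exact SI values; the 500 nm wavelength is just an example):

```python
h  = 6.62607015e-34    # Planck constant, J*s
c  = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19   # joules per electron-volt

wavelength = 500e-9    # m, green light (example value)
E = h * c / wavelength # photon energy,   E = hc/lambda
p = h / wavelength     # photon momentum, p = h/lambda

print(f"E = {E:.3e} J = {E / eV:.2f} eV")   # ~3.97e-19 J, ~2.48 eV
print(f"p = {p:.3e} kg m/s")                # ~1.33e-27 kg m/s
```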
However, some people view things differently. Willis Lamb, of Lamb shift fame, wrote a paper "Anti-photon" (Lamb (1995)) in which he states "In his [the author Lamb's] view, there is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists and optical scientists. There are good substitute words for 'photon', (e.g., 'radiation' or 'light')...." In closing he says, "It is high time to give up the use of the word 'photon', and of a bad concept which will shortly be a century old. Radiation does not consist of particles, and the classical, i.e., non-quantum, limit of QTR [the quantum theory of radiation, or QED] is described by Maxwell's equations for the electromagnetic fields, which do not involve particles." Lamb's paper is worth reading, and he makes some valid points. However, I'm afraid his battle to banish the word "photon" is lost. It is just too convenient. Biologists are going to continue to measure the light available for photosynthesis in units of photons per square meter per second. One einstein is going to retain its definition as "one mole of photons." Optica (previously The Optical Society of America) is going to continue to publish Optics & Photonics. (Indeed, that magazine devoted the entire issue of October 2003 to six articles on the topic of "What is a Photon?") It certainly would have been fun to get Lamb and Feynman together in a room and watch them argue about the reality or non-reality of photons. Perhaps the greatest danger inherent in the use of the word "photon" is that it makes it easy to think of light as little balls of energy behaving like particles in the every-day sense, which simply is not correct, as we have seen above.

My own concession to Lamb and to the inability to say that a photon takes a particular path from point A to point B is that my Monte Carlo codes no longer "trace photons;" they now "trace rays." The concept of a light ray is well accepted in the limit of geometrical optics, and all camera lenses are designed with sophisticated ray-tracing codes that give perfectly good predictions of what light does, as do my Monte Carlo codes. My Monte Carlo calculations remain unchanged; I'm just more careful in describing what they do.

The present day

Enough has been said. The above discussion has reviewed the long and confused history of ideas about the nature of light. Our understanding of light has gone through "It's a particle."; "No, it's a wave."; "No, it's simultaneously both a particle and a wave."; and finally "It's neither a particle nor a wave; it's something much more mysterious." This whole business reminds me of Nagarjuna's Tetralemma in Buddhist philosophy. In Western philosophy we think of a statement as being either true or false. But the Buddhist philosopher Nagarjuna (c. 100 CE) posited the tetralemma that a statement can be true, or it can be false, or it can be both true and false at the same time, or it can be neither true nor false.

Planck at the close of his 1918 Nobel Prize lecture raised a fundamental question:

What becomes of the energy of a photon after complete emission? Does it spread out in all directions with further propagation in the sense of Huygens' wave theory, so constantly taking up more space, in boundless progressive attenuation? Or does it fly out like a projectile in one direction in the sense of Newton's emanation theory? In the first case, the quantum would no longer be in the position to concentrate energy upon a single point in space in such a way as to release an electron from its atomic bond, and in the second case, the main triumph of the Maxwell theory—the continuity between the static and the dynamic fields and, with it, the complete understanding we have enjoyed, until now, of the fully investigated interference phenomena—would have to be sacrificed, both being very unhappy consequences for today's theoreticians. Be that as it may, in any case no doubt can arise that science will master the dilemma, serious as it is, and that which appears today so unsatisfactory will in fact eventually, seen from a higher vantage point, be distinguished by its special harmony and simplicity. Until this aim is achieved, the problem of the quantum of action will not cease to inspire research and fructify it, and the greater the difficulties which oppose its solution, the more significant it finally will show itself to be for the broadening and deepening of our whole knowledge in physics.

As Planck predicted, science has learned much more about the nature of light and the role it plays in the universe, and the mystery of how photons behave continues to deepen (just Google "photon entanglement"). The word "photon" has itself evolved to mean different things to different people, as reviewed by Kidd et al. (1989). In any case, we still cannot say what light or a photon is in everyday language. We can only describe what it does. This situation is really no different from that of the electron. No one has any idea or model of what an electron "really is," but that does not prevent electrical engineers from using the known properties of electrons to light our homes and run our computers.
I’ll close this review with two quotes: All the fifty years of conscious brooding have brought me no closer to the answer to the question: What are light quanta? Of course today every rascal thinks he knows the answer, but he is deluding himself. —Albert Einstein, quoted in Zajonc (2003) No one knows what a photon is, and it’s best not to think about it. —Attributed to Richard Feynman
2022-11-30 23:25:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 30, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5990162491798401, "perplexity": 728.6281692854104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710777.20/warc/CC-MAIN-20221130225142-20221201015142-00417.warc.gz"}
https://en.wikipedia.org/wiki/600-cell
600-cell

600-cell: Schlegel diagram, vertex-centered (vertices and edges)

Type: Convex regular 4-polytope
Schläfli symbol: {3,3,5}
Cells: 600 (3.3.3)
Faces: 1200 {3}
Edges: 720
Vertices: 120
Vertex figure: icosahedron
Petrie polygon: 30-gon
Coxeter group: H4, [3,3,5], order 14400
Dual: 120-cell
Properties: convex, isogonal, isotoxal, isohedral
Uniform index: 35

In geometry, the 600-cell is the convex regular 4-polytope (four-dimensional analogue of a Platonic solid) with Schläfli symbol {3,3,5}. It is also known as the C600, hexacosichoron[1] and hexacosihedroid.[2] It is also called a tetraplex (abbreviated from "tetrahedral complex") and a polytetrahedron, being bounded by tetrahedral cells.

The 600-cell's boundary is composed of 600 tetrahedral cells with 20 meeting at each vertex.[a] Together they form 1200 triangular faces, 720 edges, and 120 vertices. It is the 4-dimensional analogue of the icosahedron, since it has five tetrahedra meeting at every edge, just as the icosahedron has five triangles meeting at every vertex. Its dual polytope is the 120-cell, with which it can form a compound.

Geometry

The 600-cell is the fifth in the sequence of 6 convex regular 4-polytopes (in order of size and complexity).[b] It can be deconstructed into twenty-five overlapping instances of its immediate predecessor the 24-cell,[4] as the 24-cell can be deconstructed into three overlapping instances of its predecessor the tesseract (8-cell), and the 8-cell can be deconstructed into two overlapping instances of its predecessor the 16-cell.[5] The reverse procedure to construct each of these from an instance of its predecessor preserves the radius of the predecessor, but generally produces a successor with a smaller edge length.[c] The 24-cell's edge length equals its radius, but the 600-cell's edge length is ~0.618 times its radius. The 600-cell's radius and edge length are in the golden ratio.

Coordinates

The vertices of a 600-cell of unit radius centered at the origin of 4-space, with edges of length 1/φ ≈ 0.618 (where φ = (1 + √5)/2 ≈ 1.618 is the golden ratio), can be given[6] as follows: 8 vertices obtained from

(0, 0, 0, ±1)

by permuting coordinates, and 16 vertices of the form:

(±1/2, ±1/2, ±1/2, ±1/2)

The remaining 96 vertices are obtained by taking even permutations of

(±φ/2, ±1/2, ±φ⁻¹/2, 0)

Note that the first 8 are the vertices of a 16-cell, the second 16 are the vertices of a tesseract, and those 24 vertices together are the vertices of a 24-cell. The remaining 96 vertices are the vertices of a snub 24-cell, which can be found by partitioning each of the 96 edges of another 24-cell (dual to the first) in the golden ratio in a consistent manner.[7] When interpreted as quaternions, these are the unit icosians.
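These coordinates are easy to check numerically. The sketch below (verification code written against the numbers just given, not taken from any cited source) generates all 120 vertices, confirms that each has unit radius, and counts the 720 edges of length 1/φ:

```python
from itertools import permutations, product
from math import sqrt, isclose

phi = (1 + sqrt(5)) / 2                     # golden ratio

def is_even(perm):
    """True if a permutation (tuple of indices) has even parity."""
    inversions = sum(perm[i] > perm[j]
                     for i in range(4) for j in range(i + 1, 4))
    return inversions % 2 == 0

verts = set()
# 8 vertices: permutations of (0, 0, 0, +-1)
for i in range(4):
    for s in (1.0, -1.0):
        v = [0.0, 0.0, 0.0, 0.0]
        v[i] = s
        verts.add(tuple(v))
# 16 vertices: (+-1/2, +-1/2, +-1/2, +-1/2)
verts.update(product((0.5, -0.5), repeat=4))
# 96 vertices: even permutations of (+-phi/2, +-1/2, +-1/(2 phi), 0)
for signs in product((1, -1), repeat=3):
    pattern = (signs[0] * phi / 2, signs[1] * 0.5, signs[2] / (2 * phi), 0.0)
    for perm in filter(is_even, permutations(range(4))):
        v = [0.0] * 4
        for axis, value in zip(perm, pattern):
            v[axis] = value
        verts.add(tuple(round(c, 12) for c in v))

verts = sorted(verts)
print(len(verts))                           # 120
assert all(isclose(sum(c * c for c in v), 1.0) for v in verts)

edge = 1 / phi                              # 0.618..., the edge length
n_edges = sum(isclose(sqrt(sum((a - b) ** 2 for a, b in zip(u, w))), edge)
              for i, u in enumerate(verts) for w in verts[i + 1:])
print(n_edges)                              # 720
```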
In the 24-cell, there are squares, hexagons and triangles that lie on great circles (in central planes through four or six vertices).[d] In the 600-cell there are twenty-five overlapping inscribed 24-cells, with each square unique to one 24-cell, each hexagon or triangle shared by two 24-cells, and each vertex shared among five 24-cells.[f]

Hopf spherical coordinates

In the 600-cell there are also great circle pentagons and decagons (in central planes through ten vertices).[g] Only the decagon edges are visible elements of the 600-cell (because they are the edges of the 600-cell). The edges of the other great circle polygons are interior chords of the 600-cell, which are not shown in any of the 600-cell renderings in this article.

By symmetry, an equal number of polygons of each kind pass through each vertex; so it is possible to account for all 120 vertices as the intersection of a set of central polygons of only one kind: decagons, hexagons, pentagons, squares, or triangles. For example, the 120 vertices can be seen as the vertices of 15 pairs of completely orthogonal squares which do not share any vertices, or as 100 dual pairs of non-orthogonal hexagons between which all axis pairs are orthogonal, or as 144 non-orthogonal pentagons, six of which intersect at each vertex. This latter pentagonal symmetry of the 600-cell is captured by the set of Hopf coordinates[j] (𝜉i, 𝜂, 𝜉j) given as:

({<10}𝜋/5, {≤5}𝜋/10, {<10}𝜋/5)

where {<10} is the permutation of the ten digits (0 1 2 3 4 5 6 7 8 9) and {≤5} is the permutation of the six digits (0 1 2 3 4 5). The 𝜉i and 𝜉j coordinates range over the 10 vertices of great circle decagons; even and odd digits label the vertices of the two great circle pentagons inscribed in each decagon.[k]
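A small sketch of the Hopf-to-Cartesian conversion, using the formulas quoted in note [i] below (the sample angles are arbitrary):

```python
from math import sin, cos, pi, isclose

def hopf_to_cartesian(xi_i, eta, xi_j):
    """Unit-radius Cartesian (w, x, y, z) from Hopf angles, per note [i]."""
    return (cos(xi_i) * sin(eta),
            cos(xi_j) * cos(eta),
            sin(xi_j) * cos(eta),
            sin(xi_i) * sin(eta))

# Any Hopf triple lands on the unit 3-sphere:
p = hopf_to_cartesian(3 * pi / 5, 2 * pi / 10, 7 * pi / 5)
assert isclose(sum(c * c for c in p), 1.0)

# The "Hopf north pole" (0, 0, 0) maps to Cartesian (0, 1, 0, 0):
print(hopf_to_cartesian(0.0, 0.0, 0.0))
```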
Structure

Polyhedral sections

The mutual distances of the vertices, measured in degrees of arc on the circumscribed hypersphere, only have the values 36° = 𝜋/5, 60° = 𝜋/3, 72° = 2𝜋/5, 90° = 𝜋/2, 108° = 3𝜋/5, 120° = 2𝜋/3, 144° = 4𝜋/5, and 180° = 𝜋. Departing from an arbitrary vertex V one has at 36° and 144° the 12 vertices of an icosahedron,[a] at 60° and 120° the 20 vertices of a dodecahedron, at 72° and 108° the 12 vertices of a larger icosahedron, at 90° the 30 vertices of an icosidodecahedron, and finally at 180° the antipodal vertex of V.[10] These can be seen in the H3 Coxeter plane projections with overlapping vertices colored.[11][12]

These polyhedral sections are solids in the sense that they are 3-dimensional, but of course all of their vertices lie on the surface of the 600-cell (they are hollow, not solid). Each polyhedron lies in Euclidean 4-dimensional space as a parallel cross section through the 600-cell (a hyperplane). In the curved 3-dimensional space of the 600-cell's boundary envelope, the polyhedron surrounds the vertex V the way it surrounds its own center. But its own center is in the interior of the 600-cell, not on its surface. V is not actually at the center of the polyhedron, because it is displaced outward from that hyperplane in the fourth dimension, to the surface of the 600-cell. Thus V is the apex of a 4-pyramid based on the polyhedron.

Vertex chords

Vertex geometry of the 600-cell, showing the 5 regular great circle polygons and the 8 vertex-to-vertex chord lengths[d] with angles of arc. The golden ratio[l] governs the fractional roots of every other chord,[m] and the radial golden triangles[n] which meet at the center.

The 120 vertices are distributed[13] at eight different chord lengths from each other. These edges and chords of the 600-cell are simply the edges and chords of its five great circle polygons.[14] In ascending order of length, they are √0.𝚫, √1, √1.𝚫, √2, √2.𝚽, √3, √3.𝚽, and √4.[o] Notice that the four hypercubic chords of the 24-cell (√1, √2, √3, √4) alternate with the four new chords of the 600-cell's additional great circles, the decagons and pentagons. The new chord lengths are necessarily square roots of fractions, but very special fractions related to the golden ratio[l] including the two golden sections of √5, as shown in the diagram.[m]

Geodesics

The vertex chords of the 600-cell are arranged in geodesic great circle polygons of five kinds: decagons, hexagons, pentagons, squares, and triangles.[15]

The √0.𝚫 = 𝚽 edges form 72 flat regular central decagons, 6 of which cross at each vertex.[a] Just as the icosidodecahedron can be partitioned into 6 central decagons (60 edges = 6 × 10), the 600-cell can be partitioned into 72 decagons (720 edges = 72 × 10). The 720 √0.𝚫 edges divide the surface into 1200 triangular faces and 600 tetrahedral cells: a 600-cell. The 720 edges occur in 360 parallel pairs, √3.𝚽 apart. As in the decagon and the icosidodecahedron, the edges occur in golden triangles[q] which meet at the center of the polytope.[n] The 72 great decagons can be divided into 6 sets of 12 non-intersecting Clifford parallel geodesics, such that only one decagon in each set passes through each vertex, and the 12 decagons in each set reach all 120 vertices.[16]

The √1 chords form 200 central hexagons (25 sets of 16, with each hexagon in two sets),[e] 10 of which cross at each vertex[r] (4 from each of five 24-cells, with each hexagon in two of the 24-cells). Each set of 16 hexagons consists of the 96 edges and 24 vertices of one of the 25 overlapping inscribed 24-cells. The √1 chords join vertices which are two √0.𝚫 edges apart. Each √1 chord is the long diameter of a face-bonded pair of tetrahedral cells (a triangular bipyramid), and passes through the center of the shared face. As there are 1200 faces, there are 1200 √1 chords, in 600 parallel pairs, √3 apart. The hexagonal planes are non-orthogonal (60 degrees apart) but they occur as 100 dual pairs in which all 3 axes of one hexagon are orthogonal to all 3 axes of its dual.[17] The 200 great hexagons can be divided into 10 sets of 20 non-intersecting Clifford parallel geodesics, such that only one hexagon in each set passes through each vertex, and the 20 hexagons in each set reach all 120 vertices.[18]

The √1.𝚫 chords form 144 central pentagons, 6 of which cross at each vertex.[g] The √1.𝚫 chords run vertex-to-every-second-vertex in the same planes as the 72 decagons: two pentagons are inscribed in each decagon. The √1.𝚫 chords join vertices which are two √0.𝚫 edges apart on a geodesic great circle. The 720 √1.𝚫 chords occur in 360 parallel pairs, √2.𝚽 = φ apart.

The √2 chords form 450 central squares (25 disjoint sets of 18), 15 of which cross at each vertex (3 from each of five 24-cells). Each set of 18 squares consists of the 72 √2 edges and 24 vertices of one of the 25 overlapping inscribed 24-cells. The √2 chords join vertices which are three √0.𝚫 edges apart (and two √1 chords apart). Each √2 chord is the long diameter of an octahedral cell in just one 24-cell. There are 1800 √2 chords, in 900 parallel pairs, √2 apart. The 450 great squares can be divided into 15 sets of 30 non-intersecting Clifford parallel geodesics, such that only one square in each set passes through each vertex, and the 30 squares in each set reach all 120 vertices.[19]

The √2.𝚽 = φ chords form the legs of 720 central isosceles triangles (72 sets of 10 inscribed in each decagon), 6 of which cross at each vertex. The third edge (base) of each isosceles triangle is of length √3.𝚽.
The √2.𝚽 chords run vertex-to-every-third-vertex in the same planes as the 72 decagons, joining vertices which are three √0.𝚫 edges apart on a geodesic great circle. There are 720 distinct √2.𝚽 chords, in 360 parallel pairs, √1.𝚫 apart.

The √3 chords form 400 equilateral central triangles (25 sets of 32, with each triangle in two sets), 10 of which cross at each vertex (4 from each of five 24-cells, with each triangle in two of the 24-cells). Each set of 32 triangles consists of the 96 √3 chords and 24 vertices of one of the 25 overlapping inscribed 24-cells. The √3 chords run vertex-to-every-second-vertex in the same planes as the 200 hexagons: two triangles are inscribed in each hexagon. The √3 chords join vertices which are four √0.𝚫 edges apart (and two √1 chords apart on a geodesic great circle). Each √3 chord is the long diameter of two cubic cells in the same 24-cell.[s] There are 1200 √3 chords, in 600 parallel pairs, √1 apart.

The √3.𝚽 chords (the diagonals of the pentagons) form the legs of 720 central isosceles triangles (144 sets of 5 inscribed in each pentagon), 6 of which cross at each vertex. The third edge (base) of each isosceles triangle is an edge of the pentagon of length √1.𝚫, so these are golden triangles.[q] The √3.𝚽 chords run vertex-to-every-fourth-vertex in the same planes as the 72 decagons, joining vertices which are four √0.𝚫 edges apart on a geodesic great circle. There are 720 distinct √3.𝚽 chords, in 360 parallel pairs, √0.𝚫 apart.

The √4 chords occur as 60 long diameters (75 sets of 4 orthogonal axes), the 120 long radii of the 600-cell. The √4 chords join opposite vertices which are five √0.𝚫 edges apart on a geodesic great circle. There are 25 distinct but overlapping sets of 12 diameters, each comprising one of the 25 inscribed 24-cells.[t]

The sum of the squared lengths[u] of all these distinct chords of the 600-cell is 14,400 = 120².[v] These are all the geodesics through vertices, but the 600-cell does have at least one other geodesic that does not pass through any vertices.[w]
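Continuing the numerical sketch from the Coordinates section (reusing its `verts` list), a census of the squared chord lengths over all vertex pairs reproduces the tallies above, and their total is 14,400 = 120²:

```python
from collections import Counter

census = Counter()
total_sq = 0.0
for i, u in enumerate(verts):
    for w in verts[i + 1:]:
        d2 = round(sum((a - b) ** 2 for a, b in zip(u, w)), 9)
        census[d2] += 1
        total_sq += d2

for d2, n in sorted(census.items()):
    print(f"squared chord {d2:.3f}: {n} pairs")
# 0.382: 720    1.000: 1200   1.382: 720    2.000: 1800
# 2.618: 720    3.000: 1200   3.618: 720    4.000: 60
print(round(total_sq))        # 14400 = 120**2
```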
Boundary envelopes

The 600-cell rounds out the 24-cell by adding 96 more vertices between the 24-cell's existing 24 vertices, in effect adding twenty-four more overlapping 24-cells inscribed in the 600-cell.[x] The new surface thus formed is a tessellation of smaller, more numerous cells[y] and faces: tetrahedra of edge length 1/φ ≈ 0.618 instead of octahedra of edge length 1. It encloses the √1 edges of the 24-cells, which become interior chords in the 600-cell, like the √2 and √3 chords.

Since the tetrahedra are made of shorter triangle edges than the octahedra (by a factor of 1/φ, the inverse golden ratio), the 600-cell does not have unit edge-length in a unit-radius coordinate system the way the 24-cell and the tesseract do; unlike those two, the 600-cell is not radially equilateral. Like them it is radially triangular in a special way, but one in which golden triangles rather than equilateral triangles meet at the center.[n] The boundary envelope of 600 small tetrahedral cells wraps around the twenty-five envelopes of 24 octahedral cells (adding some 4-dimensional space in places between these 3-dimensional envelopes). The shape of those interstices must be an octahedral 4-pyramid of some kind, but in the 600-cell it is not regular.[z]

Constructions

The 600-cell incorporates the geometries of every convex regular polytope in the first four dimensions, except the 5-cell, the 120-cell, and the polygons {7} and above.[22] Consequently, there are numerous ways to construct or deconstruct the 600-cell, but none of them are trivial. The construction of the 600-cell from its regular predecessor the 24-cell can be difficult to visualize.

Gosset's construction

Thorold Gosset discovered the semiregular 4-polytopes, including the snub 24-cell with 96 vertices, which falls between the 24-cell and the 600-cell in the sequence of convex 4-polytopes of increasing size and complexity in the same radius. Gosset's construction of the 600-cell from the 24-cell is in two steps, using the snub 24-cell as an intermediate form. In the first, more complex step (described elsewhere) the snub 24-cell is constructed by a special snub truncation of a 24-cell at the golden sections of its edges.[7] In the second step the 600-cell is constructed in a straightforward manner by adding 4-pyramids (vertices) to facets of the snub 24-cell.[23]

The snub 24-cell is a diminished 600-cell from which 24 vertices (and the cluster of 20 tetrahedral cells around each) have been truncated, leaving a "flat" icosahedral cell in place of each removed icosahedral pyramid.[a] The snub 24-cell thus has 24 icosahedral cells and the remaining 120 tetrahedral cells. The second step of Gosset's construction of the 600-cell is simply the reverse of this diminishing: an icosahedral pyramid of 20 tetrahedral cells is placed on each icosahedral cell.

Constructing the unit-radius 600-cell from its precursor the unit-radius 24-cell by Gosset's method actually requires three steps. The 24-cell precursor to the snub 24-cell is not of the same radius: it is larger, since the snub 24-cell is its truncation. Starting with the unit-radius 24-cell, the first step is to reciprocate it around its midsphere to construct its outer canonical dual: a larger 24-cell, since the 24-cell is self-dual. That larger 24-cell can then be snub truncated into a unit-radius snub 24-cell.

Cell clusters

Since it is so indirect, Gosset's construction may not help us very much to directly visualize how the 600 tetrahedral cells fit together into a 3-dimensional surface envelope,[y] or how they lie on the underlying surface envelope of the 24-cell's octahedral cells. For that it is helpful to build up the 600-cell directly from clusters of tetrahedral cells. Most of us have difficulty visualizing the 600-cell from the outside in 4-space, or recognizing an outside view of the 600-cell, due to our total lack of sensory experience in 4-dimensional spaces, but we should be able to visualize the surface envelope of 600 cells from the inside because that volume is a 3-dimensional space that we could actually "walk around in" and explore.[24] In this exercise of building the 600-cell up from cell clusters, we are entirely within a 3-dimensional space, albeit a strangely small, closed curved space, in which we can go a mere ten edge lengths away in a straight line in any direction and return to our starting point.

Icosahedra

A regular icosahedron colored in snub octahedron symmetry.[aa] Icosahedra in the 600-cell are face-bonded to each other at the yellow faces, and to clusters of 5 tetrahedral cells at the blue faces.
The apex of the icosahedral pyramid (not visible) is a 13th 600-cell vertex inside the icosahedron (but above its hyperplane).

A cluster of 5 tetrahedral cells: four cells face-bonded around a fifth cell (not visible). The four cells lie in different hyperplanes.

The vertex figure of the 600-cell is the icosahedron.[a] Twenty tetrahedral cells meet at each vertex, forming an icosahedral pyramid whose apex is the vertex, surrounded by its base icosahedron. The 600-cell has a dihedral angle of 𝜋/3 + arccos(−1/4) ≈ 164.4775°.[26]

An entire 600-cell can be assembled from 24 such icosahedral pyramids (bonded face-to-face at 8 of the 20 faces of the icosahedron, colored yellow in the illustration), plus 24 clusters of 5 tetrahedral cells (four cells face-bonded around one) which fill the voids remaining between the icosahedra. Six clusters of 5 cells surround each icosahedron, and six icosahedra surround each cluster of 5 cells. Five tetrahedral cells surround each icosahedron edge: two from the icosahedral pyramid, and three from a cluster of 5 cells (one of which is the central tetrahedron of the five). Each icosahedron is face-bonded to each adjacent cluster of 5 cells by two blue faces that share an edge (which is also one of the six edges of the central tetrahedron of the five). The apexes of the 24 icosahedral pyramids are the vertices of a 24-cell inscribed in the 600-cell. The other 96 vertices (the vertices of the icosahedra) are the vertices of an inscribed snub 24-cell, which has exactly the same structure of icosahedra and tetrahedra described here, except that the icosahedra are not 4-pyramids filled by tetrahedral cells; they are only "flat" 3-dimensional icosahedral cells.

The partitioning of the 600-cell into clusters of 20 cells and clusters of 5 cells is artificial, since all the cells are the same. One can begin by picking out an icosahedral pyramid cluster centered at any arbitrarily chosen vertex. Thus there are 120 overlapping icosahedra in the 600-cell. Coloring the icosahedra with 8 yellow and 12 blue faces can be done in 5 distinct ways.[ab] Thus each icosahedral pyramid's apex vertex is a vertex of 5 distinct 24-cells, and the 120 vertices comprise 25 (not 5) 24-cells.[x]

The icosahedra are face-bonded into geodesic "straight lines" by their opposite faces, bent in the fourth dimension into a ring of 6 icosahedral pyramids. Their apexes are the vertices of a great circle hexagon. This hexagonal geodesic traverses a ring of 12 tetrahedral cells, alternately bonded face-to-face and vertex-to-vertex. The long diameter of each face-bonded pair of tetrahedra (each triangular bipyramid) is a hexagon edge (a 24-cell edge). The tetrahedral cells are face-bonded into helices, bent in the fourth dimension into rings of 30 tetrahedral cells.[ac] Their edges form geodesic "straight lines" of 10 edges: great circle decagons. Each tetrahedron, having six edges, participates in six different decagons.
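The dihedral angle quoted above is a one-line check:

```python
from math import acos, degrees, pi

dihedral = pi / 3 + acos(-1 / 4)   # angle between adjacent tetrahedral cells
print(degrees(dihedral))           # 164.4775...
```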
Octahedra

There is another useful way to partition the 600-cell surface, into 24 clusters of 25 tetrahedral cells, which reveals more structure[27] and a direct construction of the 600-cell from its predecessor the 24-cell. Begin with any one of the clusters of 5 cells (above), and consider its central cell to be the center object of a new larger cluster of tetrahedral cells. The central cell is the first section of the 600-cell beginning with a cell. By surrounding it with more tetrahedral cells, we can reach the deeper sections beginning with a cell.

First, note that a cluster of 5 cells consists of 4 overlapping pairs of face-bonded tetrahedra (triangular dipyramids) whose long diameter is a 24-cell edge (a hexagon edge) of length √1. Six more triangular dipyramids fit into the concavities on the surface of the cluster of 5,[ad] so the exterior chords connecting its 4 apical vertices are also 24-cell edges of length √1. They form a tetrahedron of edge length √1, which is the second section of the 600-cell beginning with a cell.[ae] There are 600 of these √1 tetrahedral sections in the 600-cell.[af]

With the six triangular dipyramids fit into the concavities, there are 12 new cells and 6 new vertices in addition to the 5 cells and 8 vertices of the original cluster. The 6 new vertices form the third section of the 600-cell beginning with a cell, an octahedron of edge length √1, obviously the cell of a 24-cell. As partially filled so far (by 17 tetrahedral cells), this √1 octahedron has concave faces into which a short triangular pyramid fits; it has the same volume as a regular tetrahedral cell but an irregular tetrahedral shape.[ag] Each octahedral cell consists of 1 + 4 + 12 + 8 = 25 tetrahedral cells: 17 regular tetrahedral cells plus 8 volumetrically equivalent tetrahedral cells, each consisting of 6 one-sixth fragments from 6 different regular tetrahedral cells that each span three adjacent octahedral cells.

Thus the unit-radius 600-cell is constructed directly from its predecessor,[z] the unit-radius 24-cell, by placing on each of its octahedral facets a truncated[ah] irregular octahedral pyramid of 14 vertices[ai] constructed (in the above manner) from 25 regular tetrahedral cells of edge length 1/φ ≈ 0.618.

Rotations

The 600-cell is generated by rotations of the 24-cell in increments of 36° = 𝜋/5 (the arc of one 600-cell edge length). There are 25 inscribed 24-cells in the 600-cell. Therefore there are also 25 inscribed snub 24-cells, 75 inscribed tesseracts and 75 inscribed 16-cells.[x]

The 8-vertex 16-cell has 4 long diameters inclined at 90° = 𝜋/2 to each other, often taken as the 4 orthogonal axes of the coordinate system. The 24-vertex 24-cell has 12 long diameters inclined at 60° = 𝜋/3 to each other: 3 disjoint sets of 4 orthogonal axes, each set comprising the diameters of one of 3 inscribed 16-cells, isoclinically rotated by 𝜋/3 with respect to each other.[28] The 120-vertex 600-cell has 60 long diameters: not just 5 disjoint sets of 12 diameters, each comprising one of 5 inscribed 24-cells (as we might suspect by analogy), but 25 distinct but overlapping sets of 12 diameters, each comprising one of 25 inscribed 24-cells. There are 5 disjoint 24-cells in the 600-cell, but not just 5: there are 10 different ways to partition the 600-cell into 5 disjoint 24-cells.[t]

The 24-cells are rotated with respect to each other in increments of 𝜋/5. The rotational distance between inscribed 24-cells is always a double rotation of 0 to 4 increments of 𝜋/5 in one invariant plane, combined with 0 to 4 increments of 𝜋/5 in the completely orthogonal invariant plane. The product of these two 5-click simple rotations produces 25 distinct ways we can pick the 24 vertices of a 24-cell out of the 120 vertices of a 600-cell.

The 600-cell can be constructed radially from 720 golden triangles of edge lengths √0.𝚫, √1, √1 which meet at the center of the 4-polytope, each contributing two √1 radii and a √0.𝚫 edge.[n] They form 1200 triangular pyramids with their apexes at the center: irregular tetrahedra with equilateral √0.𝚫 bases (the faces of the 600-cell).
These form 600 tetrahedral pyramids with their apexes at the center: irregular 5-cells with regular √0.𝚫 tetrahedron bases (the cells of the 600-cell).

As a configuration

This configuration matrix[29] represents the 600-cell. The rows and columns correspond to vertices, edges, faces, and cells. The diagonal numbers say how many of each element occur in the whole 600-cell. The non-diagonal numbers say how many of the column's element occur in or at the row's element.

${\displaystyle {\begin{bmatrix}{\begin{matrix}120&12&30&20\\2&720&5&5\\3&3&1200&2\\4&6&4&600\end{matrix}}\end{bmatrix}}}$

Here is the configuration expanded with k-face elements and k-figures. The diagonal element counts are the ratio of the full Coxeter group order, 14400, divided by the order of the subgroup with mirror removal.

| H4   | k-face | fk | f0  | f1  | f2   | f3  | k-fig | Notes                        |
|------|--------|----|-----|-----|------|-----|-------|------------------------------|
| H3   | ( )    | f0 | 120 | 12  | 30   | 20  | {3,5} | H4/H3 = 14400/120 = 120      |
| A1H2 | { }    | f1 | 2   | 720 | 5    | 5   | {5}   | H4/H2A1 = 14400/10/2 = 720   |
| A2A1 | {3}    | f2 | 3   | 3   | 1200 | 2   | { }   | H4/A2A1 = 14400/6/2 = 1200   |
| A3   | {3,3}  | f3 | 4   | 6   | 4    | 600 | ( )   | H4/A3 = 14400/24 = 600       |

Symmetries

The icosians are a specific set of Hamiltonian quaternions with the same symmetry as the 600-cell.[30] The icosians lie in the golden field, (a + b√5) + (c + d√5)i + (e + f√5)j + (g + h√5)k, where the eight variables are rational numbers.[31] The finite sums of the 120 unit icosians are called the icosian ring.

When interpreted as quaternions, the 120 vertices of the 600-cell form a group under quaternionic multiplication. This group is often called the binary icosahedral group and denoted by 2I as it is the double cover of the ordinary icosahedral group I. It occurs twice in the rotational symmetry group RSG of the 600-cell as an invariant subgroup, namely as the subgroup 2I_L of quaternion left-multiplications and as the subgroup 2I_R of quaternion right-multiplications. Each rotational symmetry of the 600-cell is generated by specific elements of 2I_L and 2I_R; the pair of opposite elements generate the same element of RSG. The centre of RSG consists of the non-rotation Id and the central inversion −Id. We have the isomorphism RSG ≅ (2I_L × 2I_R) / {Id, −Id}. The order of RSG equals 120 × 120/2 = 7200. The binary icosahedral group is isomorphic to SL(2,5).

The full symmetry group of the 600-cell is the Weyl group of H4.[32] This is a group of order 14400. It consists of 7200 rotations and 7200 rotation-reflections. The rotations form an invariant subgroup of the full symmetry group. The rotational symmetry group was described by S.L. van Oss.[33]
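The group property of the 120 vertices can be spot-checked with the `verts` list from the Coordinates section, reading each vertex (w, x, y, z) as the quaternion w + xi + yj + zk (this component ordering is an assumption made for the illustration; any fixed convention gives the same closure result):

```python
def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

vert_set = {tuple(round(c, 6) for c in v) for v in verts}

# Closure: the product of any two vertices is again a vertex, so the
# 120 unit icosians form a group (the binary icosahedral group 2I).
assert all(tuple(round(c, 6) for c in qmul(u, w)) in vert_set
           for u in verts for w in verts)
print("closed under quaternion multiplication:", len(vert_set), "elements")
```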
Visualization

The symmetries of the 3-D surface of the 600-cell are somewhat difficult to visualize due to both the large number of tetrahedral cells,[y] and the fact that the tetrahedron has no opposing faces or vertices. One can start by realizing the 600-cell is the dual of the 120-cell. One may also notice that the 600-cell also contains the vertices of a dodecahedron,[22] which with some effort can be seen in most of the below perspective projections.

Union of two tori

100 tetrahedra in a 10×10 array forming a Clifford torus boundary in the 600-cell.

The 120-cell can be decomposed into two disjoint tori. Since it is the dual of the 600-cell, this same dual tori structure exists in the 600-cell, although it is somewhat more complex. The 10-cell geodesic path in the 120-cell corresponds to a 10-vertex decagon path in the 600-cell.[34]

Start by assembling five tetrahedra around a common edge. This structure looks somewhat like an angular "flying saucer". Stack ten of these, vertex to vertex, "pancake" style. Fill in the annular ring between each "saucer" with 10 tetrahedra, forming an icosahedron. You can view this as five vertex-stacked icosahedral pyramids, with the five extra annular ring gaps also filled in. The surface is the same as that of ten stacked pentagonal antiprisms. You now have a torus consisting of 150 cells, ten edges long, with 100 exposed triangular faces, 150 exposed edges, and 50 exposed vertices.

Stack another tetrahedron on each exposed face. This will give you a somewhat bumpy torus of 250 cells with 50 raised vertices, 50 valley vertices, and 100 valley edges. The valleys are 10-edge-long closed paths and correspond to other instances of the 10-vertex decagon path mentioned above. These paths spiral around the center core path, but mathematically they are all equivalent.

Build a second identical torus of 250 cells that interlinks with the first. This accounts for 500 cells. These two tori mate together with the valley vertices touching the raised vertices, leaving 100 tetrahedral voids that are filled with the remaining 100 tetrahedra that mate at the valley edges. This latter set of 100 tetrahedra are on the exact boundary of the duocylinder and form a Clifford torus. They can be "unrolled" into a square 10×10 array. Incidentally, this structure forms one tetrahedral layer in the tetrahedral-octahedral honeycomb.

A single 30-tetrahedron ring Boerdijk–Coxeter helix within the 600-cell, seen in stereographic projection.

A 30-tetrahedron ring can be seen along the perimeter of this 30-gonal orthogonal projection.

There are exactly 50 "egg crate" recesses and peaks on both sides that mate with the 250-cell tori. In this case, into each recess fits a triangular bipyramid composed of two tetrahedra, instead of an octahedron as in the honeycomb.

The 600-cell can be further partitioned into 20 disjoint intertwining rings, each 30 cells and ten edges long, forming a discrete Hopf fibration.[35] These chains of 30 tetrahedra each form a Boerdijk–Coxeter helix. Five such helices nest and spiral around each of the 10-vertex decagon paths, forming the initial 150-cell torus mentioned above.[36] The center axis of each helix is a 30-gon geodesic that does not intersect any vertices.[w] This decomposition of the 600-cell has symmetry [[10,2+,10]], order 400, the same symmetry as the grand antiprism. The grand antiprism is just the 600-cell with the two above 150-cell tori removed, leaving only the single middle layer of tetrahedra, similar to the belt of an icosahedron with the 5 top and 5 bottom triangles removed (pentagonal antiprism).

2D projections

The H3 decagonal projection shows the plane of the van Oss polygon.

Orthographic projections by Coxeter planes: H4 [30], [20], F4 [12]; H3 [10], A2/B3/D4 [6], A3/B2 [4].

3D projections

A three-dimensional model of the 600-cell, in the collection of the Institut Henri Poincaré, was photographed in 1934–1935 by Man Ray, and formed part of two of his later "Shakespearean Equation" paintings.[37]

Vertex-first projection

This image shows a vertex-first perspective projection of the 600-cell into 3D. The 600-cell is scaled to a vertex-center radius of 1, and the 4D viewpoint is placed 5 units away. Then the following enhancements are applied:

• The 20 tetrahedra meeting at the vertex closest to the 4D viewpoint are rendered in solid color. Their icosahedral arrangement is clearly shown.
• The tetrahedra immediately adjoining these 20 cells are rendered in transparent yellow.
• The remaining cells are rendered in edge-outline.
• Cells facing away from the 4D viewpoint (those lying on the "far side" of the 600-cell) have been culled, to reduce visual clutter in the final image.

Cell-first projection

This image shows the 600-cell in cell-first perspective projection into 3D. Again, the 600-cell is scaled to a vertex-center radius of 1 and the 4D viewpoint is placed 5 units away. The following enhancements are then applied:

• The nearest cell to the 4D viewpoint is rendered in solid color, lying at the center of the projection image.
• The cells surrounding it (sharing at least 1 vertex) are rendered in transparent yellow.
• The remaining cells are rendered in edge-outline.
• Cells facing away from the 4D viewpoint have been culled for clarity.

This particular viewpoint shows a nice outline of 5 tetrahedra sharing an edge, towards the front of the 3D image.

Simple Rotation

A 3D projection of a 600-cell performing a simple rotation.

Concentric Hulls

The 600-cell is projected to 3D using an orthonormal basis. The vertices are sorted and tallied by their 3D norm. Generating the increasingly transparent hull of each set of tallied norms shows pairs of: 1) points at the origin, 2) icosahedra, 3) dodecahedra, 4) icosahedra, for a total of 120 vertices.

Frame-synchronized animated comparison of the 600-cell using orthogonal isometric (left) and perspective (right) projections.

Stereographic projection (on 3-sphere)

Cell-centered. The 720 edges of the 600-cell can be seen here as 72 circles, each divided into 10 arc-edges at the intersections. Each vertex has 6 circles intersecting.

Diminished 600-cells

The snub 24-cell may be obtained from the 600-cell by removing the vertices of an inscribed 24-cell and taking the convex hull of the remaining vertices. This process is a diminishing of the 600-cell.

The grand antiprism may be obtained by another diminishing of the 600-cell: removing 20 vertices that lie on two mutually orthogonal rings and taking the convex hull of the remaining vertices.

A bi-24-diminished 600-cell, with all tridiminished icosahedron cells, has 48 vertices removed, leaving 72 of 120 vertices of the 600-cell. The dual of a bi-24-diminished 600-cell is a tri-24-diminished 600-cell, with 48 vertices and 72 hexahedron cells.

There are a total of 314,248,344 diminishings of the 600-cell by non-adjacent vertices. All of these consist of regular tetrahedral and icosahedral cells.[38]

Diminished 600-cells:

| Name | Tri-24-diminished 600-cell | Bi-24-diminished 600-cell | Snub 24-cell (24-diminished 600-cell) | Grand antiprism (20-diminished 600-cell) | 600-cell |
|---|---|---|---|---|---|
| Vertices | 48 | 72 | 96 | 100 | 120 |
| Vertex figure (symmetry) | dual of tridiminished icosahedron ([3], order 6) | tetragonal antiwedge ([2]+, order 2) | tridiminished icosahedron ([3], order 6) | bidiminished icosahedron ([2], order 4) | icosahedron ([5,3], order 120) |
| Symmetry | Order 144 (48×3 or 72×2) | Order 144 (48×3 or 72×2) | [3+,4,3], order 576 (96×6) | [[10,2+,10]], order 400 (100×4) | [5,3,3], order 14400 (120×120) |
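The first diminishing above is easy to reproduce numerically with the `verts` list from the Coordinates section: the inscribed 24-cell there is the 8 + 16 vertices whose components are all 0, ±1/2 or ±1, and removing it leaves the 96 vertices of a snub 24-cell.

```python
def in_24cell(v):
    """True for the 8 + 16 vertices of the inscribed 24-cell."""
    return all(round(abs(c), 9) in (0.0, 0.5, 1.0) for c in v)

snub = [v for v in verts if not in_24cell(v)]
print(len(snub))    # 96, the vertices of a snub 24-cell
```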
Related complex polygons

The regular complex polytopes 3{5}3 and 5{3}5, in ${\displaystyle \mathbb {C} ^{2}}$, have a real representation as the 600-cell in 4-dimensional space. Both have 120 vertices and 120 edges. The first has complex reflection group 3[5]3, order 360, and the second has symmetry 5[3]5, order 600.[39]

Regular complex polytopes in orthogonal projection of the H4 Coxeter plane: {3,3,5}, order 14400; 3{5}3, order 360; 5{3}5, order 600.

Related polytopes and honeycombs

The 600-cell is one of 15 regular and uniform polytopes with the same symmetry [3,3,5]. H4 family polytopes: 120-cell {5,3,3}, rectified 120-cell r{5,3,3}, truncated 120-cell t{5,3,3}, cantellated 120-cell rr{5,3,3}, runcinated 120-cell t0,3{5,3,3}, cantitruncated 120-cell tr{5,3,3}, runcitruncated 120-cell t0,1,3{5,3,3}, omnitruncated 120-cell t0,1,2,3{5,3,3}; 600-cell {3,3,5}, rectified 600-cell r{3,3,5}, truncated 600-cell t{3,3,5}, cantellated 600-cell rr{3,3,5}, bitruncated 600-cell 2t{3,3,5}, cantitruncated 600-cell tr{3,3,5}, runcitruncated 600-cell t0,1,3{3,3,5}, omnitruncated 600-cell t0,1,2,3{3,3,5}.

It is similar to three regular 4-polytopes: the 5-cell {3,3,3} and 16-cell {3,3,4} of Euclidean 4-space, and the order-6 tetrahedral honeycomb {3,3,6} of hyperbolic space. All of these have tetrahedral cells.

{3,3,p} polytopes: {3,3,3}, {3,3,4} and {3,3,5} are finite (in S3); {3,3,6} is paracompact (in H3); {3,3,7}, {3,3,8}, ... {3,3,∞} are noncompact (in H3). Their vertex figures are {3,3}, {3,4}, {3,5}, {3,6}, {3,7}, {3,8}, ... {3,∞} respectively.

This 4-polytope is a part of a sequence of 4-polytopes and honeycombs with icosahedron vertex figures. {p,3,5} polytopes: {3,3,5} is finite (in S3); {4,3,5} and {5,3,5} are compact (in H3); {6,3,5} is paracompact (in H3); {7,3,5}, {8,3,5}, ... {∞,3,5} are noncompact (in H3). Their cells are {3,3}, {4,3}, {5,3}, {6,3}, {7,3}, {8,3}, ... {∞,3} respectively.

Notes

1. In the 3-dimensional space of the 600-cell's boundary surface, at each vertex one finds the twelve nearest other vertices surrounding the vertex the way an icosahedron's vertices surround its center. Twelve 600-cell edges converge at the icosahedron's center, where they appear to form six straight lines which cross there. However, the center is actually displaced in the 4th dimension (radially outward from the center of the 600-cell), out of the hyperplane defined by the icosahedron's vertices. Thus the vertex icosahedron is actually a canonical icosahedral pyramid, composed of 20 regular tetrahedra on a regular icosahedron base.

2. ^ The convex regular 4-polytopes can be ordered by size as a measure of 4-dimensional content (hypervolume) for the same radius. Each greater polytope in the sequence is rounder than its predecessor, enclosing more content[3] within the same radius. The 4-simplex (5-cell) is the limit smallest case, and the 120-cell is the largest. Complexity (as measured by comparing configuration matrices or simply the number of vertices) follows the same ordering. This provides an alternative numerical naming scheme for regular polytopes in which the 600-cell is the 120-point 4-polytope: fifth in the ascending sequence that runs from 5-point 4-polytope to 600-point 4-polytope.

3. ^ The edge length will always be different unless predecessor and successor are both radially equilateral, i.e. their edge length is the same as their radius (so both are preserved). Since radially equilateral polytopes are rare, the only such construction (in any dimension) is from the 8-cell to the 24-cell.

4. ^ a b Vertex geometry of the radially equilateral 24-cell, showing the 3 great circle polygons and the 4 vertex-to-vertex chord lengths. The 600-cell geometry is based on the 24-cell.
The 600-cell rounds out the 24-cell with 2 more great circle polygons (exterior decagon and interior pentagon), adding 4 more chord lengths which alternate with the 24-cell's 4 chord lengths.

5. ^ a b A 24-cell contains 16 hexagons. In the 600-cell, with 25 24-cells, each 24-cell is disjoint from 8 24-cells and intersects each of the other 16 24-cells in six vertices that form a hexagon.[8] A 600-cell contains 25 × 16/2 = 200 such hexagons.

6. ^ In cases where inscribed 4-polytopes of the same kind occupy disjoint sets of vertices (such as the two 16-cells inscribed in the tesseract, or the three 16-cells inscribed in the 24-cell), their sets of vertex chords and central polygons must likewise be disjoint. In the cases where they share vertices (such as the three tesseracts inscribed in the 24-cell, or the 25 24-cells inscribed in the 600-cell), they may also share some vertex chords and central polygons.[e]

7. ^ a b Each of the 25 24-cells of the 600-cell contains exactly one vertex (or no vertices) of each regular pentagon.[8]

8. ^ The angles 𝜉i and 𝜉j are angles of rotation in the two completely orthogonal invariant planes which characterize rotations in 4-dimensional Euclidean space. The angle 𝜂 is the inclination of both these planes from the north-south pole axis, where 𝜂 ranges from 0 to 𝜋/2. The (𝜉i, 0, 𝜉j) coordinates describe the great circles which intersect at the north and south pole ("lines of longitude"). The (𝜉i, 𝜋/2, 𝜉j) coordinates describe the great circles orthogonal to longitude ("equators"); there is more than one "equator" great circle in a 4-polytope, as the equator of a 3-sphere is a whole 2-sphere of great circles. The other Hopf coordinates (𝜉i, 0 < 𝜂 < 𝜋/2, 𝜉j) describe the great circles (not "lines of latitude") which cross an equator but do not pass through the north or south pole.

9. ^ The conversion from Hopf coordinates (𝜉i, 𝜂, 𝜉j) to unit-radius Cartesian coordinates (w, x, y, z) is:

w = cos 𝜉i sin 𝜂
x = cos 𝜉j cos 𝜂
y = sin 𝜉j cos 𝜂
z = sin 𝜉i sin 𝜂

The "Hopf north pole" (0, 0, 0) is Cartesian (0, 1, 0, 0). Cartesian (1, 0, 0, 0) is Hopf (0, 𝜋/2, 0).

10. ^ The Hopf coordinates[9] are triples of three angles: (𝜉i, 𝜂, 𝜉j) that parameterize the 3-sphere by numbering points along its great circles. A Hopf coordinate describes a point as a rotation from the "north pole" (0, 0, 0).[h] Hopf coordinates are a natural alternative to Cartesian coordinates[i] for framing regular convex 4-polytopes, because the group of 4-dimensional rotations, denoted SO(4), generates those polytopes.

11. ^ There are 600 permutations of these coordinates, but there are only 120 vertices in the 600-cell. These are actually the Hopf coordinates of the vertices of the 120-cell, which can be seen (two different ways) as a compound of 5 disjoint 600-cells.

12. ^ a b c The fractional-root golden chords exemplify that the golden ratio ϕ is a circle ratio related to 𝜋: 𝜋/5 = arccos(ϕ/2) is the arc of one decagon edge, the √0.𝚫 = √0.382 ≈ 0.618 = 𝚽 chord. Reciprocally, in this function discovered by Robert Everest expressing ϕ as a function of 𝜋 and the numbers 1, 2, 3 and 5 of the Fibonacci series: ϕ = 1 − 2 cos(3𝜋/5), 3𝜋/5 is the arc length of the √2.𝚽 = √2.618 ≈ 1.618 = ϕ chord.

13. ^ a b The 600-cell edges are decagon edges of length √0.𝚫, which is 𝚽, the smaller golden section of √5; the edges are in the inverse golden ratio 1/φ to the √1 hexagon chords (the 24-cell edges). The other fractional-root chords exhibit golden relationships as well. The chord of length √1.𝚫 is a pentagon edge.
The next fractional-root chord is a decagon diagonal of length √2.𝚽 which is φ, the larger golden section of √5; it is in the golden ratio[l] to the √1 chord (and the radius).[p] The last fractional-root chord is the pentagon diagonal of length √3.𝚽. The diagonal of a regular pentagon is always in the golden ratio to its edge, and indeed φ√1.𝚫 is √3.𝚽.

14. ^ a b c d The long radius (center to vertex) of the 600-cell is in the golden ratio to its edge length; thus its radius is ϕ if its edge length is 1, and its edge length is 1/ϕ if its radius is 1. Only a few uniform polytopes have this property, including the four-dimensional 600-cell, the three-dimensional icosidodecahedron, and the two-dimensional decagon. (The icosidodecahedron is the equatorial cross section of the 600-cell, and the decagon is the equatorial cross section of the icosidodecahedron.) Radially golden polytopes are those which can be constructed, with their radii, from golden triangles[q] which meet at the center, each contributing two radii and an edge.

15. ^ The fractional square roots are given as decimal fractions where 𝚽 ≈ 0.618 is the inverse golden ratio 1/φ and 𝚫 ≈ 0.382 = 1 − 𝚽 = 𝚽². For example: √0.𝚫 = √0.382 ≈ 0.618 = 𝚽

16. ^ Notice in the diagram how the φ chord (the larger golden section) sums with the adjacent 𝚽 edge (the smaller golden section) to √5, as if together they were a √5 chord bent to fit inside the √4 diameter.

17. ^ a b c A golden triangle is an isosceles triangle in which the duplicated side a is in the golden ratio to the distinct side b:

a/b = ϕ = (1 + √5)/2 ≈ 1.618

It can be found in a regular decagon by connecting any two adjacent vertices to the center, and in the regular pentagon by connecting any two adjacent vertices to the vertex opposite them. The vertex angle is:

𝛉 = arccos(ϕ/2) = 𝜋/5 = 36°

so the base angles are each 2𝜋/5 = 72°. The golden triangle is uniquely identified as the only triangle to have its three angles in 2:2:1 proportions.

18. ^ The 10 hexagons which cross at each vertex lie along the 20 short radii of the icosahedral vertex figure.[a]

19. ^ The 25 inscribed 24-cells each have 3 inscribed tesseracts, which each have 8 √1 cubic cells. The 1200 √3 chords are the 4 long diameters of these 600 cubes; the 3 tesseracts overlap and each chord is the long diameter of a cube in two different tesseracts.

20. ^ a b Schoute was the first to state (a century ago) that there are exactly ten ways to partition the 120 vertices of the 600-cell into five disjoint 24-cells. The 25 24-cells can be placed in a 5 × 5 array such that each row and each column of the array partitions the 120 vertices of the 600-cell into five disjoint 24-cells. The rows and columns of the array are the only ten such partitions of the 600-cell.[21]

21. ^ The sum of 0.𝚫×720 + 1×1200 + 1.𝚫×720 + 2×1800 + 2.𝚽×720 + 3×1200 + 3.𝚽×720 + 4×60 is 14,400.

22. ^ The sum of the squared lengths of all the distinct chords of any regular convex n-polytope of unit radius is the square of the number of vertices.[20]

23. ^ a b The 600 cells are arranged in 20 disjoint twisted rings of 30 tetrahedra each. The center axis of each 30-tetrahedron Boerdijk–Coxeter helix forms a 30-gon, with each segment passing through a tetrahedron similarly. This geodesic resides completely in the 3-dimensional surface; the segments are not interior chords. It does not touch any edges or vertices, but it does hit faces.

24.
^ a b c The 600-cell contains exactly 25 24-cells, 75 16-cells and 75 8-cells, with each 16-cell and each 8-cell lying in just one 24-cell.[21]

25. ^ a b c Each tetrahedral cell touches, in some manner, 56 other cells. One cell contacts each of the four faces; two cells contact each of the six edges, but not a face; and ten cells contact each of the four vertices, but not a face or edge.

26. ^ a b Beginning with the 16-cell, every regular convex 4-polytope in the unit-radius sequence is inscribed in its successor.[5] Therefore the successor may be constructed by placing 4-pyramids of some kind on the cells of its predecessor. Between the 16-cell and the tesseract, we have 16 right tetrahedral pyramids, with their apexes filling the corners of the tesseract. Between the tesseract and the 24-cell, we have 8 canonical cubic pyramids. But if we place 24 canonical octahedral pyramids on the 24-cell, we only get another tesseract (of twice the radius and edge length), not the successor 600-cell. Between the 24-cell and the 600-cell there must be 24 smaller, irregular 4-pyramids on a regular octahedral base.

27. ^ Because the octahedron can be snub truncated yielding an icosahedron,[25] another name for the icosahedron is snub octahedron. This term refers specifically to a lower symmetry arrangement of the icosahedron's faces (with 8 faces of one color and 12 of another).

28. ^ The pentagonal pyramids around each vertex of the "snub octahedron" icosahedron all look the same, with two yellow and three blue faces. Each pentagon has five distinct rotational orientations. Rotating any pentagonal pyramid rotates all of them, so the five rotational positions are the only five different ways to arrange the colors.

29. ^ Since tetrahedra do not have opposing faces, the only way they can be stacked face-to-face in a straight line is in the form of a twisted chain called a Boerdijk–Coxeter helix.

30. ^ These 12 cells are edge-bonded to the central cell, face-bonded to the exterior faces of the cluster of 5, and face-bonded to each other in pairs. They are blue-faced cells in the 6 different icosahedral pyramids surrounding the cluster of 5.

31. ^ The √1 tetrahedron has a volume of 9 √0.𝚫 tetrahedral cells. In the 3-dimensional volume of the 600 cells, it encloses the cluster of 5 cells, which do not entirely fill it. The 6 dipyramids (12 cells) which fit into the concavities of the cluster of 5 cells overfill it: only one third of each dipyramid lies within the √1 tetrahedron. The dipyramids contribute one-third of each of 12 cells to it, a volume equivalent to 4 cells.

32. ^ We also find √1 tetrahedra as the cells of the unit-radius 5-cell, and radially around the center of the 24-cell (one behind each of the 96 faces). Those radial √1 tetrahedra also occur in the 600-cell (in the 25 inscribed 24-cells), but note that those are not the same tetrahedra as the 600 √1 tetrahedral sections.

33. ^ Each √1 edge of the octahedral cell is the long diameter of another tetrahedral dipyramid (two more face-bonded tetrahedral cells). In the 24-cell, three octahedral cells surround each edge, so one third of the dipyramid lies inside each octahedron, split between two adjacent concave faces. Each concave face is filled by one-sixth of each of the three dipyramids that surround its three edges, so it has the same volume as one tetrahedral cell.

34. ^ The apex of a canonical √1 octahedral pyramid has been snub truncated into a regular tetrahedral cell with shorter √0.𝚫 edges, replacing the apex with four vertices.
The truncation has also created another four vertices (arranged as a √1 tetrahedron in a hyperplane between the octahedral base and the apex tetrahedral cell), and linked these eight new vertices with √0.𝚫 edges. The truncated pyramid thus has eight 'apex' vertices above the hyperplane of its octahedral base, rather than just one. The original pyramid had flat sides: the five geodesic routes from any base vertex to the opposite base vertex ran along two √1 edges (and just one of those routes ran through the single apex). The truncated pyramid has rounded sides: five geodesic routes from any base vertex to the opposite base vertex run along three √0.𝚫 edges (and pass through two 'apexes').

35. ^ The uniform 4-polytopes which this 14-vertex, 25-cell irregular 4-polytope most closely resembles may be the 10-vertex, 10-cell rectified 5-cell and its dual (it has characteristics of both).

Citations

1. ^ N.W. Johnson: Geometries and Transformations, (2018) ISBN 978-1-107-10340-5 Chapter 11: Finite Symmetry Groups, 11.5 Spherical Coxeter groups, p. 249
2. ^ Matila Ghyka, The Geometry of Art and Life (1977), p. 68
3. ^ Coxeter 1973, pp. 292-293, Table I(ii): The sixteen regular polytopes {p,q,r} in four dimensions; an invaluable table providing all 20 metrics of each 4-polytope in edge length units. They must be algebraically converted to compare polytopes of unit radius.
4. ^ Coxeter 1973, p. 153, §8.51; "In fact, the vertices of {3, 3, 5}, each taken 5 times, are the vertices of 25 {3, 4, 3}'s."
5. ^ a b Coxeter 1973, p. 305, Table VII: Regular Compounds in Four Dimensions.
6. ^ Coxeter 1973, pp. 156-157, §8.7 Cartesian coordinates.
7. ^ a b Coxeter 1973, pp. 151-153, §8.4 The snub {3,4,3}.
8. ^ a b Denney et al. 2020, p. 438.
9. ^ Zamboj 2021, pp. 10-11, §Hopf coordinates.
10. ^ Coxeter 1973, p. 298, Table V: The Distribution of Vertices of Four-dimensional Polytopes in Parallel Solid Sections (§13.1); (iii) Sections of {3, 3, 5} (edge 2𝜏⁻¹) beginning with a vertex.
11. ^ Oss 1899; van Oss does not mention the arc distances between vertices of the 600-cell.
12. ^
13. ^ Coxeter 1973, p. 298, Table V: The Distribution of Vertices of Four-dimensional Polytopes in Parallel Solid Sections (§13.1); (iii) Sections of {3, 3, 5} (edge 2𝜏⁻¹) beginning with a vertex; see column a.
14. ^ Steinbach 1997, p. 23, Figure 3; Steinbach derived a formula relating the diagonals and edge lengths of successive regular polygons, and illustrated it with a "fan of chords" diagram like the one here.
15. ^ Denney et al. 2020, pp. 437-439, §4 The planes of the 600-cell.
16. ^ Sadoc 2001, p. 576, §2.4: the ten-fold screw axis.
17. ^ Waegell & Aravind 2009, p. 5, §3.4. The 24-cell: points, lines, and Reye's configuration. [The dual hexagon planes are not orthogonal to each other, only their dual axis pairs. Dual hexagon pairs do not occur in individual 24-cells, only between 24-cells in the 600-cell.]
18. ^ Sadoc 2001, pp. 576-577, §2.4: the six-fold screw axis.
19. ^ Sadoc 2001, p. 577, §2.4: the four-fold screw axis.
20. ^ Copher 2019, p. 6, §3.2 Theorem 3.4.
21. ^ a b Denney et al. 2020, p. 434.
22. ^ a b Coxeter 1973, p. 303, Table VI (iii): 𝐈𝐈 = {3,3,5}.
23. ^ Coxeter 1973, p. 153, §8.5 Gosset's construction for {3,3,5}.
24. ^ Miyazaki 1990; Miyazaki showed that the surface envelope of the 600-cell can be realized architecturally in our ordinary 3-dimensional space as physical buildings (geodesic domes).
25. ^ Coxeter 1973, pp. 50-52, §3.7.
26. ^ Coxeter 1973, p. 293; 164°29'
27. ^ Coxeter 1973, p.
27. ^ Coxeter 1973, p. 299, Table V: (iv) Simplified sections of {3,3,5} ... beginning with a cell.
28. ^ Waegell & Aravind 2009, pp. 2-5, §3. The 600-cell.
29. ^ Coxeter 1973, p. 12, §1.8. Configurations.
30. ^ van Ittersum 2020, pp. 80-95, §4.3.
31. ^ Steinbach 1997, p. 24.
32. ^ Denney et al. 2020, §2 The Labeling of H4.
33. ^ Oss 1899, pp. 1-18.
34. ^ Sadoc 2001, pp. 576-577, §2.4 Discretising the fibration for the {3, 3, 5} polytope.
35. ^ Zamboj 2021, pp. 6-12, §2 Mathematical background.
36. ^ Sadoc 2001, pp. 577-578, §2.5 The 30/11 symmetry.
37. ^ Grossman, Wendy A.; Sebline, Edouard, eds. (2015), Man Ray Human Equations: A journey from mathematics to Shakespeare, Hatje Cantz. See in particular mathematical object mo-6.2, p. 58; Antony and Cleopatra, SE-6, p. 59; mathematical object mo-9, p. 64; Merchant of Venice, SE-9, p. 65; and "The Hexacosichoron", Philip Ording, p. 96.
38. ^ Sikirić, Mathieu; Myrvold, Wendy (2007). "The special cuts of 600-cell". Beiträge zur Algebra und Geometrie. 49 (1). arXiv:0708.3443.
39. ^ Coxeter 1991, pp. 48-49.
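A quick tally of the cell count asserted in note 31 above (our restatement of the note's arithmetic, not part of the source):

$$\underbrace{5}_{\text{cluster of 5 cells}} \;+\; \underbrace{12 \times \tfrac{1}{3}}_{\text{thirds of the 12 dipyramid cells}} \;=\; 5 + 4 \;=\; 9 \text{ cells},$$

matching the stated volume of 9 cells for the √1 tetrahedron.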
2021-08-04 14:17:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6857770085334778, "perplexity": 2345.847223745575}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154805.72/warc/CC-MAIN-20210804111738-20210804141738-00705.warc.gz"}
https://www.techwhiff.com/issue/1-point-a-5-8-is-reflected-across-the-line-y-x-what--52899
# 1. Point A(-5,8) is reflected across the line y = x. What are the coordinates of A'? Show your work and explain. Need help with this
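The captured page omits the answer, so here is a short worked solution (ours, not from the original page). Reflection across the line $y = x$ swaps the coordinates of a point, $(x, y) \mapsto (y, x)$, so

$$A(-5,\,8) \;\longmapsto\; A'(8,\,-5).$$

As a check, the midpoint of $\overline{AA'}$ is $\left(\tfrac{3}{2}, \tfrac{3}{2}\right)$, which lies on $y = x$, and $\overline{AA'}$ has slope $-1$, perpendicular to the mirror line, confirming the reflection.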
2023-03-31 07:22:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2997017800807953, "perplexity": 1590.6531319738128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949573.84/warc/CC-MAIN-20230331051439-20230331081439-00692.warc.gz"}
http://yems.asdpallavolorossano.it/maths-solution.html
# Maths Solution

Mixed Collections of Mathematics Exams with Solutions. Here we have given the TN State Board New Syllabus Samacheer Kalvi 9th Std Maths Guide Pdf of Book Back Questions and Answers, Chapter Wise Important Questions and Study Material. Great Minds in Sync™: A collection of customized digital and print resources for Eureka Math®, Wit & Wisdom® and PhD Science®. Test Bank for Contemporary Business Mathematics for Colleges, 17th Edition, by James E. Deitz. Extra Questions for Class 9 Maths Chapter 6 Lines and Angles with Solutions Answers; Lines and Angles Class 9 Extra Questions, Very Short Answer Type. It contains 25 mathematics problems involving simple "equations" and solutions. • GeoGebra interactive • Page 275: Check your solution using your calculator. These free lessons are cross-referenced to help you find related material, and the "Search" box on every page is available to help you find whatever math content you're looking for. Solved examples are given with detailed answer descriptions and explanations, and they are easy to understand. New course: Math 72 (Honors Section), Calculus on Manifolds. After the step-by-step solution process is shown, you can click on any step to see a detailed explanation. Basic Math Solver offers you solving online fraction problems, metric conversions, power and radical problems. Math homework help is available 24/7 in Go4Guru from experienced and qualified tutors. Welcome to the ExamSolutions maths website. The Texas Math Solution includes daily teacher lesson plans with accessible student materials, guidance for remote learning for teachers, and family guides to support learning regardless of setting. In the Selina Concise Mathematics for Class 10 ICSE Guide answers pdf, all questions are solved and explained by expert mathematics teachers as per ICSE board guidelines. It is no secret that the solution of mathematical tasks requires a certain mindset and great analytical skills. Chapter 12 - Areas Related to Circles. A note on the solutions. Disclaimer: the solutions provided here are intended to be a guide/reference for your own solutions, and not a substitute for work that should be done by you, the student. Class XI CBSE Mathematics: Number System. Our Math Games are fun and free. NCERT solutions for class 7 cover subjects like Maths, Science, Social Science, English, Hindi and Sanskrit, and Meritnation has provided enough study material to clear students' doubts. NSS Mathematics In Action 6B (exercises, full solutions); Form 6 assigned book reading report; HKDSE Practice Paper, Paper 1 Answer; English Assignment: Workplace Communication (Word .docx file); English Summer Assignment: Development Skills, Grammar and Usage Set B. Worksheets, learning resources, and math practice sheets for teachers to print.

This is a solver for the "24 ® Game" by Suntex International Inc. The 24 Game is a unique mathematics teaching tool proven to successfully engage students in grades 1-9 from diverse economic and social backgrounds. Knowing the answer is always 24 alleviates a classic type of math anxiety—getting the right answer—and instead puts the emphasis on the method behind the math.
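To make that concrete, here is a minimal brute-force sketch of such a solver (our illustration, not Suntex's implementation; the function name and the sample hand are hypothetical):

```python
from typing import Optional

def solve24(nums, target=24, eps=1e-6) -> Optional[str]:
    """Search for a way to combine the numbers with +, -, *, / to reach target.

    `nums` is a list of (value, expression) pairs; each step replaces two
    numbers with one combined result, which covers every parenthesization.
    """
    if len(nums) == 1:
        value, expr = nums[0]
        return expr if abs(value - target) < eps else None
    for i in range(len(nums)):
        for j in range(len(nums)):
            if i == j:
                continue  # ordered pairs, so a-b and b-a are both tried
            (a, ea), (b, eb) = nums[i], nums[j]
            rest = [nums[k] for k in range(len(nums)) if k not in (i, j)]
            candidates = [(a + b, f"({ea}+{eb})"),
                          (a - b, f"({ea}-{eb})"),
                          (a * b, f"({ea}*{eb})")]
            if abs(b) > eps:  # avoid division by zero
                candidates.append((a / b, f"({ea}/{eb})"))
            for value, expr in candidates:
                found = solve24(rest + [(value, expr)], target, eps)
                if found:
                    return found
    return None

# Hypothetical hand: one valid answer is ((7-(8/8))*4) = 24.
print(solve24([(n, str(n)) for n in (4, 7, 8, 8)]))
```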
Is mathematics just a collection of formulas and theorems that one somehow has to cram into one's head? Mathematics is much more than a set of problems in a textbook. At Math Corporation, we deliver technologically advanced financial calculation software products to seamlessly and accurately perform a myriad of complex loan, savings, and deposit calculations. An Arithmetic Progression (AP) is a sequence of numbers in which each term after the first is obtained by adding a fixed number to the preceding term; writing the first term as $a_1$ and the common difference as $d$, the $n$-th term is $a_n = a_1 + (n-1)d$ (NCERT Solutions for Class 10 Maths, Chapter 5: Arithmetic Progressions). There's no better way to find math help online than with Cymath, so also make sure you download our mobile app for iOS and Android today! Learn more than what the answer is: with the math helper app, you'll learn the steps behind it too. Get the Cymath math solving app on your smartphone! There are 35 faculty members in the department, most of them actively engaged in research in areas that range from the purely classical to cutting-edge applications in biomathematics. In addition to all the material in your Mathematics HL course books, we've included a full set of worked solutions here, to fully equip you to tackle the course and assessment. The students can refer to the NCERT Solutions for Class 10 as their additional references and study materials. The Simple Solutions team is currently working remotely. Anyone can learn math, whether they're in higher math at school or just looking to brush up on the basics; learn how you can get involved in making sure that math accessibility solutions are available to everyone. Get math help in algebra, geometry, trig, calculus, or something else. The CMS launched the Cathleen Synge Morawetz Prize (Aug 10, 2020); the prize is established in honour of the late Canadian mathematician Cathleen Synge Morawetz, to reflect the remarkable breadth and influence of her research achievements in pure and applied mathematics. The department of mathematical sciences offers the degrees BA, BS, MS, MS in Applied Mathematics and Statistics, MS in Teaching, and PhD. Students can register for a free demo through Go4Guru. Maths Genie GCSE Revision - GCSE Exam Papers; it has past papers, mark schemes and model answers to GCSE and A Level exam questions. Tamilnadu State Board Samacheer Kalvi 10th Maths Book Answers Solutions Guide, Terms 1, 2 and 3. Study Guides for 1A Chapters 4 and 6 are uploaded. Since he was the first person to submit this name, he wins the $200 prize and the seven books awarded to the winner. Trivial: a solution or example that is ridiculously simple and of little interest. 1.6180339… = Φ.

One way to write mathematics like this is to make use of the free software package LaTeX, which can produce very high quality scientific documents and is used extensively by mathematicians. Solving these Class 9 Maths solutions chapter by chapter will assure positive results. Mathematical puzzles, with hints, full solutions, and links to related math topics. Additionally, you will learn how to give the mathematical justification or solution of a construction. It not only explains the syllabus chapter-wise but also includes the various new types of questions, and hence increases the quality of study by the students. EQUATIONS INVOLVING FRACTIONS - Solve for x in the following equations. Mensuration: math chapter 6, grade 5, exercise and solution. Selina Publishers Concise Mathematics Class 10 ICSE Solutions 2019-2020 PDF free downloads are solved step-by-step in order to improve student problem-solving skills. Derive the expression of Newton's law. Videos of questions and theory are available for your reference. This manual contains hints or full solutions to many of the problems in Chapters 1, 2, and 3 of the text: J. David Logan, Applied Mathematics, 3rd ed., Wiley-Interscience, New York. The Movie Math Quiz Solutions! The Matrix; Cross of Iron. Robert Morse and Eric Brooks, April 20, 2020. These points are where the graphs of the inequalities intersect; thus, x=8 is a solution of the inequality. Ezy Math Tutoring is a new kind of maths tutoring company; we provide home tutors for all grades in mathematics, from year 3 to year 12, and our service is one of the best operating in Sydney, Melbourne, Perth, Brisbane and Adelaide.

We have step-by-step solutions for your textbooks, written by Bartleby experts! Chapter 11 - Inequalities. Back of Chapter Questions. Math: it works! The data age is likely to spell trouble for gerrymandering; this skulduggery relies on geometry, geography, and demographic tables, precisely the domains where math nerds can give an edge. Before going to class, some students have found it helpful to print out Purplemath's math lesson for that day's topic; Purplemath's pages print out neatly and clearly. NCERT Solutions for Class 10 Maths is the best source to start your preparation, because Class 10 board exams mark a crucial stage in every student's life; at Vidyakul, we not only understand the importance of scoring maximum marks in the Class 10 Math board exam, but also pledge to help you with NCERT Solutions Class 10 Maths. Class 6 Maths NCERT Solutions for Chapter 1, Knowing Our Numbers, Exercise 1.1. Here we have given the KTBS Karnataka State Board Syllabus. Sudoku posts five new puzzles every day. NCERT Solutions for Class 8 Maths in PDF format provided by Vedantu are solved carefully by subject expert teachers from the latest edition books and as per NCERT (CBSE) guidelines. Maharashtra Board Class 9 Maths Chapter 5, Co-ordinate Geometry, Practice Set 5.1. How to Learn Math. 0.5 is shown in the place-value chart below.

Department of Mathematics, 275 TMCB, Brigham Young University, Provo, UT 84602. Students can also read Samacheer Kalvi 10th Science Book Solutions and Tamil Nadu Samacheer Kalvi 10th Maths Model Question Papers 2019-2020, English & Tamil Medium. Step-by-step solutions help students find and fix their own mistakes and grow as independent learners. Online school classes from Takshila Learning are crafted for Class 10 Maths, with NCERT guides, RD Sharma solutions, notes, study material and sample question papers based on ICSE/CBSE and various state board exams, developed according to the latest exam pattern. Almost every student is facing problems when performing tasks in math. Starting with the largest numbers representing an animal: goats are represented by both 4 and 5 (4 to 3 chicks and 5 to 3 ducks). Learn how Carroll County Public Schools is using this digital math textbook (Math Techbook) to empower teachers in classrooms where 1:1 devices aren't always available. The Politician's solution: lie about how many bridges were crossed. The CMS wishes to thank the sponsors and partners who make our competitions program successful; the CMS supports math competitions in many ways, from running our own competitions to funding regional competitions to supporting Canadian involvement in international competitions. Competitions are an important part of learning mathematics and a fun activity for students of all ages. It could easily be mentioned in many undergraduate math courses, though it doesn't seem to appear in most textbooks used for those courses. Many teachers are looking for Common Core aligned math work. Solve various attributes of shapes and solids. Free PDF download of Class 8 Maths NCERT Solutions helps students to understand and learn the chapters easily and score more marks in exams. Solution Banks. Lockdown revision never really happened? Our online Maths A-level refresher courses on 17-21 August will review Year 12 content, getting you ready for September; our online Maths Bridging the Gap course on 24-25 August can help the transition from GCSE if you are about to start your A-levels. IXL covers everything students need to know for grade 8; fun, visual skills bring learning to life and adapt to each student's level. AdaptedMind is a customized online math curriculum, with problems and worksheets that will significantly improve your child's math performance, guaranteed. To support student learning during COVID-19, Hooda Math has removed ads from Timed Tests, Manipulatives, Tutorials, and Movies until January 1, 2021. Maharashtra State Board Class 9 Maths Solutions Part 2: Chapter 1, Basic Concepts in Geometry, Practice Set 1.2 and Exercise 1. Included is a review of exponents, radicals and polynomials, as well as in-depth discussions of solving equations (linear, quadratic, absolute value, exponential, logarithm) and inequalities (polynomial, rational, absolute value), functions (definition, notation, evaluation, inverse functions) and graphing. Research at the HeartMath Institute shows that adding heart to our daily activities and connections produces measurable benefits to our own and others' well-being. Class 10 Maths chapter solutions will help you to attempt questions that involve drawing tangents to a circle. I am interested in how you would encourage students to lay out their working to a trigonometric equation. For instance, let's consider this problem: solve the equation $6\cos x - 8\sin x = 7$ for $x$.
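A worked solution (ours, not from the page), using the auxiliary-angle method:

$$6\cos x - 8\sin x = R\cos(x + \varphi), \qquad R = \sqrt{6^2 + 8^2} = 10, \qquad \tan\varphi = \tfrac{8}{6},$$

so $\varphi = \arctan\tfrac{4}{3} \approx 0.9273$. The equation becomes $10\cos(x+\varphi) = 7$, hence

$$x = -\varphi \pm \arccos(0.7) + 2k\pi, \quad k \in \mathbb{Z},$$

that is, $x \approx -0.132 + 2k\pi$ or $x \approx -1.723 + 2k\pi$ (in radians).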
Class 9 Maths NCERT Solutions includes all the questions provided as per the new revised syllabus in the Class 9 Maths NCERT textbook. Gain fluency and confidence in math! IXL helps students master essential skills at their own pace through fun and interactive questions, built-in support, and motivating awards. Samacheer Kalvi Guru has created Tamilnadu State Board Samacheer Kalvi 9th Maths Book Answers and Solutions Guide Pdf Free Download, in English Medium and Tamil Medium, as part of Samacheer Kalvi 9th Books Solutions. Gain more understanding of your homework, with steps and hints guiding you from problems to answers! Tests how well the video lesson was understood; encourages traditional pen-and-paper working. Composed of forms to fill in, it then returns analysis of a problem and, when possible, provides a step-by-step solution. Finding a solution to a math problem can be broken down into parts in several ways, for example by asking questions; a first useful catalogue of questions could look like this, where each main question is followed by more specific ones. All you have to do is tap on the quick links available for ML Aggarwal Class 8 Maths Solutions, download them for free, and use them during your exam preparation. METHOD 1: Find common multiples of the numbers. PEARSON EDEXCEL GCE MATHEMATICS, General Instructions for Marking: 1. The total number of marks for the paper is 75. 2. The Edexcel Mathematics mark schemes use the following types of marks: M marks are method marks, awarded for "knowing a method and attempting to apply it", unless otherwise indicated. Edexcel past exam papers, mark schemes, grade boundaries and model answers. Ohio State's Department of Mathematics is a prominent mathematical research center; it continues to produce world-class mathematics research and is devoted to excellence in teaching. Prereq: a grade of C- or above in 1114 (114), 1151, 1156, 1161; not open to students with credit for 1172, 1181H or any Math class numbered 1500 or above, or with credit for 153. Math 1130 Exam 3 review (AU 17). Math 300 is an introduction to rigorous, abstract mathematics.

Keep reading and try to figure out these 10 math problems that confused people across the internet; these math equations went viral for being much more complicated than they seemed, or so simple that people got tripped up overthinking them. ScienceDirect.com, Elsevier's leading platform of peer-reviewed scholarly literature. Online math solver with free step-by-step solutions to algebra, calculus, and other math problems. Math is omnipresent in our everyday world. The region is said to be bounded when the graph of a system of constraints is a polygonal region. In the virtual world of Google Earth, concepts and challenges can be presented in a meaningful way that portrays the usefulness of the ideas; Real World Math is a collection of free math activities for Google Earth designed for students and educators. Solved homework answers available. The bolded solution is the winner. So, from equations to word problems, for a math operation to be worth solving, the need for a solution must have real-life implications to which students can relate; however, all too often, students see math as a purely academic subject that does not extend beyond quizzes and homework. The Mathematics 1 course, often taught in the 9th grade, covers linear equations, inequalities, functions, and graphs; systems of equations and inequalities; extension of the concept of a function; exponential models; introductory statistics; and geometric transformations and congruence. Khan Academy's Mathematics 1 course is built to deliver a comprehensive, illuminating, engaging, and Common Core aligned experience. Trinity Admissions Test Solutions; GCSE Edexcel Maths Paper 1 Higher, 21 May 2019, Unofficial Mark Scheme; Edexcel AS Maths Statistics and Mechanics, 22/05/2019, Unofficial Mark Scheme; STEP 2006 Solutions Thread; STEP I, II, III 2000 solutions. The Mathematics Department of the Rutgers School of Arts and Sciences is one of the oldest mathematics departments in the United States, graduating its first major in 1776; the undergraduate program enrolls approximately twenty-three thousand students each year and counts five hundred majors, while the doctoral program covers all areas of pure and applied mathematics. Practical Geometry; Data Handling; Squares and Square Roots. To score good marks in tests and class exams, a student has to go through all the exercises of the NCERT textbook solutions. Provide high-impact teaching and personalize instruction with Full Access for Mathematics. THE CALCULUS PAGE PROBLEMS LIST: problems and solutions developed by D. A. Kouba. With the group stages of World Cup 2018 drawing to a close, I was wondering what the possible scores were attainable in each group.

If you purchased a MATHCOUNTS competition through the MATHCOUNTS online store, you can contact [email protected]; purchase past years' MATHCOUNTS competitions, as well as national-level competitions, through the MATHCOUNTS online store. Or, use the math worksheet generators to create on-demand math worksheets for your elementary, kindergarten, middle, or high school math classes. Instructional math help video lessons, online and on CD. Improve your math knowledge with free questions in "Solutions to inequalities" and thousands of other math skills. In this paper, the stability of Ulam–Hyers and existence of solutions for semi-linear time-delay systems with linear impulsive conditions are studied. CBSE 6th class maths textbook solutions include answers to all questions except the ones which are no longer in the syllabus. On these pages you will find Springer's journals, books and eBooks in all areas of Mathematics, serving researchers, lecturers, students, and professionals; we publish many of the most prestigious journals in Mathematics, including a number of fully open access journals. Free math worksheets for teachers, parents, students, and home schoolers. Go4Guru can guarantee the same tutor for the students. Volume: the space occupied by an object. Solution space: the space of solutions to a system of equations. The problem that each Std 12 student faces in their academic life is how to prepare for the maths board exam to gain those extra points and achieve a top rank in their batch. This section is a collection of lessons, calculators, and worksheets created to assist students and teachers of algebra. A solution is always transparent: light passes through with no scattering from solute particles, which are molecular in size.

Supported by the Actuarial Profession, the Canadian Mathematical Olympiad (CMO) is Canada's premier national advanced mathematics competition; candidates require an invitation from the Canadian Mathematical Society in order to participate. Monthly Topics and New on Mathwire. Hundreds of free online math games teach multiplication, fractions, addition, problem solving and more; we have been providing Math Games for all grade levels for over 12 years. Usually, teachers and professors always evaluate a math problem from this perspective; clearly, the solutions to math and statistics problems need proper presentation, and the presentation of the solutions caters to a variety of learning styles, allowing for flexibility in approaches to solving problems. Math glossary with math definitions and examples; covers arithmetic, algebra, geometry, calculus and statistics. You will need to get assistance from your school if you are having problems entering the answers into your online assignment. The resources emphasize understanding of concepts instead of mechanical memorization of rules. • Page 274: Explore the solution using GeoGebra.

Symbolab: equation search and math solver; solves algebra, trigonometry and calculus problems step by step. Our Actuarial Science Program has been designated a Center of Actuarial Excellence by the Society of Actuaries. A national middle school contest blends math, creativity, art and technology and challenges students to produce a video solving a math problem in a real-world setting; a national middle school mathematics enrichment program gives educators the resources and guidance needed to run math clubs in schools and other groups. We are here to assist you with your math questions. Find f′(x). Our Class 10 mathematics experts have explained and solved all the doubts and questions from the CBSE syllabus; it is always recommended to study the NCERT books, as they cover the whole syllabus. Here is a set of notes used by Paul Dawkins to teach his Algebra course at Lamar University. Students can download Chapter Probability Class 12 Maths Solutions in both Hindi medium and English medium here. Statewide residential magnet school for students with a strong aptitude and interest in math and science. Puzzles 1 to 10; give your brain a workout! Thank you for visiting Figure This! Math Challenges for Families. Here's a nice explanation submitted by Nigel Mudd from Leeds Grammar School: if you swap, you win when your initial guess was wrong, probability 2/3.
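That 2/3 is the classic three-doors (Monty Hall) result, and it is easy to check numerically. A minimal simulation sketch (ours; the function name and trial count are arbitrary):

```python
import random

def switch_win_rate(trials: int = 100_000) -> float:
    """Estimate the probability of winning the Monty Hall game by switching."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # The host opens a goat door; switching then wins
        # exactly when the first pick was wrong.
        wins += (pick != car)
    return wins / trials

print(switch_win_rate())  # prints approximately 0.667, i.e. 2/3
```

Switching wins exactly when the first pick was wrong, which happens with probability 2/3; the simulation just estimates that frequency.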
A debt of gratitude is owed to the dedicated staff who created and maintained the top math education content and community forums that made up the Math Forum since its inception; the Math Forum has a rich history as an online hub for the mathematics education community. In Math-Only-Math you'll find an abundant selection of all types of math questions for all the grades, with complete step-by-step solutions; parents and teachers can follow Math-Only-Math to help their students improve and polish their knowledge. Digital math centers and activities are the perfect solution! Using digital math centers can give you all of the same features of math centers (engaging, interactive) but without all the prep, and you can easily differentiate your centers with a few clicks on your computer. Meet Mathscot, the Math League mascot. Advanced Problems in Mathematics book. Advanced Mathematics Matsec Solutions. So, if you are looking for NCERT solutions for Class 9 Maths, you have landed in the perfect place; NCERT Solutions for Class 9 Maths are solved by expert teachers and provide you a strong foundation in the subject. Get Free NCERT Solutions for Class 7 Maths PDF; all Chapters Class 7 Maths NCERT Solutions were prepared according to CBSE (NCERT) guidelines. Active Maths 3 Solutions: Chapter 5, Algebra 2; Chapter 6, Functions; Chapter 9, Calculus. Model your word problems, draw a picture, and organize information! Join a community of 14,500+ applied mathematicians and computational scientists worldwide. Math Worksheets Listed By Specific Topic and Skill Area. Weekly workbooks for K-8; the homework site for teachers! Just search for "KC Sinha math solution class 11th pdf" on Google and you can download it directly. Higher Maths 2014 Solutions; thanks to Invergordon Academy for providing the handwritten solutions from 2005-2011 and to Broxburn Academy for providing solutions from 2000-2004; pupils can find on the Maths pupil server the complete PowerPoint of Past Paper questions and solutions. Hope this helps! For other things promised in the book (like the Quickie Sheet, etc.) go to the Extras page. 10 Quickies Worksheets. Homework For Second Grade. Veganarto: 4th Grade Word Problems. Launched in 1984 and owned by Scholastic, Math Solutions provides professional development services, products, and resources for K-12 educators in the United States and Canada; more information about Math Solutions is available at mathsolutions.com. By Secondary Math Solutions: a foldable that covers solving one-step equations, two-step equations, equations containing the distributive property, and equations with combining like terms. Technical Mathematics Solutions for Acceleration, Redesign & Readiness: are you trying to accelerate your students' progress through the developmental curriculum? Questions concerning proposals and/or solutions can be sent by e-mail to [email protected]. Honours (Adv Maths) student Michela Castagnone got her first taste of uni life as a high school student at our Girls Do the Maths workshop. UNSW School Mathematics Competition: past competition questions, solutions and winners are available online via Parabola Magazine. Current textbooks and solution manuals MAY NOT be taken home and must be returned on the day borrowed, by the Math Lab's closing time. Manifolds provide mathematicians and other scientists with a way of grappling with the concept of "space" from a global viewpoint. It explores major themes of mathematics, from humankind's earliest study of prime numbers to the cutting-edge mathematics used to reveal the shape of the universe. Tutoring in Mathematics A and B, Chemistry, Physics and Biology for VWO, HAVO and VSBO, and years 1 and 2 of MTS. Infeld, Morrill Press, 1941.

If an angle is half of its complementary angle, then find its degree measure. Robert Morse and Eric Brooks, April 20, 2020. Whether you are a classroom teacher, a school district or state education administrator, a publisher, an assistive technology vendor, a parent, or a person with a disability, there are things you can do to help make math accessible. A comprehensive math resource site for homeschooling parents and teachers that includes free math worksheets, lessons, online math game lists, ebooks, a curriculum guide, reviews, and more. Common Core Quick Tips. NCERT Solutions for Class 9 Maths are provided here to help you solve difficult Class 9 Maths problems and understand the concepts behind every question, so you can solve those problems with ease. Make calculations for circle, parallelogram, rectangle, square, trapezoid, right circular cone, right circular cylinder, pyramid, rectangular solid, and sphere geometric formulas. Mathematics resources for children, parents and teachers to enrich learning. Grades K-6: school direct online catalog and store, Houghton Mifflin Math kids' place, Houghton Mifflin Math parents' place, eBooks. We have shown that

$$(af + bg)(x+h) - (af + bg)(x) = \big(af'(x) + bg'(x)\big)h + r.$$

Let A = 2x and B = x; then, using $\cos(A+B) = \cos A\cos B - \sin A\sin B$,

$$\cos 3x = \cos 2x\cos x - \sin 2x\sin x = 2\cos^3 x - \cos x - 2\cos x + 2\cos^3 x = 4\cos^3 x - 3\cos x.$$

Often, solutions or examples involving the number 0 are considered trivial: for example, the equation x + 5y = 0 has the trivial solution x = 0, y = 0 (alongside infinitely many non-trivial ones, such as x = -5, y = 1). Developed by MIT graduates since 2003, MathScore helps students acquire a deep understanding of math by providing adaptive math practice that functions like self-guided lessons. All problems are based on STEM, Common Core standards and real-world applications for grades 3 to 12 and beyond. Houghton Mifflin Harcourt Online Store; Math Expressions Resources for Students; Math Expressions Resources for Families. Our solutions cover the entire 15 chapters, from number systems to linear equations, geometry to probability and statistics; these Class 10 Maths solutions have been prepared by the expert faculty of Studyrankers, who have long experience in teaching Maths and have successfully helped students in cracking the exam. In elementary mathematics, a term is either a single number or variable, or the product of several numbers or variables; for example, in 3 + 4x + 5yzw, the three terms are 3, 4x, and 5yzw. The solution to this is found with the quadratic formula. So our formula for the golden ratio above ($B^2 - B^1 - B^0 = 0$, i.e. $B^2 - B - 1 = 0$) can be expressed as $1\cdot B^2 - 1\cdot B^1 - 1\cdot B^0 = 0$, a quadratic with coefficients $a = 1$, $b = -1$, $c = -1$.
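Completing that computation (our own step, applying the quadratic formula):

$$B = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{1 \pm \sqrt{1 + 4}}{2} = \frac{1 \pm \sqrt{5}}{2},$$

and the positive root is $\Phi = \frac{1+\sqrt{5}}{2} = 1.6180339\ldots$, matching the value quoted earlier on the page.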
Primary 5 maths: here is a list of all of the maths skills students learn in primary 5! These skills are organised into categories, and you can move your mouse over any skill name to preview the skill; to start practising, just click on any link. This publication covers solutions to SEA Mathematics Past Papers for the period 2009-2019. Schedule classes are also available. One Solution for Multiple Environments: Intel® Math Kernel Library (Intel® MKL) optimizes code with minimal effort for future generations of Intel® processors. Learn from detailed step-by-step explanations: get walked through each step of the solution to know exactly what path gets you to the right answer. The more you do, the more it will enable you to do others, even if it doesn't make them easy. Find helpful math lessons, games, calculators, and more. Here we have given CBSE Sample Papers for Class 10 Maths Paper 6. This Saxon Math Homeschool 6/5 Solutions Manual provides answers for all problems in the textbook lessons (including warm-up, lesson practice, and mixed practice exercises), as well as solutions for the investigations and supplemental practice found in the back of the student text. This app is very easy to use.

EQUATIONS CONTAINING ABSOLUTE VALUE(S) - Solve for x in the following equations. Book: Math Kangaroo Levels 1&2, Questions and Solutions from Years 2005-2017. How to Write the Solution: according to the triangle inequality, the sum of any two sides of a triangle is at least as great as the length of the third side. Business Math Lessons, Problems and Exercises. Methodology: 2020 Best STEM High Schools. MathsOnline - Australia's #1 Online Maths Teacher; short lessons to help you learn and revise to get you the grade you deserve. Statistics Class 10 Extra Questions, Maths Chapter 14, with Solutions Answers. Use step-by-step calculators for chemistry, calculus, algebra, trigonometry, equation solving, basic math and more.

Thus, we have the inequalities. 5 model papers completely solved as per the new paper pattern 2020. Reception maths worksheets, printable. In the Selina Maths Class 9 solutions, Chapter 10, Isosceles Triangle, you will also find answers proving that a given triangle is an isosceles triangle, with an accurate explanation. This is a collection of Singapore Primary 3 Maths practice questions. NCERT Solutions for Class 10 Maths is prepared by the highly experienced academic team of Entrancei. The solutions are done according to the pattern and syllabus of NCERT. KSEEB Solutions for Class 6 Maths Pdf Free Download, in English Medium and Kannada Medium, of the 6th Standard Karnataka Maths Textbook: answers guide, textbook questions and answers, notes pdf, model question papers with answers, and study material. KEYWORDS: Course materials. There is an analogous formula for polynomials of degree three: the solution of $ax^3 + bx^2 + cx + d = 0$ is far longer than the quadratic formula. For a quadratic $ax^2 + bx + c = 0$, the discriminant $b^2 - 4ac$ determines the number of real solutions: if $b^2 - 4ac > 0$ you have 2 real solutions; if $b^2 - 4ac = 0$ you have just 1 real solution (a double root). So consider $5x^2 + 8x + 7 = 0$, with $a = 5$, $b = 8$, and $c = 7$.
Technology based learning: A license of leading software's which are dynamic, interactive, versatile and user friendly.
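A worked version of the golden-ratio line above (standard algebra, added here for illustration): applying the quadratic formula to $B^2 - B - 1 = 0$, with $a=1$, $b=-1$, $c=-1$, gives

$$B = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{1 \pm \sqrt{5}}{2},$$

and the positive root is the golden ratio $\varphi = \frac{1+\sqrt{5}}{2} \approx 1.618$.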
2020-10-20 05:28:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18939362466335297, "perplexity": 3111.4800025979657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107869933.16/warc/CC-MAIN-20201020050920-20201020080920-00504.warc.gz"}
https://dml.cz/handle/10338.dmlcz/118804
# Article

Keywords: normal integrand; Carathéodory function

Summary: Let $D\subset T\times X$, where $T$ is a measurable space, and $X$ a topological space. We study inclusions between three classes of extended real-valued functions on $D$ which are upper semicontinuous in $x$ and satisfy some measurability conditions.

References:
[1] Ash R.B.: Real Analysis and Probability. Academic Press, New York, 1972. MR 0435320
[2] Berliocchi H., Lasry J.-M.: Intégrandes normales et mesures paramétrées en calcul de variations. Bull. Soc. Math. France 101 (1973), 129-184. MR 0344980
[3] Burgess J., Maitra A.: Nonexistence of measurable optimal selections. Proc. Amer. Math. Soc. 116 (1992), 1101-1106. MR 1120505 | Zbl 0767.28010
[4] Christensen J.P.R.: Topology and Borel Structure. North Holland, Amsterdam, 1974. MR 0348724 | Zbl 0273.28001
[5] Himmelberg C.J.: Measurable relations. Fund. Math. 87 (1975), 53-72. MR 0367142 | Zbl 0296.28003
[6] Kucia A.: Some counterexamples for Carathéodory functions and multifunctions. Submitted to Fund. Math.
[7] Kucia A., Nowak A.: On Baire approximations of normal integrands. Comment. Math. Univ. Carolinae 30:2 (1989), 373-376. MR 1014136 | Zbl 0685.28001
[8] Kucia A., Nowak A.: Relations among some classes of functions in mathematical programming. Mat. Metody Sots. Nauk 22 (1989), 29-33. MR 1111399 | Zbl 0742.49009
[9] Levin V.L.: Measurable selections of multivalued mappings into topological spaces and upper envelopes of Carathéodory integrands (in Russian). Dokl. Akad. Nauk SSSR 252 (1980), 535-539; English transl.: Sov. Math. Dokl. 21 (1980), 771-775. MR 0577834
[10] Levin V.L.: Convex Analysis in Spaces of Measurable Functions and its Applications to Mathematics and Economics (in Russian). Nauka, Moscow, 1985. MR 0809179
[11] Pappas G.S.: An approximation result for normal integrands and applications to relaxed controls theory. J. Math. Anal. Appl. 93 (1983), 132-141. MR 0699706 | Zbl 0521.49012
[12] Rockafellar R.T.: Integral functionals, normal integrands and measurable selections. In: Nonlinear Operators and Calculus of Variations (L. Waelbroeck, ed.), Lecture Notes in Mathematics 543, Springer, Berlin, 1976, pp. 157-207. MR 0512209 | Zbl 0374.49001
[13] Schäl M.: A selection theorem for optimization problems. Arch. Math. 25 (1974), 219-224. MR 0346632
[14] Wagner D.H.: Survey of measurable selection theorems. SIAM J. Control 15 (1977), 859-903. MR 0486391 | Zbl 0407.28006
[15] Zygmunt W.: Scorza-Dragoni property (in Polish). UMCS, Lublin, 1990.
2017-12-14 19:07:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9699814915657043, "perplexity": 3314.4155411755432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948550199.46/warc/CC-MAIN-20171214183234-20171214203234-00696.warc.gz"}
https://math.libretexts.org/Bookshelves/Applied_Mathematics/Book%3A_College_Mathematics_for_Everyday_Life_(Inigo_et_al)/04%3A_Growth
# 4: Growth

Population growth is a current topic in the media today. The world population is growing by over 70 million people every year. Predicting populations in the future can have an impact on how countries plan to manage resources for more people. The tools needed to help make predictions about future populations are growth models like the exponential function. This chapter will discuss real-world phenomena, like population growth and radioactive decay, using three different growth models. The growth functions to be examined are linear, exponential, and logistic growth models. Each type of model is used when data behaves in a specific way and for different types of scenarios. Data that grows by the same amount in each iteration uses a different model than data that increases by a percentage.

Thumbnail: False-color time-lapse video of an E. coli colony growing on a microscope slide. This growth can be modeled with a first-order logistic equation. An approximate scale bar was added, based on the approximate 2.0 μm length of E. coli bacteria. (CC BY-SA 4.0 International; Stewart EJ, Madden R, Paul G, Taddei F).

4: Growth is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Maxie Inigo, Jennifer Jameson, Kathryn Kozak, Maya Lanzetta, & Kim Sonier via source content that was edited to conform to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
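A minimal sketch of the three growth models named above, in Python (an illustrative addition, not part of the LibreTexts chapter; the parameter values are made up):

```python
import math

def linear(p0, d, t):
    """Linear growth: the population changes by the same amount d each period."""
    return p0 + d * t

def exponential(p0, r, t):
    """Exponential growth: the population changes by the same percentage r each period."""
    return p0 * (1 + r) ** t

def logistic(k, p0, r, t):
    """Logistic growth: roughly exponential at first, then levels off near the carrying capacity k."""
    a = (k - p0) / p0
    return k / (1 + a * math.exp(-r * t))

# Compare the models for a starting population of 100 over 30 periods.
for t in (0, 10, 20, 30):
    print(t, linear(100, 5, t),
          round(exponential(100, 0.05, t), 1),
          round(logistic(500, 100, 0.05, t), 1))
```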
2022-05-24 05:52:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4625469744205475, "perplexity": 1116.0157422544444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662564830.55/warc/CC-MAIN-20220524045003-20220524075003-00382.warc.gz"}
https://www.advanceduninstaller.com/VirtualDrive-Pro-82dc26f58c9d2c86b56c494db880f4be-application.htm
# VirtualDrive Pro

## A guide to uninstall VirtualDrive Pro from your system

This page is about VirtualDrive Pro for Windows. Below you can find details on how to remove it from your computer. The Windows version was developed by FarStone Technology Inc. Further information on FarStone Technology Inc. can be seen here. Click on the link to get more information about VirtualDrive Pro on FarStone Technology Inc.'s website. Usually the VirtualDrive Pro application is installed in the C:\Program Files\FarStone\VirtualDrive directory, depending on the user's option during install. The full command line for removing VirtualDrive Pro is C:\Program Files\FarStone\VirtualDrive\Setup.exe. Note that if you type this command in Start / Run, you may receive a notification asking for administrator rights. The application's main executable file is labeled VDMain.exe and it has a size of 20.00 KB (20480 bytes).

The executables below are part of VirtualDrive Pro. They occupy an average of 4.18 MB (4383271 bytes) on disk.
• CheckVersion.exe (52.00 KB)
• DrvDisable64.exe (112.50 KB)
• EvalBrowse.exe (82.52 KB)
• fsreg.exe (52.05 KB)
• FSXDCommon.exe (42.52 KB)
• Regsvr32.exe (16.50 KB)
• ResUnist.exe (36.00 KB)
• Setup.exe (86.52 KB)
• UIFrame.exe (82.59 KB)
• UpdateFiles.exe (44.00 KB)
• VDMain.exe (20.00 KB)
• VDrive.exe (84.00 KB)
• vdtask.exe (162.59 KB)
• WebReg.exe (196.00 KB)
• Building.exe (68.00 KB)
• Burning.exe (48.00 KB)
• DVDCreator.exe (600.00 KB)
• Retriever.exe (136.00 KB)
• Start.exe (2.14 MB)
• inVHDDrvExe.exe (32.00 KB)
• RDTask.exe (104.00 KB)
• unVHDDrvExe.exe (36.00 KB)

The current page applies to VirtualDrive Pro version 12.2 alone.

If you are manually uninstalling VirtualDrive Pro, we recommend you verify whether the following data has been left behind on your PC.
Folders that were found:
• C:\Program Files\FarStone\VirtualDrive

Files remaining:
• C:\Program Files\FarStone\VirtualDrive\VHD\FsLodLib.dll
• C:\Program Files\FarStone\VirtualDrive\VHD\inVHDDrvExe.exe
• C:\Program Files\FarStone\VirtualDrive\VHD\Logo.dll
• C:\Program Files\FarStone\VirtualDrive\VHD\ProdVer.dat
• C:\Program Files\FarStone\VirtualDrive\VHD\RAMDrive_RC.dll
• C:\Program Files\FarStone\VirtualDrive\VHD\RamDriveFiles.dat
• C:\Program Files\FarStone\VirtualDrive\VHD\RamDriverSys.dll
• C:\Program Files\FarStone\VirtualDrive\VHD\RDrv2KInterface.dll
• C:\Program Files\FarStone\VirtualDrive\VHD\RDrvInterface.dll
• C:\Program Files\FarStone\VirtualDrive\VHD\RDrvRpr.dll
• C:\Program Files\FarStone\VirtualDrive\VHD\setup.sys
• C:\Program Files\FarStone\VirtualDrive\VHD\unVHDDrvExe.exe
• C:\Program Files\FarStone\VirtualDrive\VHD\VHDCom.dll
• C:\Program Files\FarStone\VirtualDrive\VProdInfo.dll
• C:\Program Files\FarStone\VirtualDrive\WebReg.exe
• C:\Program Files\FarStone\VirtualDrive\WebRegRc.dll
• C:\Program Files\FarStone\VirtualDrive\WNASPI32.DLL
• C:\Program Files\FarStone\VirtualDrive\XDriveRC.dll

Use regedit.exe to manually remove from the Windows Registry the keys below:
• HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall\{EEE22184-B53C-4B87-9F5B-53638160B966}

Open regedit.exe in order to delete the following registry values:
• HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Folders\C:\Windows\Installer\{EEE22184-B53C-4B87-9F5B-53638160B966}\

## How to remove VirtualDrive Pro with the help of Advanced Uninstaller PRO

VirtualDrive Pro is a program by the software company FarStone Technology Inc. Frequently, users try to remove this application. This is hard because doing it by hand takes some skill regarding Windows internals. One of the easiest ways to remove VirtualDrive Pro is to use Advanced Uninstaller PRO. Here is how to do this:
1. If you don't have Advanced Uninstaller PRO on your Windows system, install it. This is a good step because Advanced Uninstaller PRO is a very potent uninstaller and all-around tool to optimize your Windows system.
2. Start Advanced Uninstaller PRO. Take your time to get familiar with the program's interface and the wealth of tools available. Advanced Uninstaller PRO is a very useful Windows optimizer.
3. Click on the General Tools category.
4. Click on the Uninstall Programs button.
5. All the applications installed on your computer will be made available to you.
6. Scroll the list of applications until you find VirtualDrive Pro, or simply click the Search feature and type in "VirtualDrive Pro". If it is installed on your PC, the VirtualDrive Pro program will be found automatically. When you select VirtualDrive Pro in the list of programs, the following information regarding the application is made available to you:
• Safety rating (in the lower left corner). The star rating explains the opinion other people have regarding VirtualDrive Pro, ranging from "Highly recommended" to "Very dangerous".
• Opinions by other people - click on the Read reviews button.
• Details regarding the program you are about to remove, by clicking on the Properties button. For instance you can see that for VirtualDrive Pro:
• The web site of the application is: http://www.farstone.com
• The uninstall string is: C:\Program Files\FarStone\VirtualDrive\Setup.exe
7. Click the Uninstall button. A window asking you to confirm will show up.
Accept the removal by clicking the Uninstall button. Advanced Uninstaller PRO will then remove VirtualDrive Pro.
8. After uninstalling VirtualDrive Pro, Advanced Uninstaller PRO will offer to run an additional cleanup. Click Next to perform the cleanup. All the items of VirtualDrive Pro which have been left behind will be found, and you will be asked if you want to delete them. By removing VirtualDrive Pro using Advanced Uninstaller PRO, you are assured that no registry items, files or directories are left behind on your computer. Your PC will remain clean, speedy and ready to take on new tasks.

## Disclaimer

The text above is not a piece of advice to uninstall VirtualDrive Pro by FarStone Technology Inc. from your PC, nor are we saying that VirtualDrive Pro by FarStone Technology Inc. is not a good application for your computer. This page only contains detailed instructions on how to uninstall VirtualDrive Pro in case you decide this is what you want to do. The information above contains registry and disk entries that Advanced Uninstaller PRO stumbled upon and classified as "leftovers" on other users' computers.

2016-07-17 / Written by Daniel Statescu for Advanced Uninstaller PRO
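If you prefer to script the registry check described above, here is a hedged sketch in Python using the standard-library winreg module (my own illustration, not Advanced Uninstaller PRO's code); run it on Windows, with administrator rights, and only after confirming the key really is a leftover:

```python
import winreg

# The leftover uninstall key listed above.
LEFTOVER = (r"Software\Microsoft\Windows\CurrentVersion\Uninstall"
            r"\{EEE22184-B53C-4B87-9F5B-53638160B966}")

try:
    # Open the key first to confirm it still exists.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, LEFTOVER):
        pass
    # DeleteKey only removes a key that has no subkeys, which is the safe case here.
    winreg.DeleteKey(winreg.HKEY_LOCAL_MACHINE, LEFTOVER)
    print("Leftover key removed.")
except FileNotFoundError:
    print("Key not present; nothing to clean up.")
except PermissionError:
    print("Insufficient rights; re-run as administrator.")
```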
2020-09-19 02:30:01
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8035638332366943, "perplexity": 14551.416365479681}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400189928.2/warc/CC-MAIN-20200919013135-20200919043135-00019.warc.gz"}
https://www.physicsforums.com/threads/explain-solve-the-matching-problem-in-the-simplest-terms.332341/
# Explain/solve the Matching Problem in the simplest terms 1. Aug 24, 2009 ### redphoton Explain/solve the "Matching Problem" in the simplest terms How would you explain and solve the "Matching Problem" to a HS math club? give the simplest explanation and solution you know. Last edited: Aug 24, 2009 2. Aug 24, 2009 ### Dragonfall Re: Explain/solve the "Matching Problem" in the simplest terms You mean the stable matching problem, from graph theory? 3. Aug 24, 2009 ### redphoton Re: Explain/solve the "Matching Problem" in the simplest terms matching problem like the classic: "An absent-minded secretary prepares n letters and envelopes to send to n different people, but then randomly stuffs the letters into the envelopes. A match occurs if a letter is inserted in the proper envelope. Find the probability a match happens." Last edited: Aug 24, 2009 4. Aug 24, 2009 ### mXSCNT Re: Explain/solve the "Matching Problem" in the simplest terms There are many different problems involving matching one set against another. 5. Aug 25, 2009 ### Dragonfall Re: Explain/solve the "Matching Problem" in the simplest terms If P is the probability that a match DOESN'T happen, then 1-P is the probability that a match happens. Is this what you want?
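For reference, the standard closed form the thread is circling (a textbook inclusion–exclusion result, added here for completeness; it was not posted by the participants): the probability that none of the $n$ letters lands in its own envelope is

$$P(\text{no match}) = \frac{D_n}{n!} = \sum_{k=0}^{n}\frac{(-1)^k}{k!},$$

where $D_n$ is the number of derangements of $n$ items, so

$$P(\text{at least one match}) = 1 - \sum_{k=0}^{n}\frac{(-1)^k}{k!} \;\longrightarrow\; 1 - \frac{1}{e} \approx 0.632 \quad (n\to\infty).$$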
2018-07-23 16:43:53
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8224910497665405, "perplexity": 6444.888992666741}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676596542.97/warc/CC-MAIN-20180723145409-20180723165409-00134.warc.gz"}
http://www.atractor.pt/mat/conchas/conchaphimuom-_en.html
## Shells ### Joint variation of parameters (3) 3. Rotation angles of the ellipse: $$\phi$$, $$\mu$$ and $$\Omega$$
2018-11-20 07:47:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3983709514141083, "perplexity": 2483.6224289724705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746301.92/warc/CC-MAIN-20181120071442-20181120093442-00033.warc.gz"}
https://zbmath.org/?q=an:1225.35088
# zbMATH — the first resource for mathematics ##### Examples Geometry Search for the term Geometry in any field. Queries are case-independent. Funct* Wildcard queries are specified by * (e.g. functions, functorial, etc.). Otherwise the search is exact. "Topological group" Phrases (multi-words) should be set in "straight quotation marks". au: Bourbaki & ti: Algebra Search for author and title. The and-operator & is default and can be omitted. Chebyshev | Tschebyscheff The or-operator | allows to search for Chebyshev or Tschebyscheff. "Quasi* map*" py: 1989 The resulting documents have publication year 1989. so: Eur* J* Mat* Soc* cc: 14 Search for publications in a particular source with a Mathematics Subject Classification code (cc) in 14. "Partial diff* eq*" ! elliptic The not-operator ! eliminates all results containing the word elliptic. dt: b & au: Hilbert The document type is set to books; alternatively: j for journal articles, a for book articles. py: 2000-2015 cc: (94A | 11T) Number ranges are accepted. Terms can be grouped within (parentheses). la: chinese Find documents in a given language. ISO 639-1 language codes can also be used. ##### Operators a & b logic and a | b logic or !ab logic not abc* right wildcard "ab c" phrase (ab c) parentheses ##### Fields any anywhere an internal document identifier au author, editor ai internal author identifier ti title la language so source ab review, abstract py publication year rv reviewer cc MSC code ut uncontrolled term dt document type (j: journal article; b: book; a: book article) A positive solution for an asymptotically linear elliptic problem on $\Bbb R^{N}$ autonomous at infinity. (English) Zbl 1225.35088 Summary: We establish the existence of a positive solution for an asymptotically linear elliptic problem on $\Bbb R^N$. The main difficulties to overcome are the lack of a priori bounds for Palais-Smale sequences and a lack of compactness as the domain is unbounded. For the first one we make use of techniques introduced by Lions in his work on concentration compactness. For the second we show how the fact that the “problem at infinity” is autonomous, in contrast to just periodic, can be used in order to regain compactness. ##### MSC: 35J60 Nonlinear elliptic equations 58E05 Abstract critical point theory Full Text: ##### References: [1] A. Ambrosetti and P.H. Rabinowitz , Dual variational methods in critical point theory and applications . J. Funct. Anal. 14 ( 1973 ) 349 - 381 . MR 370183 | Zbl 0273.49063 · Zbl 0273.49063 · doi:10.1016/0022-1236(73)90051-7 [2] H. Berestycki and P.L. Lions , Nonlinear scalar field equations I . Arch. Rational Mech. Anal. 82 ( 1983 ) 313 - 346 . MR 695535 | Zbl 0533.35029 · Zbl 0533.35029 [3] H. Berestycki , T. Gallouët and O. Kavian , Equations de Champs scalaires euclidiens non linéaires dans le plan . C. R. Acad. Sci. Paris Sér. I Math. 297 ( 1983 ) 307 - 310 . MR 734575 | Zbl 0544.35042 · Zbl 0544.35042 [4] H. Brezis , Analyse fonctionnelle . Masson ( 1983 ). MR 697382 | Zbl 0511.46001 · Zbl 0511.46001 [5] V. Coti Zelati and P.H. Rabinowitz , Homoclinic type solutions for a semilinear elliptic PDE on $\mathbb{R}^{N}$ . Comm. Pure Appl. Math. XIV ( 1992 ) 1217 - 1269 . MR 1181725 | Zbl 0785.35029 · Zbl 0785.35029 · doi:10.1002/cpa.3160451002 [6] I. Ekeland , Convexity methods in Hamiltonian Mechanics . Springer ( 1990 ). MR 1051888 | Zbl 0707.70003 · Zbl 0707.70003 [7] L. 
Jeanjean , On the existence of bounded Palais-Smale sequences and application to a Landesman-Lazer-type problem set on $\mathbb{R}^N$ . Proc. Roy. Soc. Edinburgh Sect. A 129 ( 1999 ) 787 - 809 . Zbl 0935.35044 · Zbl 0935.35044 · doi:10.1017/S0308210500013147 [8] P.L. Lions , The concentration-compactness principle in the calculus of variations . The locally compact case. Parts I and II. Ann. Inst. H. Poincaré Anal. Non Linéaire 1 ( 1984 ) 109 - 145 and 223 - 283 . Numdam | Zbl 0704.49004 · Zbl 0704.49004 · numdam:AIHPC_1984__1_4_223_0 · eudml:78074 [9] P.H. Rabinowitz , On a class of nonlinear Shrödinger equations . ZAMP 43 ( 1992 ) 270 - 291 . MR 1162728 | Zbl 0763.35087 · Zbl 0763.35087 · doi:10.1007/BF00946631 [10] C.A. Stuart , Bifurcation in $L^{p}(\mathbb{R}^{N})$ for a semilinear elliptic equation . Proc. London Math. Soc. 57 ( 1988 ) 511 - 541 . MR 960098 | Zbl 0673.35005 · Zbl 0673.35005 · doi:10.1112/plms/s3-57.3.511 [11] C.A. Stuart and H.S. Zhou , A variational problem related to self-trapping of an electromagnetic field . Math. Meth. Appl. Sci. 19 ( 1996 ) 1397 - 1407 . MR 1414401 | Zbl 0862.35123 · Zbl 0862.35123 · doi:10.1002/(SICI)1099-1476(19961125)19:17<1397::AID-MMA833>3.0.CO;2-B [12] C.A. Stuart and H.S. Zhou , Applying the mountain-pass theorem to an asymtotically linear elliptic equation on $\mathbb{R}^N$ . Comm. Partial Differential Equations 24 ( 1999 ) 1731 - 1758 . MR 1708107 | Zbl 0935.35043 · Zbl 0935.35043 · doi:10.1080/03605309908821481 [13] A. Szulkin and W. Zou , Homoclinic orbits for asymptotically linear Hamiltonian systems . J. Funct. Anal. 187 ( 2001 ) 25 - 41 . MR 1867339 | Zbl 0984.37072 · Zbl 0984.37072 · doi:10.1006/jfan.2001.3798
2016-05-05 23:50:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5880225300788879, "perplexity": 3622.1445163600274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461861700245.92/warc/CC-MAIN-20160428164140-00177-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.matrix.edu.au/beginners-guide-year-9-maths/?cta=guide_HSSG_year_9_text
# The Beginner’s Guide to Year 9 Maths Welcome to our Beginner's Guide to year 9 Maths. In this Guide, we'll show you the secrets to acing Maths. ## All about the Beginner’s Guide to Year 9 Maths Year 9 is an important year for Maths students. The concepts and skills learned in Year 9 are the foundations for the skills for the senior years of high school. Falling behind in Year 9 can make it really hard to catch back up. The Beginner’s Guide to Year 9 Maths is your resource for staying on top. We wrote the Beginner’s Guide to Year 9 Maths to help students learn and reinforce the core concepts they need to know for Year 9. Each article addresses the NESA syllabus Outcomes for the subject. These can be found here on the NESA website (Stage 5 is what students study in Years 9 & 10). ## How will this Guide help me? • As mentioned before, our goal in this guide is to build key foundations for students for their senior years. • Throughout each article, we’ve provided some worked examples so that you can see the application of the theory. We’ve also provided a number of questions for each subject. These will let you test your knowledge. • Finally, we’ve provided you with worked examples so that you can check your skills and understand your mistakes. • We examine the concepts behind the questions rather than pure calculations as we believe this will be beneficial to understanding mathematics rather than just a list of steps. This is the philosophy behind our Matrix method for Maths. ## What are the common issues faced by Year 9 students? There is a significant jump from Year 8 Maths to Year 9 Maths. Students may find it difficult to apply the concepts they learned in Year 8 to Year 9 because the level of Maths is much harder. Some common problems that students face are: • Difficulty adding and subtracting two algebraic fractions For example: Simplify $$\frac{a}{b}+\frac{b}{a}$$ A poor understanding of finding the Lowest Common Denominator would leave students unable to solve this question. In order to successfully solve the question, students must identify that both $$a$$ and $$b$$ must be in the denominator. The correct answer would be $$\frac{a^2+b^2}{ab}$$ • Difficulty converting numbers in small and large magnitude in scientific notation or across different units • Weak indices skills For example: Simplify $$4^x\times4^y$$ Many students forget that about their index laws where if the bases are the same we add the powers. They make the mistake of multiplying the bases and adding the powers, ending up with the answer $$16^{x+y}$$. The correct answer is actually $$4^{x+y}$$ • Poor understanding of straight line equations in different forms • Not knowing how to sketch simple linear relationships • Poor understanding of Highest Common Factor and Lowest Common Multiple • Unable to factorise algebraic expressions • Difficulty converting between smaller metric units • Limits of accuracy • Expanding binomial products involving surds ## Why do students struggle with Maths in Year 9? We’ve learned that many students struggle with Maths in year 9 because they take the wrong approach to learning and study. Here are some of the reasons that students have difficulty: • Students do not understand the basics of algebra – Instead, they rote learn methods for specific types of questions. For them, Mathematics becomes memory work instead of a logical puzzle game. 
• Students do not dedicate enough practice time to work on various types of questions – They may be actively involved in extra-curricular activities which make it harder to spend time working on Maths. When these students encounter unseen questions, they have no idea how to approach them. • Students do not have the patience for figuring out each question before referring to the solution for working steps. This means the essence of that question, and its learning opportunities, are lost through ‘referring’ to the solution. • Students have a lack of commitment towards their homework. Sometimes, students put off their homework because they are busy, don’t enjoy it, or just don’t see the importance of it. It is crucial that students develop a good habit of finishing their homework early because this gives them a chance to practice and refine their skills. • Students struggle to understand the specific language used. Students begin to lose motivation when they don’t understand what is going on. You can’t solve a problem if you don’t understand what it means. This is why it is important that students fully understand the definitions of mathematical terms. ## Develop your understanding and skills in Maths Now it is time to familiarise yourself with the content of this Guide. This is a resource that you should come back to consistently as you encounter the subjects at school during the year. First up, we’re going to discuss algebra and algebraic equations. © Matrix Education and www.matrix.edu.au, 2020. Unauthorised use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Matrix Education and www.matrix.edu.au with appropriate and specific direction to the original content.
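A worked version of the algebraic-fraction and index-law examples quoted in the "common issues" list earlier in this guide (standard steps, added here for illustration):

$$\frac{a}{b}+\frac{b}{a} = \frac{a\cdot a}{ab} + \frac{b\cdot b}{ab} = \frac{a^2+b^2}{ab},
\qquad
4^x \times 4^y = 4^{x+y}\ \text{(same base, so the indices add).}$$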
2020-09-23 15:46:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44381022453308105, "perplexity": 884.139793264839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400211096.40/warc/CC-MAIN-20200923144247-20200923174247-00566.warc.gz"}
http://farside.ph.utexas.edu/teaching/jk1/Electromagnetism/node32.html
Next: Axisymmetric Charge Distributions Up: Potential Theory Previous: Poisson's Equation in Spherical

# Multipole Expansion

Consider a bounded charge distribution $\rho({\bf r})$ that lies inside the sphere $r=a$. It follows that $\rho = 0$ in the region $r>a$. According to the previous three equations, the electrostatic potential in the region $r>a$ takes the form

$$\phi(r,\theta,\varphi) = \frac{1}{\epsilon_0}\sum_{l=0}^{\infty}\sum_{m=-l}^{+l}\frac{q_{lm}}{2l+1}\,\frac{Y_{lm}(\theta,\varphi)}{r^{\,l+1}},\qquad(339)$$

where the

$$q_{lm} = \int Y^{\,\ast}_{lm}(\theta',\varphi')\,r'^{\,l}\,\rho({\bf r}')\,d^3{\bf r}'\qquad(340)$$

are known as the multipole moments of the charge distribution $\rho({\bf r})$. Here, the integral is over all space. Incidentally, the type of expansion specified in Equation (339) is called a multipole expansion. The most important moments are those corresponding to $l=0$, $l=1$, and $l=2$, which are known as monopole, dipole, and quadrupole moments, respectively.

For each $l$, the multipole moments $q_{lm}$, for $m=-l$ to $+l$, form an $l$th-rank tensor with $2l+1$ components. However, Equation (310) implies that

$$q_{l,-m} = (-1)^m\,q^{\,\ast}_{lm}.\qquad(341)$$

Hence, only $l+1$ of these components are independent.

For $l=0$, there is only one monopole moment. Namely,

$$q_{00} = \frac{q}{\sqrt{4\pi}},\qquad(342)$$

where $q$ is the net charge contained in the distribution, and use has been made of Equation (312). It follows from Equation (339) that, at sufficiently large $r$, the charge distribution acts like a point charge $q$ situated at the origin. That is,

$$\phi(r) \simeq \frac{q}{4\pi\,\epsilon_0\,r}.\qquad(343)$$

By analogy with Equation (195), the dipole moment of the charge distribution is written

$${\bf p} = \int {\bf r}'\,\rho({\bf r}')\,d^3{\bf r}'.\qquad(344)$$

The three Cartesian components of this vector are

$$p_x = \int x'\,\rho({\bf r}')\,d^3{\bf r}',\qquad(345)$$
$$p_y = \int y'\,\rho({\bf r}')\,d^3{\bf r}',\qquad(346)$$
$$p_z = \int z'\,\rho({\bf r}')\,d^3{\bf r}'.\qquad(347)$$

On the other hand, the spherical components of the dipole moment take the form

$$q_{11} = -\sqrt{\frac{3}{8\pi}}\,(p_x - {\rm i}\,p_y),\qquad(348)$$
$$q_{10} = \sqrt{\frac{3}{4\pi}}\,p_z,\qquad(349)$$
$$q_{1,-1} = \sqrt{\frac{3}{8\pi}}\,(p_x + {\rm i}\,p_y),\qquad(350)$$

where use has been made of Equations (313)-(315). It can be seen that the three spherical dipole moments are independent linear combinations of the three Cartesian moments. The potential associated with the dipole moment is

$$\phi_{\rm dip}(r,\theta,\varphi) = \frac{1}{3\,\epsilon_0\,r^2}\sum_{m=-1}^{+1}q_{1m}\,Y_{1m}(\theta,\varphi).\qquad(351)$$

However, from Equations (313)-(315),

$$Y_{11}(\theta,\varphi) = -\sqrt{\frac{3}{8\pi}}\,\sin\theta\,{\rm e}^{\,{\rm i}\,\varphi},\qquad(352)$$
$$Y_{10}(\theta,\varphi) = \sqrt{\frac{3}{4\pi}}\,\cos\theta,\qquad(353)$$
$$Y_{1,-1}(\theta,\varphi) = \sqrt{\frac{3}{8\pi}}\,\sin\theta\,{\rm e}^{-{\rm i}\,\varphi}.\qquad(354)$$

Hence,

$$\phi_{\rm dip} = \frac{{\bf p}\cdot{\bf r}}{4\pi\,\epsilon_0\,r^3},\qquad(355)$$

in accordance with Equation (200). Note, finally, that if the net charge, $q$, contained in the distribution is non-zero then it is always possible to choose the origin of the coordinate system in such a manner that ${\bf p}={\bf 0}$.

The Cartesian components of the quadrupole tensor are defined

$$Q_{ij} = \int\left(3\,x_i'\,x_j' - r'^{\,2}\,\delta_{ij}\right)\rho({\bf r}')\,d^3{\bf r}',\qquad(356)$$

for $i$, $j$ $=$ $1$ to $3$. Here, $x_1'=x'$, $x_2'=y'$, and $x_3'=z'$. Incidentally, because the quadrupole tensor is symmetric (i.e., $Q_{ji}=Q_{ij}$) and traceless (i.e., $Q_{11}+Q_{22}+Q_{33}=0$), it only possesses five independent Cartesian components. The five spherical components of the quadrupole tensor take the form

$$q_{22} = \frac{1}{12}\sqrt{\frac{15}{2\pi}}\,(Q_{11} - 2\,{\rm i}\,Q_{12} - Q_{22}),\qquad(357)$$
$$q_{21} = -\frac{1}{3}\sqrt{\frac{15}{8\pi}}\,(Q_{13} - {\rm i}\,Q_{23}),\qquad(358)$$
$$q_{20} = \frac{1}{2}\sqrt{\frac{5}{4\pi}}\,Q_{33},\qquad(359)$$
$$q_{2,-1} = \frac{1}{3}\sqrt{\frac{15}{8\pi}}\,(Q_{13} + {\rm i}\,Q_{23}),\qquad(360)$$
$$q_{2,-2} = \frac{1}{12}\sqrt{\frac{15}{2\pi}}\,(Q_{11} + 2\,{\rm i}\,Q_{12} - Q_{22}).\qquad(361)$$

Moreover, the potential associated with the quadrupole tensor is

$$\phi_{\rm quad}(r,\theta,\varphi) = \frac{1}{8\pi\,\epsilon_0}\sum_{i,j=1}^{3}\frac{Q_{ij}\,x_i\,x_j}{r^5}.\qquad(362)$$

It follows, from the previous analysis, that the first three terms in the multipole expansion, (339), can be written

$$\phi(r,\theta,\varphi) \simeq \frac{1}{4\pi\,\epsilon_0}\left(\frac{q}{r} + \frac{{\bf p}\cdot{\bf r}}{r^3} + \frac{1}{2}\sum_{i,j=1}^{3}\frac{Q_{ij}\,x_i\,x_j}{r^5}\right).\qquad(363)$$

Moreover, at sufficiently large $r$, these are always the dominant terms in the expansion.

Richard Fitzpatrick 2014-06-27
2018-01-19 05:40:23
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8103877305984497, "perplexity": 709.9527470522478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887746.35/warc/CC-MAIN-20180119045937-20180119065937-00729.warc.gz"}
https://pypi.org/project/citextract/
# CiteXtract

CiteXtract - Bringing structure to the papers on ArXiv.

## Getting started

In order to install CiteXtract, run the following command:

pip install citextract

### Extracting references

Then, one can extract references from a text using the RefXtract model:

from citextract.models.refxtract import RefXtractor
refxtractor = RefXtractor().load()  # build the extractor; load() is assumed here to fetch the trained model
text = """This is a test sentence.\n[1] Jacobs, K. 2019. This is a test title. In Proceedings of Some Journal."""
refs = refxtractor(text)
print(refs)

It gives the following output:

['[1] Jacobs, K. 2019. This is a test title. In Proceedings of Some Journal.']

Under the hood, a trained neural network detects reference boundaries and extracts the references by using these boundaries.

### Extracting titles

Using the found references, titles can be extracted by using the TitleXtract model:

from citextract.models.titlextract import TitleXtractor
titlextractor = TitleXtractor().load()  # build the extractor; load() is assumed here to fetch the trained model
ref = """[1] Jacobs, K. 2019. This is a test title. In Proceedings of Some Journal."""
title = titlextractor(ref)
print(title)

It gives the following output:

'This is a test title.'

Here, a trained neural network extracts the titles from the given reference.

### Converting an arXiv PDF to text

There is a utility available which takes an arXiv URL and converts it to text:

from citextract.utils.pdf import convert_pdf_url_to_text
pdf_url = 'https://arxiv.org/pdf/some_file.pdf'
text = convert_pdf_url_to_text(pdf_url)
print(text)
2022-12-05 12:13:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.793090283870697, "perplexity": 12492.812510782285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711016.32/warc/CC-MAIN-20221205100449-20221205130449-00511.warc.gz"}
http://sci-gems.math.bas.bg/jspui/handle/10525/1273
Please use this identifier to cite or link to this item: http://hdl.handle.net/10525/1273
Title: Operational Calculi for the Euler Operator
Authors: Dimovski, Ivan; Skórnik, Krystyna
Keywords: Operational Calculus of Mikusiński and Heaviside; Euler Differential Operator; Duhamel Convolution; Nonlocal Boundary Value Problems; 44A40; 44A35
Issue Date: 2006
Publisher: Institute of Mathematics and Informatics, Bulgarian Academy of Sciences
Citation: Fractional Calculus and Applied Analysis, Vol. 9, No 2, (2006), 89p-100p
Abstract: A direct algebraic construction of a family of operational calculi for the Euler differential operator δ = t d/dt is proposed. It extends Mikusiński's approach to the Heaviside operational calculus for the case when the classical Duhamel convolution is replaced by the convolution ...
Description: 2000 Mathematics Subject Classification: 44A40, 44A35
URI: http://hdl.handle.net/10525/1273
ISSN: 1311-0454
Appears in Collections: 2006
2017-01-16 15:06:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24115464091300964, "perplexity": 4622.05694268344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279189.36/warc/CC-MAIN-20170116095119-00071-ip-10-171-10-70.ec2.internal.warc.gz"}
http://www.ck12.org/algebra/Zero-Product-Principle/lesson/Zero-Product-Principle-Intermediate/
# Zero Product Principle

What if you had a polynomial equation like $3x^2 + 4x - 4 = 0$? How could you factor the polynomial to solve the equation? After completing this Concept, you'll be able to solve polynomial equations by factoring and by using the zero-product property.

### Guidance

The most useful thing about factoring is that we can use it to help solve polynomial equations.

#### Example A

Consider an equation like $2x^2 + 5x - 42 = 0$. How do you solve for $x$?

Solution: There's no good way to isolate $x$ in this equation, so we can't solve it using any of the techniques we've already learned. But the left-hand side of the equation can be factored, making the equation $(x + 6)(2x - 7)=0$.

How is this helpful? The answer lies in a useful property of multiplication: if two numbers multiply to zero, then at least one of those numbers must be zero. This is called the Zero-Product Property.

What does this mean for our polynomial equation? Since the product equals 0, then at least one of the factors on the left-hand side must equal zero. So we can find the two solutions by setting each factor equal to zero and solving each equation separately.

Setting the factors equal to zero gives us:

$(x + 6) = 0 && \text{OR} && (2x - 7) = 0$

Solving both of those equations gives us:

$& x + 6 = 0 && && 2x - 7 =0\\& \underline{\underline{x = -6}} && \text{OR} && 2x = 7\\& && && \underline{\underline{x = \frac{7}{2}}}$

Notice that the solution is $x = -6$ OR $x = \frac{7}{2}$. The OR means that either of these values of $x$ would make the product of the two factors equal to zero. Let's plug the solutions back into the equation and check that this is correct.

$& Check: x = - 6; && Check: x = \frac{7}{2}\\& (x + 6)(2x - 7)= && (x + 6)(2x - 7)=\\& (-6 +6)(2(-6) -7)= && \left ( \frac{7}{2} + 6 \right ) \left (2 \cdot \frac{7}{2} - 7 \right ) =\\& (0)(-19) = 0 && \left (\frac{19}{2} \right ) (7 - 7) = \\& && \left (\frac{19}{2} \right ) (0) = 0$

Both solutions check out. Factoring a polynomial is very useful because the Zero-Product Property allows us to break up the problem into simpler separate steps. When we can't factor a polynomial, the problem becomes harder and we must use other methods that you will learn later. As a last note in this section, keep in mind that the Zero-Product Property only works when a product equals zero. For example, if you multiplied two numbers and the answer was nine, that wouldn't mean that one or both of the numbers must be nine. In order to use the property, the factored polynomial must be equal to zero.
#### Example B Solve each equation: a) $(x - 9)(3x + 4)=0$ b) $x(5x - 4) = 0$ c) $4x (x+6)(4x - 9)=0$ Solution Since all the polynomials are in factored form, we can just set each factor equal to zero and solve the simpler equations separately a) $(x - 9)(3x + 4) = 0$ can be split up into two linear equations: $& x - 9 = 0 && && 3x + 4 = 0\\& \underline{\underline{x = 9}} && \text{or} && 3x = -4\\& && && \underline{\underline{x = - \frac{4}{3}}}$ b) $x(5x - 4) = 0$ can be split up into two linear equations: $& && && 5x - 4 = 0\\& \underline{\underline{x = 0}} && \text{or} && 5x = 4\\& && && \underline{\underline{x = \frac{4}{5}}}$ c) $4x(x + 6)(4x - 9) =0$ can be split up into three linear equations: $& 4x = 0 && && x + 6 = 0 && && 4x - 9 =0\\& x= \frac{0}{4} && \text{or} && x = -6 && \text{or} && 4x = 9\\& \underline{\underline{x = 0}} && && && && \underline{\underline{x = \frac{9}{4}}}$ Solve Simple Polynomial Equations by Factoring Now that we know the basics of factoring, we can solve some simple polynomial equations. We already saw how we can use the Zero-Product Property to solve polynomials in factored form—now we can use that knowledge to solve polynomials by factoring them first. Here are the steps: a) If necessary, rewrite the equation in standard form so that the right-hand side equals zero. b) Factor the polynomial completely. c) Use the zero-product rule to set each factor equal to zero. d) Solve each equation from step 3. #### Example C Solve the following polynomial equations. a) $x^2 - 2x =0$ b) $2x^2 = 5x$ c) $9x^2 y - 6xy = 0$ Solution a) $x^2 - 2x = 0$ Rewrite: this is not necessary since the equation is in the correct form. Factor: The common factor is $x$ , so this factors as $x(x-2)=0$ . Set each factor equal to zero: $x = 0 && \text{or} && x - 2 = 0$ Solve: $\underline{x = 0} && \text{or} && \underline{x = 2}$ Check: Substitute each solution back into the original equation. $x & = 0 \Rightarrow (0)^2 - 2(0) = 0 && \text{works out}\\x & = 2 \Rightarrow (2)^2 - 2(2) = 4 - 4 = 0 && \text{works out}$ Answer: $x = 0, x = 2$ b) $2x^2 = 5x$ Rewrite: $2x^2 = 5x \Rightarrow 2x^2 - 5x = 0$ Factor: The common factor is $x$ , so this factors as $x(2x - 5)=0$ . Set each factor equal to zero: $x = 0 && \text{or} && 2x - 5 = 0$ Solve: $& \underline{x = 0} && \text{or} && 2x = 5\\& &&&& \underline{x = \frac{5}{2}}$ Check: Substitute each solution back into the original equation. $x & = 0 \Rightarrow 2(0)^2 = 5(0) \Rightarrow 0 = 0 && \text{works out}\\x & = \frac{5}{2} \Rightarrow 2 \left ( \frac{5}{2} \right )^2 = 5 \cdot \frac{5}{2} \Rightarrow 2 \cdot \frac{25}{4} = \frac{25}{2} \Rightarrow \frac{25}{2} = \frac{25}{2} && \text{works out}$ Answer: $x = 0, x =\frac{5}{2}$ c) $9x^2y - 6xy = 0$ Rewrite: not necessary Factor: The common factor is $3xy$ , so this factors as $3xy(3x - 2)=0$ . Set each factor equal to zero: $3 = 0$ is never true, so this part does not give a solution. The factors we have left give us: $x = 0 && \text{or} && y = 0 && \text{or} && 3x - 2 = 0$ Solve: $& \underline{x = 0} && \text{or} && \underline{y = 0} && \text{or} && 3x = 2\\& &&&& \underline{x = \frac{2}{3}}$ Check: Substitute each solution back into the original equation. 
$& x = 0 \Rightarrow 9(0)y - 6(0)y = 0 - 0 = 0 && \text{works out}\\& y = 0 \Rightarrow 9x^2 (0) - 6x(0) = 0 - 0 = 0 && \text{works out}\\& x = \frac{2}{3} \Rightarrow 9 \cdot \left ( \frac{2}{3} \right)^2 y - 6 \cdot \frac{2}{3} y = 9 \cdot \frac{4}{9} y - 4y = 4y - 4y = 0 && \text{works out}$ Answer: $x = 0, y = 0, x = \frac{2}{3}$

### Vocabulary

• Polynomials can be written in expanded form or in factored form. Expanded form means that you have sums and differences of different terms.
• The factored form of a polynomial means it is written as a product of its factors.
• Zero Product Property: The only way a product is zero is if one or more of the terms are equal to zero: $a\cdot b=0 \Rightarrow a=0 \text{ or } b=0.$

### Guided Practice

Solve the following polynomial equation. $9x^2-3x=0$

Solution: $9x^2-3x=0$

Rewrite: This is not necessary since the equation is in the correct form.

Factor: The common factor is $3x$, so this factors as: $3x(3x-1)=0$.

Set each factor equal to zero. $3x=0 && \text{or} && 3x-1=0$

Solve: $x=0 && \text{or} && x=\frac{1}{3}$

Check: Substitute each solution back into the original equation. $x=0 && 9(0)^2-3(0)=0\\x=\frac{1}{3} && 9\left(\frac{1}{3}\right)^2-3\left(\frac{1}{3}\right)=1-1=0$

Answer $x=0, \ x=\frac{1}{3}$

### Explore More

Solve the following polynomial equations.

1. $x(x + 12) = 0$
2. $(2x + 1)(2x - 1) = 0$
3. $(x - 5)(2x + 7)(3x - 4) = 0$
4. $2x(x + 9)(7x - 20) = 0$
5. $x(3 + y) = 0$
6. $x(x - 2y) = 0$
7. $18y - 3y^2 = 0$
8. $9x^2 = 27x$
9. $4a^2 + a = 0$
10. $b^2 - \frac{5}{3}b = 0$
11. $4x^2 = 36$
12. $x^3 - 5x^2 = 0$
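A quick way to check answers like the ones above (an illustrative addition, not part of the CK-12 lesson; assumes SymPy is installed):

```python
# Verify a few "Explore More" equations by the same factor-and-set-to-zero logic.
from sympy import symbols, solve

x = symbols('x')
print(solve(x * (x + 12), x))           # [-12, 0]
print(solve((2*x + 1) * (2*x - 1), x))  # [-1/2, 1/2]
print(solve(9*x**2 - 27*x, x))          # [0, 3]
```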
2014-10-23 18:46:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 67, "texerror": 0, "math_score": 0.7631284594535828, "perplexity": 409.30291383816797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558067214.90/warc/CC-MAIN-20141017150107-00221-ip-10-16-133-185.ec2.internal.warc.gz"}
https://hugocisneros.com/notes/optimal_control/
# Optimal control

tags: Applied maths resources

Book by Daniel Liberzon

## Optimal control problem

A typical optimal control problem starts with a control system $\dot{x} = f(t, x, u), \quad x(t_0) = x_0$ where $$x$$ is the state of the system, $$t$$ represents time, and $$u$$ is the control input. The goal of an OC problem is to minimize a cost functional of the form $J(u) := \int_{t_0}^{t_f}L(t, x(t), u(t))dt + K(t_f, x_f).$
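As a concrete instance of this template (an illustrative addition, not taken from Liberzon's book; the matrices $$A, B, Q, R, S$$ are standard LQR notation assumed here), the finite-horizon linear-quadratic problem takes $\dot{x} = Ax + Bu, \quad J(u) = \int_{t_0}^{t_f}\left(x^{\top}Q\,x + u^{\top}R\,u\right)dt + x_f^{\top}S\,x_f,$ i.e. $$L(t,x,u) = x^{\top}Qx + u^{\top}Ru$$ and $$K(t_f, x_f) = x_f^{\top}Sx_f$$.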
2022-07-02 21:31:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9453802704811096, "perplexity": 342.0562457418395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104204514.62/warc/CC-MAIN-20220702192528-20220702222528-00482.warc.gz"}
https://gmatclub.com/forum/how-many-four-digit-positive-integers-can-be-formed-by-using-133069.html
# How many four-digit positive integers can be formed by using

Intern
Joined: 04 Mar 2012
Posts: 39

How many four-digit positive integers can be formed by using  [#permalink]

### Show Tags

22 May 2012, 09:35
Difficulty: 85% (hard) Question Stats: 49% (02:06) correct 51% (02:13) wrong based on 359 sessions

How many four-digit positive integers can be formed by using the digits from 1 to 9 so that two digits are equal to each other and the remaining two are also equal to each other but different from the other two ?

A. 400
B. 1728
C. 108
D. 216
E. 432

I am getting answer as 432, but the OA is D. Here is the approach I used. First number can be selected from 1 to 9 - in 9 ways, 2nd number has to be same as the one selected so only 1 way, Third number can be selected from remaining 8 numbers in 8 ways and 4th number again has to be only 1 so in total (9*1*8*1)*4!/2!*2! = 72*6 = 432

Can anyone please explain what mistake I am making here? Thanks!

Math Expert
Joined: 02 Sep 2009
Posts: 50002

Re: How many four-digit positive integers can be formed by using  [#permalink]

### Show Tags

22 May 2012, 10:35
gmihir wrote:
How many four-digit positive integers can be formed by using the digits from 1 to 9 so that two digits are equal to each other and the remaining two are also equal to each other but different from the other two ?

A. 400
B. 1728
C. 108
D. 216
E. 432

I am getting answer as 432, but the OA is D. Here is the approach I used. First number can be selected from 1 to 9 - in 9 ways, 2nd number has to be same as the one selected so only 1 way, Third number can be selected from remaining 8 numbers in 8 ways and 4th number again has to be only 1 so in total (9*1*8*1)*4!/2!*2! = 72*6 = 432

Can anyone please explain what mistake I am making here? Thanks!

XXYY can be arranged in $$\frac{4!}{2!2!}=6$$ ways (# of arrangements of 4 letters out of which 2 X's and 2 Y's are identical);
# of ways we can select 2 distinct digits out of 9 is $$C^2_9=36$$;
Total # of integers that can be formed is 6*36=216.
_________________

Intern
Joined: 27 Oct 2011
Posts: 12
Schools: Cambridge

Re: How many four-digit positive integers can be formed by using  [#permalink]

### Show Tags

22 May 2012, 10:28
gmihir wrote:
How many four-digit positive integers can be formed by using the digits from 1 to 9 so that two digits are equal to each other and the remaining two are also equal to each other but different from the other two ?

A. 400
B. 1728
C. 108
D. 216
E. 432

I am getting answer as 432, but the OA is D. Here is the approach I used. First number can be selected from 1 to 9 - in 9 ways, 2nd number has to be same as the one selected so only 1 way, Third number can be selected from remaining 8 numbers in 8 ways and 4th number again has to be only 1 so in total (9*1*8*1)*4!/2!*2!
The number of ways you can choose two digits from 9 digits is 9C2 = (9*8)/2 = 36, and the number of ways you can arrange the two chosen digits as asked is 4C2 = (4*3)/2 = 6. The total is 36*6 = 216. Hope that helps.

Cares (Current Student), 22 May 2012:

This one is confusing to me. I know how to find the answer in a much more brute-force manner. Think about how many possible combinations you can make with 2 numbers (say 1 and 2): 1122, 1212, 1221, 2121, 2211, 2112 (6 combinations). The number 1 has 8 numbers it can pair off with, the number 2 has 7 numbers it can pair off with, the number 3 has 6 numbers it can pair off with, and so on and so forth:

$$= (8+7+6+5+4+3+2+1) * 6$$
$$= 36 * 6$$
$$= 216$$

Intern, 22 May 2012:

Regarding the pairing you are right; it is the basic way of understanding combinations of numbers. Here you are picking 2 numbers to form a set from the 9 available numbers, which turns out to be 9C2. In general, picking k numbers from a set of n numbers gives nCk possibilities. Hope it's clear to you now.

pavanpuneet (Manager), 24 May 2012:

I understood the solution of Bunuel, but can you explain where gmihir went wrong? Thanks in advance.

Bunuel (Math Expert), 24 May 2012:

In that solution the number of ways to choose 2 different digits for X and Y is 9*8=72, but it should be $$C^2_9=36$$, so gmihir's solution counts the same numbers twice.

fameatop (Senior Manager), 16 Sep 2012, quoting Bunuel's solution above:
Hi Bunuel, I want to know: if the statement changes from "can be formed by using the digits from 1 to 9" to "can be formed by using the digits from 0 to 9", what would the answer be? As per my reasoning the answer should be 243. Kindly confirm it.

Director, 16 Sep 2012:

Yes, the answer should be 243. Assume the first digit is A, for which we have 9 choices, because it cannot be 0. Then we have 3 choices of where to place the second A. Now, for the other, different digit B, we again have 9 choices (different from A, but 0 is allowed), and the places of the two B's are then uniquely determined. Therefore, the total number of possibilities is 9*3*9 = 243. A similar logic applies to the original question: 9*3*8 = 216 (9 possibilities for A; 3 places to choose for the second A; 8 possibilities for B, which cannot be A and cannot be 0; no places left to choose, as they are already determined).

bhavinshah5685 (Manager), 18 Sep 2012:

We use the digits 1, 2, 3, 4, 5, 6, 7, 8, 9, i.e. 9 digits in total. Now we want the arrangement XXYY. X can be selected in 9 ways; the second X is the same as the first, 1 way; Y can be selected from the remaining 8 digits, 8 ways; the second Y is the same as the first, 1 way. So the total number of selections is 9*1*8*1 = 72. Now the 4 digits can be organised in 4C2 = 6 ways, so the total number of arrangements is 72*6 = 432. Am I wrong in this thinking? I didn't understand the concept mentioned by Bunuel here...
P & C is my weakest point in Quant.

Director, 19 Sep 2012:

"The 4 digits can be organised in 4C2 = 6 ways": you have to divide by 2, so only 3 ways. Or: 9 possibilities for the first digit, which can be placed a second time in 3 places; the remaining places then hold another, different digit, with 8 possibilities. A total of 9*3*8 = 216. 4C2 would work if you chose two distinct digits out of 9 without caring about the order in which you chose them (9C2 = 36), took 2 of each type, and then arranged them (4C2 = 6). That again gives 9C2*4C2 = 36*6 = 216. You cannot take order into account in one stage and then ignore it in the next. Choosing two digits as 9*8 means you distinguish which one is chosen first. For example, you first consider x = 4 and y = 1, which you can arrange in 4C2 = 6 ways, but those 6 arrangements already include the ones with a 1 in the first place. Later you consider the pair x = 1 and y = 4, for which there are again 4C2 = 6 arrangements, but these are identical to the previous ones. 4C2 counts arrangements without distinguishing whether you first place x or y.

Senior Manager, 24 Sep 2012:

The numbers are 1, 2, 3, 4, 5, 6, 7, 8, 9, i.e. 9 different numbers. Assume we have an XXYY scenario; this means we could have 9*1*8*1 = 72 different numbers. We could also have an XYXY combination, which gives another 72 different numbers, and an XYYX combination, which gives 72 more. 72*3 = 216, so D is the answer.

Manager, 5 May 2016:

Stage 1: the number of ways to select 2 different digits from 9 is (9*8)/2 = 36. Stage 2: the number of ways to arrange AABB (the MISSISSIPPI rule) is 4!/(2!*2!) = 6. By the fundamental counting principle, the total is 36*6 = 216.

fantaisie (Manager), 1 Jun 2016, replying to surbhi87 ("yeah, I did, but I don't quite understand the logic; could you please elaborate a little?"):
If anyone else wanders into this forum with the same question as surbhi87 and me: I think the simple explanation for why we cannot pick the two digits by simply multiplying 9 x 8 = 72 is that this is not a permutation but a combination problem, where the order in which the two digits are selected does not matter. Am I correct?

Manager, 1 Jun 2016:

Exactly. You can have 1144, or 4141, or 4411, etc. Since you have two pairs of equal digits, the order doesn't matter here.

Manager, 4 Jun 2016, quoting Bunuel's remark that choosing the digits as 9*8 = 72 counts the same numbers twice:

I always tend to miss the cases where a solution is counted twice. Could you please explain how I can get it right?

Jeffery Miller (Target Test Prep), 26 Jun 2017:

Let's first find the number of ways we can form a number AABB, where A and B denote distinct digits. For the first digit, we have 9 possible options. Since the second digit matches the first, we have 1 option. Then for digit 3, we have 8 possible options, and since digit 4 matches digit 3, we have 1 option. So, for this scenario, we have 9 x 1 x 8 x 1 = 72 options. Since we are essentially being asked how many ways we can arrange the A's and B's in A-A-B-B, where A and B are digits 1 through 9, we see that A-A-B-B can be arranged in 4!/(2! x 2!) = 24/4 = 6 ways. Finally, we should take into account that we counted each number twice in this calculation (for instance, when A = 1 and B = 2, AABB = 1122, but when A = 2 and B = 1, BBAA = 1122); therefore, we should divide the result by 2. Thus, the total number of ways to create a 4-digit number with 2 unique digits is (6 x 72)/2 = 216.
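A quick brute-force check (an editorial addition, not part of the thread; a short Python sketch) confirms the official answer by enumerating every four-digit string over the digits 1-9 and counting those in which exactly two distinct digits each appear exactly twice:

    from itertools import product

    # Count 4-digit strings over 1-9 whose digits form two distinct pairs
    # (the XXYY pattern in any order).
    count = 0
    for digits in product("123456789", repeat=4):
        distinct = set(digits)
        if len(distinct) == 2 and all(digits.count(d) == 2 for d in distinct):
            count += 1

    print(count)  # prints 216, matching answer D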
Intern, 6 Jul 2017:

A related question: "Of the three-digit positive integers that have no digits equal to zero, how many have two digits that are equal to each other and the remaining digit different from the other two?" How come that question is not 9C2 * 3C2?

Intern, 16 Jul 2017, quoting the Director's explanation above:

This is the best explanation of the conundrum by far. I was also confused at first, not understanding why the answer is 216 rather than 432. In fact, if we pick the 2 numbers as a first step using 9 x 8 = 72, we are picking, for example, both 8 & 9 and 9 & 8, which means we have already "arranged" the 2 numbers once. Then, when we arrange the 2 numbers using 4!/(2!*2!), we are arranging the same numbers again, for a second time. That is why double counting is involved. The way to avoid this is NOT to arrange the numbers when we pick them in the first place, meaning we should use a combination instead of a permutation: choose 2 numbers out of 9 (9C2 = 36) and then arrange those 2 numbers in 4!/(2!*2!) = 6 ways. Hope this helps.

Intern, 20 Jul 2018:

If I arrange two distinct things in four places: 4!/(2!*2!) = 6. Select two from nine: 9C2. So the total is 9*8*6/2 = 216.
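For completeness, here is a small Python sketch (again an editorial addition, not from the thread) checking the closed forms discussed above, including the 0-to-9 variant (answer 243). It also addresses the three-digit question raised near the end, which the thread leaves unanswered: 9C2*3C2 undercounts by a factor of 2 there, because it does not record which of the two chosen digits is the doubled one, so the correct count is 9*8*3 = 216.

    from math import comb

    # Digits 1-9: choose the unordered pair of digits, then arrange XXYY.
    print(comb(9, 2) * comb(4, 2))  # 36 * 6 = 216

    # Variant with digits 0-9 and a nonzero leading digit (the Director's method):
    # 9 choices for the leading digit A, 3 slots for the second A, 9 choices for B.
    print(9 * 3 * 9)  # 243

    # Enumeration check of the 0-9 variant over genuine four-digit integers.
    def two_pairs(n):
        s = str(n)
        return len(set(s)) == 2 and all(s.count(d) == 2 for d in set(s))

    print(sum(two_pairs(n) for n in range(1000, 10000)))  # 243

    # Three-digit variant (digits 1-9, pattern AAB): the pair (A, B) is ordered,
    # since A appears twice and B once, so count 9 * 8 * 3 rather than 9C2 * 3C2.
    print(9 * 8 * 3)  # 216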
2018-10-19 19:57:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7056176066398621, "perplexity": 899.2836934118516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512434.71/warc/CC-MAIN-20181019191802-20181019213302-00218.warc.gz"}
http://manpages.ubuntu.com/manpages/disco/man1/wiki2beamer.1.html
Provided by: wiki2beamer_0.9.5-1_all

wiki2beamer(1)

#### NAME

wiki2beamer - convert wiki-formatted text to latex-beamer code

#### SYNOPSIS

wiki2beamer [OPTION...] [FILE...]

#### DESCRIPTION

FILE the text-file(s) to be processed
-h, --help show a short usage help
--version show version information
-o, --output FILE write output to FILE instead of stdout

#### USAGE

Usually you want to pipe the output of wiki2beamer into a file:

wiki2beamer footalk.txt > footalk.tex

If called with multiple input files, wiki2beamer processes them in order, with their content simply concatenated. If called without any input file, wiki2beamer will attempt to read input from STDIN. If no input files are supplied and nothing is available on STDIN, wiki2beamer prints its usage message and exits. If an error occurs, wiki2beamer exits with a return code other than 0.

#### SYNTAX

Wiki2beamer has its own wiki syntax, which evolved without much of a concept ;) and is described below. Everything that is unknown to wiki2beamer is passed through to the LaTeX output (unless inside special environments).

OVERALL STRUCTURE

A wiki2beamer txt file can consist of two sections: the head and the body. The head is optional and is an autotemplate environment. The body contains the content of the document. If the head (autotemplate) is not given, only the code for the body will be generated, which can then be included in a manually crafted LaTeX template file.

MANAGING INPUT

You can split input to wiki2beamer into multiple files. This helps to keep things apart and avoids conflicts. There are two ways to split input. The first is to use multiple input files, which wiki2beamer reads and processes in order as if they were one concatenated file. The second is to use the >>>include<<< syntax.

>>>includefile<<< include the file named includefile at this line. Works recursively; endless recursion is detected and treated as an error. Including files doesn't work inside [nowiki] and [code] environments (see below).

STRUCTURING THE PRESENTATION

== sectionname == opens a section called sectionname
== longsectionname ==[shortname] opens a section called longsectionname, passing the parameter shortname to latex
=== subsectname === opens a subsection called subsectname
=== longsubsectname ===[shortname] opens a subsection called longsubsectname, passing the parameter shortname to latex
==== frametitle ==== opens a frame with the title frametitle
==== frametitle ====[param] opens a frame with the title frametitle, passing frame parameters like t, fragile, verbatim etc. to latex
!==== frametitle ====[param] the ! added in front of a frame selects it for exclusive generation: wiki2beamer skips all frames that are not selected. You can select multiple frames. This can speed up the edit-compile-view cycle massively.

Sectioning commands work only at the beginning of a line.

LISTS (BULLETS/ENUMERATIONS)

* text create a bullet (itemize) with text
*<onslide> text create a bullet (itemize) with text that only appears on the specified slides (onslide)
# text create a numbered item (enumerate) with text
#<onslide> text create a numbered item (enumerate) with text that only appears on the specified slides (onslide)

Cascaded lists, mixed ordered and unordered items:

* This is a crazy list.
*# It contains different items.
*# In different formats.
*** On different levels.
***<2-> which are animated
*<3-> Quite a lot of fun.
**<4-> Isn't it?
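As an illustration of the sectioning and list syntax above, here is a small invented input fragment (the titles and texts are placeholders supplied by the editor, not taken from the manual):

    == Introduction ==

    ==== A first frame ====

    * a top-level bullet
    ** a nested bullet
    *<2-> a bullet that only appears from slide 2 on
    # a numbered item

Piped through wiki2beamer as shown under USAGE, this produces a section, a frame titled "A first frame", and nested itemize/enumerate code for latex-beamer.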
ENVIRONMENTS

LaTeX knows many environments, some of which are as simple as \begin{center} \end{center}, while others are more complicated. To use these in a more wiki-like fashion, use <[name] to open and [name]> to close an environment. They are simply converted to \begin{name} and \end{name}.

Warning: no parsing is done. The user is responsible for closing any opened environment. Environment tags are only recognized at the beginning of a line.

SPECIAL ENVIRONMENTS

Unlike standard environments, some environment names are recognized by wiki2beamer. These are: nowiki, code, autotemplate and frame. If wiki2beamer detects one of these, it does some advanced parsing, which can even fail with a syntax error.

AUTOTEMPLATE

Autotemplate can be used at the beginning of a beamer .txt file. It creates the LaTeX headers required to compile the content.

<[autotemplate] opens the autotemplate environment
[autotemplate]> closes the autotemplate environment
key=value (inside [autotemplate]) inserts a template command \keyvalue

key=value pairs are converted to \keyvalue in the output (except special keys) -- everything after the = is simply appended to \key.

<[autotemplate]
usepackage=[utf8]{inputenc}
[autotemplate]>

will be converted to \usepackage[utf8]{inputenc}.

There is a built-in set of options:

<[autotemplate]
documentclass={beamer}
usepackage={listings}
usepackage={wasysym}
usepackage={graphicx}
date={\today}
lstdefinestyle={basic}{....}
titleframe=True
[autotemplate]>

titleframe is a special key that tells wiki2beamer to create a title frame. To set the title, subtitle and author of the presentation, use the keys title, subtitle and author.

Overriding of the default options works on
· per-key level for: documentclass, titleframe
· per-package level for: usepackage
· no overriding for: everything else

CODE

Use code environments to display animated code listings.

<[code] open a code environment
<[code][param] open a code environment, passing parameters to the latex lstlisting environment
[code]> close the code environment

<[code][key=value,...]
...
[code]>

<[code] opens the environment, [code]> closes it; everything after <[code] is passed to the LaTeX listings package as options for this listing. Inside the code environment, [ and ] must be escaped as \[ and \]. Things between [ and ] are animations. There are two kinds of animations:

· [<slidespec>some code] - show "some code" only on the specified slides
· [[<slidespec>some code][<slidespec>some other code]] - show "some code" on the slides in the first spec, show "some other code" on the slides in the second spec, and fill up the space on slides without content with spaces

Slide-specs can be of the form:
· n - one single frame n
· n-m - sequence of frames n to m
· spec,spec,... - combine multiple specs into one (e.g. <1-3,5>)

NOWIKI

Nowiki environments completely escape from wiki2beamer replacements. <[nowiki] opens the environment, [nowiki]> closes it.

FRAME

The LaTeX frame environment is where the content of a slide goes. You can manually close a frame environment that was opened with ==== Frametitle ==== by writing [frame]>. Wiki2beamer is then aware that the last frame is already closed and doesn't try to close it again.

TEXT FORMATTING

'''text''' typeset text bold
''text'' typeset text italic
@text@ typeset text in typewriter type; to ignore an @, escape it as \@
!text! alert text; to ignore an !, escape it as \!
_color_text_ make text appear in color

COLUMNS

<[columns] opens the column environment
[[[width]]] creates a column of the given width; everything below goes into this column
[columns]> closes the column environment

GRAPHICS

<<<pathtofile>>> include image from pathtofile
<<<pathtofile,key=value>>> include image from pathtofile, passing key=value parameters to latex

FOOTNOTES

(((text))) create a footnote containing text

LAYOUT

--length-- when found at the start of a line, with nothing after it, insert a \vspace{length} (vertical space of length length)
--*length-- same as above, but insert a \vspace* (a forced vspace)
+<overlay>{content} \uncover the content on the given overlay subframes. It already takes up the space it needs to be displayed, so the geometry of the frame doesn't change when the element pops up.
-<overlay>{content} \only show the content on the given overlay subframes. It does not take up the space it needs to be displayed, so the geometry of the frame changes when the element pops up.

SUBSTITUTIONS

--> becomes $\rightarrow$
==> becomes $\Rightarrow$
<-- becomes $\leftarrow$
<== becomes $\Leftarrow$
:-) becomes \smiley (requires package wasysym)
:-( becomes \frownie (requires package wasysym)

There are two variables, FRAMEHEADER and FRAMEFOOTER. Their content will be inserted at the beginning/end of all following slides.

@FRAMEHEADER=text set the frame header to text
@FRAMEFOOTER=text set the frame footer to text

Leave text empty to reset frame headers and footers.

Copyright (C) 2009 Kai Dietrich, Michael Rentzsch and others.
2019-11-12 20:51:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5882167220115662, "perplexity": 14352.12661997526}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665767.51/warc/CC-MAIN-20191112202920-20191112230920-00168.warc.gz"}
https://labs.tib.eu/arxiv/?author=Mauro%20Giavalisco
• ### Demographics of Star-forming Galaxies since $z\sim2.5$. I. The $UVJ$ Diagram in CANDELS(1710.05489) April 6, 2018 astro-ph.GA This is the first in a series of papers examining the demographics of star-forming galaxies at $0.2<z<2.5$ in CANDELS. We study 9,100 galaxies from GOODS-S and UDS having published values of redshifts, masses, star-formation rates (SFRs), and dust attenuation ($A_V$) derived from UV-optical SED fitting. In agreement with previous works, we find that the $UVJ$ colors of a galaxy are closely correlated with its specific star-formation rate (SSFR) and $A_V$. We define rotated $UVJ$ coordinate axes, termed $S_\mathrm{SED}$ and $C_\mathrm{SED}$, that are parallel and perpendicular to the star-forming sequence and derive a quantitative calibration that predicts SSFR from $C_\mathrm{SED}$ with an accuracy of ~0.2 dex. SFRs from UV-optical fitting and from UV+IR values based on Spitzer/MIPS 24 $\mu\mathrm{m}$ agree well overall, but systematic differences of order 0.2 dex exist at high and low redshifts. A novel plotting scheme conveys the evolution of multiple galaxy properties simultaneously, and dust growth, as well as star-formation decline and quenching, exhibit "mass-accelerated evolution" ("downsizing"). A population of transition galaxies below the star-forming main sequence is identified. These objects are located between star-forming and quiescent galaxies in $UVJ$ space and have lower $A_V$ and smaller radii than galaxies on the main sequence. Their properties are consistent with their being in transit between the two regions. The relative numbers of quenched, transition, and star-forming galaxies are given as a function of mass and redshift. • ### The intrinsic characteristics of galaxies on the SFR-stellar mass plane at 1.2<z<4: I. the correlation between stellar age, central density and position relative to the main sequence(1706.02311) Dec. 28, 2017 astro-ph.GA We use the deep CANDELS observations in the GOODS North and South fields to revisit the correlations between stellar mass ($M_*$), star-formation rate (SFR) and morphology, and to introduce a fourth dimension, the mass-weighted stellar age, in galaxies at $1.2<z<4$. We do this by making new measures of $M_*$, SFR, and stellar age thanks to an improved SED-fitting procedure that allows a variety of star formation histories for each galaxy. Like others, we find that the slope of the Main Sequence (MS) of star formation in the $(M_*;SFR)$ plane bends at high mass. We observe clear morphological differences among galaxies across the MS, which also correlate with stellar age. At all redshifts, galaxies that are quenching or quenched, and thus old, have high $\Sigma_1$ (the projected density within the central 1 kpc), while younger, star-forming galaxies span a much broader range of $\Sigma_1$, which includes the high values observed for quenched galaxies, but also extends to much lower values. As galaxies age and quench, the stellar age and the dispersion of $\Sigma_1$ for fixed values of $M_{*}$ show two different regimes: one at the low-mass end, where quenching might be driven by causes external to the galaxies; the other at the high-mass end, where quenching is driven by internal causes, very likely the mass, given the low scatter of $\Sigma_1$ (mass quenching). We suggest that the monotonic increase of central density as galaxies grow is one manifestation of a more general phenomenon of structural transformation that galaxies undergo as they evolve. • ### Clumpy Galaxies in CANDELS. II.
Physical Properties of UV-bright Clumps at $0.5\leq z<3$(1712.01858) Dec. 5, 2017 astro-ph.GA Studying giant star-forming clumps in distant galaxies is important to understand galaxy formation and evolution. At present, however, observers and theorists have not reached a consensus on whether the observed "clumps" in distant galaxies are the same phenomenon that is seen in simulations. In this paper, as a step to establish a benchmark of direct comparisons between observations and theories, we publish a sample of clumps constructed to represent the commonly observed "clumps" in the literature. This sample contains 3193 clumps detected from 1270 galaxies at $0.5 \leq z < 3.0$. The clumps are detected from rest-frame UV images, as described in our previous paper. Their physical properties, e.g., rest-frame color, stellar mass (M*), star formation rate (SFR), age, and dust extinction, are measured by fitting the spectral energy distribution (SED) to synthetic stellar population models. We carefully test the procedures of measuring clump properties, especially the method of subtracting background fluxes from the diffuse component of galaxies. With our fiducial background subtraction, we find a radial clump U-V color variation, where clumps close to galactic centers are redder than those in outskirts. The slope of the color gradient (clump color as a function of their galactocentric distance scaled by the semi-major axis of galaxies) changes with redshift and M* of the host galaxies: at a fixed M*, the slope becomes steeper toward low redshift, and at a fixed redshift, it becomes slightly steeper with M*. Based on our SED-fitting, this observed color gradient can be explained by a combination of a negative age gradient, a negative E(B-V) gradient, and a positive specific star formation rate gradient of the clumps. We also find that the color gradients of clumps are steeper than those of intra-clump regions. [Abridged] • ### CANDELS: Elevated Black Hole Growth in the Progenitors of Compact Quiescent Galaxies at z~2(1710.05921) Oct. 16, 2017 astro-ph.GA We examine the fraction of massive ($M_{*}>10^{10} M_{\odot}$), compact star-forming galaxies (cSFGs) that host an active galactic nucleus (AGN) at $z\sim2$. These cSFGs are likely the direct progenitors of the compact quiescent galaxies observed at this epoch, which are the first population of passive galaxies to appear in large numbers in the early Universe. We identify cSFGs that host an AGN using a combination of Hubble WFC3 imaging and Chandra X-ray observations in four fields: the Chandra Deep Fields, the Extended Groth Strip, and the UKIDSS Ultra Deep Survey field. We find that $39.2^{+3.9}_{-3.6}$\% (65/166) of cSFGs at $1.4<z<3.0$ host an X-ray detected AGN. This fraction is 3.2 times higher than the incidence of AGN in extended star-forming galaxies with similar masses at these redshifts. This difference is significant at the $6.2\sigma$ level. Our results are consistent with models in which cSFGs are formed through a dissipative contraction that triggers a compact starburst and concurrent growth of the central black hole. We also discuss our findings in the context of cosmological galaxy evolution simulations that require feedback energy to rapidly quench cSFGs. We show that the AGN fraction peaks precisely where energy injection is needed to reproduce the decline in the number density of cSFGs with redshift. 
Our results suggest that the first abundant population of massive, quenched galaxies emerged directly following a phase of elevated supermassive black hole growth, and further hint at a possible connection between AGN and the rapid quenching of star formation in these galaxies. • ### X-ray spectral analyses of AGNs from the 7Ms Chandra Deep Field-South survey: the distribution, variability, and evolution of AGN's obscuration(1703.00657) June 9, 2017 astro-ph.GA, astro-ph.HE We present a detailed spectral analysis of the brightest Active Galactic Nuclei (AGN) identified in the 7Ms Chandra Deep Field South (CDF-S) survey over a time span of 16 years. Using a model of an intrinsically absorbed power-law plus reflection, with a possible soft excess and narrow Fe K$\alpha$ line, we perform a systematic X-ray spectral analysis, both on the total 7Ms exposure and in four different periods with lengths of 2-21 months. With this approach, we not only present the power-law slopes, column densities $N_H$, observed fluxes, and absorption-corrected 2-10 keV luminosities $L_X$ for our sample of AGNs, but also identify significant spectral variabilities among them on time scales of years. We find that the $N_H$ variabilities can be ascribed to two different types of mechanisms, either flux-driven or flux-independent. We also find that the correlation between the narrow Fe line EW and $N_H$ can be well explained by continuum suppression with increasing $N_H$. Accounting for the sample incompleteness and bias, we measure the intrinsic distribution of $N_H$ for the CDF-S AGN population and present re-selected subsamples which are complete with respect to $N_H$. The $N_H$-complete subsamples enable us to decouple the dependences of $N_H$ on $L_X$ and on redshift. Combining our data with that from C-COSMOS, we confirm the anti-correlation between the average $N_H$ and $L_X$ of AGN, and find a significant increase of the AGN obscured fraction with redshift at any luminosity. The obscured fraction can be described as $f_{obscured}\thickapprox 0.42\ (1+z)^{0.60}$. • ### CANDELS Sheds Light on the Environmental Quenching of Low-mass Galaxies(1705.01946) May 16, 2017 astro-ph.GA We investigate the environmental quenching of galaxies, especially those with stellar masses (M*)$<10^{9.5} M_\odot$, beyond the local universe. Essentially all local low-mass quenched galaxies (QGs) are believed to live close to massive central galaxies, which is a demonstration of environmental quenching. We use CANDELS data to test whether or not such a dwarf QG-massive central galaxy connection exists beyond the local universe. For this purpose, we only need a statistically representative, rather than a complete, sample of low-mass galaxies, which enables our study to reach $z\gtrsim1.5$. For each low-mass galaxy, we measure the projected distance ($d_{proj}$) to its nearest massive neighbor (M*$>10^{10.5} M_\odot$) within a redshift range. At a given redshift and M*, the environmental quenching effect is considered to be observed if the $d_{proj}$ distribution of QGs ($d_{proj}^Q$) is significantly skewed toward lower values than that of star-forming galaxies ($d_{proj}^{SF}$). For galaxies with $10^{8} M_\odot < M* < 10^{10} M_\odot$, such a difference between $d_{proj}^Q$ and $d_{proj}^{SF}$ is detected up to $z\sim1$. Also, about 10% of the quenched galaxies in our sample are located between two and four virial radii ($R_{Vir}$) of the massive halos.
The median projected distance from low-mass QGs to their massive neighbors, $d_{proj}^Q / R_{Vir}$, decreases with satellite M* at $M* \lesssim 10^{9.5} M_\odot$, but increases with satellite M* at $M* \gtrsim 10^{9.5} M_\odot$. This trend suggests a smooth, if any, transition of the quenching timescale around $M* \sim 10^{9.5} M_\odot$ at $0.5<z<1.0$. • ### CANDELS Multiwavelength Catalogs: Source Identification and Photometry in the CANDELS Extended Groth Strip(1703.05768) March 16, 2017 astro-ph.GA We present a 0.4-8 $\mu$m multi-wavelength photometric catalog in the Extended Groth Strip (EGS) field. This catalog is built on the Hubble Space Telescope (HST) WFC3 and ACS data from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS), and it incorporates the existing HST data from the All-wavelength Extended Groth strip International Survey (AEGIS) and the 3D-HST program. The catalog is based on detections in the F160W band reaching a depth of F160W=26.62 AB (90% completeness, point sources). It includes the photometry for 41457 objects over an area of $\approx 206$ arcmin$^2$ in the following bands: HST ACS F606W and F814W; HST WFC3 F125W, F140W and F160W; CFHT/Megacam $u^*$, $g'$, $r'$, $i'$ and $z'$; CFHT/WIRCAM $J$, $H$ and $K_\mathrm{S}$; Mayall/NEWFIRM $J1$, $J2$, $J3$, $H1$, $H2$, $K$; Spitzer IRAC $3.6\mu$m, $4.5\mu$m, $5.8\mu$m and $8.0\mu$m. We are also releasing value-added catalogs that provide robust photometric redshifts and stellar mass measurements. The catalogs are publicly available through the CANDELS repository. • ### Early Science with the Large Millimeter Telescope: Detection of dust emission in multiple images of a normal galaxy at $z>4$ lensed by a Frontier Fields cluster(1703.04535) March 13, 2017 astro-ph.CO, astro-ph.GA We directly detect dust emission in an optically-detected, multiply-imaged galaxy lensed by the Frontier Fields cluster MACSJ0717.5+3745. We detect two images of the same galaxy at 1.1 mm with the AzTEC camera on the Large Millimeter Telescope, leaving no ambiguity in the counterpart identification. This galaxy, MACS0717_Az9, is at z>4, and the strong lensing model ($\mu$=7.5) allows us to calculate an intrinsic IR luminosity of 9.7e10 Lsun and an obscured star formation rate of 14.6 +/- 4.5 Msun/yr. The unobscured star formation rate from the UV is only 4.1 +/- 0.3 Msun/yr, which means the total star formation rate (18.7 +/- 4.5 Msun/yr) is dominated (75-80%) by the obscured component. With an intrinsic stellar mass of only 6.9e9 Msun, MACS0717_Az9 is one of only a handful of z>4 galaxies at these lower masses that is detected in dust emission. This galaxy lies close to the estimated star formation sequence at this epoch. However, it does not lie on the dust obscuration relation (IRX-beta) for local starburst galaxies and is instead consistent with the Small Magellanic Cloud (SMC) attenuation law. This remarkable lower-mass galaxy, showing signs of both low metallicity and high dust content, may challenge our picture of dust production in the early Universe. • ### Morphology Dependence Of Stellar Age in Quenched Galaxies at Redshift ~ 1.2: Massive Compact Galaxies Are Older Than More Extended Ones(1607.06089) March 9, 2017 astro-ph.GA We report the detection of morphology-dependent stellar age in massive quenched galaxies (QGs) at z~1.2. The sense of the dependence is that compact QGs are 0.5-2 Gyr older than normal-sized ones.
The evidence comes from three different age indicators, $D_n(4000)$, H$\delta$, and fits to spectral synthesis models, applied to their stacked optical spectra. All age indicators consistently show that the stellar populations of compact QGs are older than those of their normally-sized counterparts. We detect weak [OII] emission in a fraction of QGs, and the strength of the line, when present, is similar between the two samples; however, compact galaxies exhibit a significantly lower frequency of [OII] emission than normal ones. A fraction of both samples are individually detected in 7 Ms Chandra X-ray images (luminosities $\sim10^{40}$-$10^{41}$ erg/sec). 7 Ms stacks of non-detected galaxies show similarly low luminosities in the soft band only, consistent with a hot gas origin for the X-ray emission. While both [OII] emitters and non-emitters are also X-ray sources among normal galaxies, no compact galaxy with [OII] emission is an X-ray source, arguing against an AGN powering the line in compact galaxies. We interpret the [OII] properties as further evidence that compact galaxies are older and further along into the process of quenching star formation and suppressing gas accretion. Finally, we argue that the older age of compact QGs is evidence of progenitor bias: compact QGs simply reflect the smaller sizes of galaxies at their earlier quenching epoch, with stellar density most likely having nothing directly to do with the cessation of star formation. • ### Predicting Quiescence: The Dependence of Specific Star Formation Rate on Galaxy Size and Central Density at 0.5<z<2.5(1607.03107) Feb. 21, 2017 astro-ph.GA In this paper, we investigate the relationship between star formation and structure, using a mass-complete sample of 27,893 galaxies at $0.5<z<2.5$ selected from 3D-HST. We confirm that star-forming galaxies are larger than quiescent galaxies at fixed stellar mass (M$_{\star}$). However, in contrast with some simulations, there is only a weak relation between star formation rate (SFR) and size within the star-forming population: when dividing into quartiles based on residual offsets in SFR, we find that the sizes of star-forming galaxies in the lowest quartile are 0.27$\pm$0.06 dex smaller than the highest quartile. We show that 50% of star formation in galaxies at fixed M$_{\star}$ takes place within a narrow range of sizes (0.26 dex). Taken together, these results suggest that there is an abrupt cessation of star formation after galaxies attain particular structural properties. Confirming earlier results, we find that the central stellar density within a 1 kpc fixed physical radius is the key parameter connecting galaxy morphology and star formation histories: galaxies with high central densities are red and have increasingly lower SFR/M$_{\star}$, whereas galaxies with low central densities are blue and have a roughly constant (higher) SFR/M$_{\star}$ at a given redshift. We find remarkably little scatter in the average trends and a strong evolution of $>$0.5 dex in the central density threshold correlated with quiescence from $z\sim0.7-2.0$. Neither a compact size nor high-$n$ is sufficient to assess the likelihood of quiescence for the average galaxy; rather, the combination of these two parameters together with M$_{\star}$ results in a unique quenching threshold in central density/velocity. • ### The evolution of star formation histories of quiescent galaxies(1609.03572) Sept.
12, 2016 astro-ph.GA Although there has been much progress in understanding how galaxies evolve, we still do not understand how and when they stop forming stars and become quiescent. We address this by applying our galaxy spectral energy distribution models, which incorporate physically motivated star formation histories (SFHs) from cosmological simulations, to a sample of quiescent galaxies at $0.2<z<2.1$. A total of 845 quiescent galaxies with multi-band photometry spanning rest-frame ultraviolet through near-infrared wavelengths are selected from the CANDELS dataset. We compute median SFHs of these galaxies in bins of stellar mass and redshift. At all redshifts and stellar masses, the median SFHs rise, reach a peak, and then decline to reach quiescence. At high redshift, we find that the rise and decline are fast, as expected because the Universe is young. At low redshift, the duration of these phases depends strongly on stellar mass. Low-mass galaxies ($\log(M_{\ast}/M_{\odot})\sim9.5$) grow on average slowly, take a long time to reach their peak of star formation ($\gtrsim 4$ Gyr), and the declining phase is fast ($\lesssim 2$ Gyr). Conversely, high-mass galaxies ($\log(M_{\ast}/M_{\odot})\sim11$) grow on average fast ($\lesssim 2$ Gyr), and, after reaching their peak, decrease the star formation slowly ($\gtrsim 3$ Gyr). These findings are consistent with galaxy stellar mass being a driving factor in determining how evolved galaxies are, with high-mass galaxies being the most evolved at any time (i.e., downsizing). The different durations we observe in the declining phases also suggest that low- and high-mass galaxies experience different quenching mechanisms that operate on different timescales. • ### The Evolution of the Galaxy Rest-Frame Ultraviolet Luminosity Function Over the First Two Billion Years(1410.5439) June 6, 2015 astro-ph.GA We present a robust measurement and analysis of the rest-frame ultraviolet (UV) luminosity function at z=4-8. We use deep Hubble Space Telescope imaging over the CANDELS/GOODS fields, the Hubble Ultra Deep Field and the Year 1 Hubble Frontier Field deep parallel observations. These surveys provides an effective volume of 0.6-1.2 x 10^6 Mpc^3 over this epoch, allowing us to perform a robust search for faint (M_UV=-18) and bright (M_UV < -21) galaxies. We select candidate galaxies using a well-tested photometric redshift technique with careful screening of contaminants, finding a sample of 7446 galaxies at 3.5<z<8.5, with >1000 galaxies at z~6-8. We measure the luminosity function using a Markov Chain Monte Carlo analysis to measure robust uncertainties. At the faint end our results agree with previous studies, yet we find a higher abundance of UV-bright galaxies at z>6, with M* ~ -21 at z>5, different than that inferred based on previous trends at lower redshift. At z=8, a single power-law provides an equally good fit to the UV luminosity function, while at z=6 and 7, an exponential cutoff at the bright-end is moderately preferred. We compare to semi-analytical models, and find that the lack of evolution in M* is consistent with models where the impact of dust attenuation on the bright-end of the luminosity function decreases at higher redshift. We measure the evolution of the cosmic star-formation rate density, correcting for dust attenuation, and find that it declines as (1+z)^(-4.3 +/- 0.5) at z>4, consistent with observations at z>9. 
Our observations are consistent with a reionization history that starts at z>10, completes at z>6, and reaches a midpoint ($x_{\mathrm{HII}} = 0.5$) at 6.7<z<9.4. Finally, our observations predict that the abundance of bright z=9 galaxies is likely higher than previous constraints, though consistent with recent estimates of bright z~10 galaxies. [abridged] • The TMT Detailed Science Case describes the transformational science that the Thirty Meter Telescope will enable. Planned to begin science operations in 2024, TMT will open up opportunities for revolutionary discoveries in essentially every field of astronomy, astrophysics and cosmology, seeing much fainter objects much more clearly than existing telescopes. With this capability, TMT's science agenda fills all of space and time, from nearby comets and asteroids, to exoplanets, to the most distant galaxies, and all the way back to the very first sources of light in the Universe. More than 150 astronomers from within the TMT partnership and beyond offered input in compiling the new 2015 Detailed Science Case. The contributing astronomers represent the entire TMT partnership, including the California Institute of Technology (Caltech), the Indian Institute of Astrophysics (IIA), the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC), the National Astronomical Observatory of Japan (NAOJ), the University of California, the Association of Canadian Universities for Research in Astronomy (ACURA) and US associate partner, the Association of Universities for Research in Astronomy (AURA). • ### A Critical Assessment of Stellar Mass Measurement Methods(1505.01501) May 6, 2015 astro-ph.GA In this paper we perform a comprehensive study of the main sources of random and systematic errors in stellar mass measurement for galaxies using their Spectral Energy Distributions (SEDs). We use mock galaxy catalogs with simulated multi-waveband photometry (from U-band to mid-infrared) and known redshift, stellar mass, age and extinction for individual galaxies. Given the different parameters affecting stellar mass measurement (photometric S/N ratios, SED-fitting errors, systematic effects, and the inherent degeneracies and correlated errors), we formulated different simulated galaxy catalogs to quantify these effects individually. We studied the sensitivity of stellar mass estimates to the codes/methods used, population synthesis models, star formation histories, nebular emission line contributions, photometric uncertainties, extinction and age. For each simulated galaxy, the difference between the input stellar masses and those estimated using the different simulation catalogs, $\Delta\log(M)$, was calculated and used to identify the most fundamental parameters affecting stellar masses. We measured the different components of the error budget, with the results as follows: (1) no significant bias was found among the different codes/methods, with all having comparable scatter; (2) a source of error is found to be due to photometric uncertainties and low resolution in age and extinction grids; (3) the median of stellar masses among different methods provides a stable measure of the mass associated with any given galaxy; (4) the deviations in stellar mass strongly correlate with those in age, with a weaker correlation with extinction; (5) the scatter in the stellar masses due to free parameters is quantified, with the sensitivity of the stellar mass to both the population synthesis codes and the inclusion of nebular emission lines studied.
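As an aside, point (3), that the median over methods is a stable mass estimate, is straightforward to apply in practice. Here is a minimal sketch (an editorial illustration with invented numbers, not code from the paper), assuming per-galaxy log-mass estimates from several SED-fitting codes and known input masses for the mocks:

    import numpy as np

    # Invented example: rows are galaxies, columns are log10(M*) estimates
    # from different SED-fitting codes run on the same mock photometry.
    logm_by_code = np.array([
        [10.12, 10.30, 10.21],
        [ 9.45,  9.38,  9.52],
        [11.02, 10.88, 10.95],
    ])
    logm_true = np.array([10.20, 9.40, 11.00])  # known input masses of the mocks

    logm_median = np.median(logm_by_code, axis=1)  # stable per-galaxy estimate
    delta_logm = logm_median - logm_true           # Delta log(M) per galaxy

    print(delta_logm.mean(), delta_logm.std())     # bias and scatter in dex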
• ### UVUDF: Ultraviolet Through Near-infrared Catalog and Photometric Redshifts of Galaxies in the Hubble Ultra Deep Field(1505.01160) We present photometry and derived redshifts from up to eleven bandpasses for 9927 galaxies in the Hubble Ultra Deep field (UDF), covering an observed wavelength range from the near-ultraviolet (NUV) to the near-infrared (NIR) with Hubble Space Telescope observations. Our Wide Field Camera 3 (WFC3)/UV F225W, F275W, and F336W image mosaics from the ultra-violet UDF (UVUDF) imaging campaign are newly calibrated to correct for charge transfer inefficiency, and use new dark calibrations to minimize background gradients and pattern noise. Our NIR WFC3/IR image mosaics combine the imaging from the UDF09 and UDF12 campaigns with CANDELS data to provide NIR coverage for the entire UDF field of view. We use aperture-matched point-spread function corrected photometry to measure photometric redshifts in the UDF, sampling both the Lyman break and Balmer break of galaxies at z~0.8-3.4, and one of the breaks over the rest of the redshift range. Our comparison of these results with a compilation of robust spectroscopic redshifts shows an improvement in the galaxy photometric redshifts by a factor of two in scatter and a factor three in outlier fraction over previous UDF catalogs. The inclusion of the new NUV data is responsible for a factor of two decrease in the outlier fraction compared to redshifts determined from only the optical and NIR data, and improves the scatter at z<0.5 and at z>2. The panchromatic coverage of the UDF from the NUV through the NIR yields robust photometric redshifts of the UDF, with the lowest outlier fraction available. • ### The Swift X-ray Telescope Cluster Survey III: Cluster Catalog from 2005-2012 Archival Data(1503.04051) March 13, 2015 astro-ph.CO We present the Swift X-ray Cluster Survey (SWXCS) catalog obtained using archival data from the X-ray telescope (XRT) on board the Swift satellite acquired from 2005 to 2012, extending the first release of the SWXCS. The catalog provides positions, soft fluxes, and, when possible, optical counterparts for a flux-limited sample of X-ray group and cluster candidates. We consider the fields with Galactic latitude |b| > 20 degree to avoid high HI column densities. We discard all of the observations targeted at groups or clusters of galaxies, as well as particular extragalactic fields not suitable to search for faint extended sources. We finally select ~3000 useful fields covering a total solid angle of ~400 degree^2. We identify extended source candidates in the soft-band (0.5-2keV) images of these fields using the software EXSdetect, which is specifically calibrated for the XRT data. Extensive simulations are used to evaluate contamination and completeness as a function of the source signal, allowing us to minimize the number of spurious detections and to robustly assess the selection function. Our catalog includes 263 candidate galaxy clusters and groups down to a flux limit of 7E-15 erg/cm^2/s in the soft band, and the logN-logS is in very good agreement with previous deep X-ray surveys. The final list of sources is cross-correlated with published optical, X-ray, and SZ catalogs of clusters. We find that 137 sources have been previously identified as clusters, while 126 are new detections. Currently, we have collected redshift information for 158 sources (60% of the entire sample). 
Once the optical follow-up and the X-ray spectral analysis of the sources are complete, the SWXCS will provide a large and well-defined catalog of groups and clusters of galaxies to perform statistical studies of cluster properties and tests of cosmological models. • ### Clumpy Galaxies in CANDELS. I. The Definition of UV Clumps and the Fraction of Clumpy Galaxies at 0.5<z<3(1410.7398) Feb. 11, 2015 astro-ph.GA Although giant clumps of stars are crucial to galaxy formation and evolution, the most basic demographics of clumps are still uncertain, mainly because the definition of clumps has not been thoroughly discussed. In this paper, we study the basic demographics of clumps in star-forming galaxies (SFGs) at 0.5<z<3, using our proposed physical definition that UV-bright clumps are discrete star-forming regions that individually contribute more than 8% of the rest-frame UV light of their galaxies. Clumps defined this way are significantly brighter than the HII regions of nearby large spiral galaxies, either individually or blended, when physical spatial resolution and cosmological dimming are considered. Under this definition, we measure the fraction of SFGs that contain at least one off-center clump (Fclumpy) and the contributions of clumps to the rest-frame UV light and star formation rate of SFGs in the CANDELS/GOODS-S and UDS fields, where our mass-complete sample consists of 3239 galaxies with axial ratio q>0.5. The redshift evolution of Fclumpy changes with the stellar mass (M*) of the galaxies. Low-mass (log(M*/Msun)<9.8) galaxies keep an almost constant Fclumpy of about 60% from z~3.0 to z~0.5. Intermediate-mass and massive galaxies drop their Fclumpy from 55% at z~3.0 to 40% and 15%, respectively, at z~0.5. We find that (1) the trend of disk stabilization predicted by violent disk instability matches the Fclumpy trend of massive galaxies; (2) minor mergers are a viable explanation of the Fclumpy trend of intermediate-mass galaxies at z<1.5, given a realistic observability timescale; and (3) major mergers are unlikely responsible for the Fclumpy trend in all masses at z<1.5. The clump contribution to the rest-frame UV light of SFGs shows a broad peak around galaxies with log(M*/Msun)~10.5 at all redshifts, possibly linked to the molecular gas fraction of the galaxies. (Abridged) • ### The Relation Between SFR and Stellar Mass for Galaxies at 3.5 $\le z\le$ 6.5 in CANDELS(1407.6012) Jan. 13, 2015 astro-ph.GA Distant star-forming galaxies show a correlation between their star formation rates (SFR) and stellar masses, and this has deep implications for galaxy formation. Here, we present a study on the evolution of the slope and scatter of the SFR-stellar mass relation for galaxies at $3.5\leq z\leq 6.5$ using multi-wavelength photometry in GOODS-S from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) and Spitzer Extended Deep Survey. We describe an updated, Bayesian spectral-energy distribution fitting method that incorporates effects of nebular line emission, star formation histories that are constant or rising with time, and different dust attenuation prescriptions (starburst and Small Magellanic Cloud). From $z$=6.5 to $z$=3.5 star-forming galaxies in CANDELS follow a nearly unevolving correlation between stellar mass and SFR that follows SFR $\sim$ $M_\star^a$ with $a = 0.54 \pm 0.16$ at $z\sim 6$ and $0.70 \pm 0.21$ at $z\sim 4$. 
This evolution requires a star formation history that increases with decreasing redshift (on average, the SFRs of individual galaxies rise with time). The observed scatter in the SFR-stellar mass relation is tight, $\sigma(\log \mathrm{SFR}/\mathrm{M}_\odot\,\mathrm{yr}^{-1}) < 0.3$-$0.4$ dex, for galaxies with $\log M_\star/\mathrm{M}_\odot > 9$ dex. Assuming that the SFR is tied to the net gas inflow rate (SFR $\sim \dot{M}_\mathrm{gas}$), the scatter in the gas inflow rate is also smaller than 0.3$-$0.4 dex for star-forming galaxies in these stellar mass and redshift ranges, at least when averaged over the timescale of star formation. We further show that the implied star formation history of objects selected on the basis of their co-moving number densities is consistent with the evolution in the SFR-stellar mass relation. • ### The Herschel view of the dominant mode of galaxy growth from z=4 to the present day(1409.5433) Jan. 7, 2015 astro-ph.GA We present an analysis of the deepest Herschel images in four major extragalactic fields (GOODS-North, GOODS-South, UDS, and COSMOS) obtained within the GOODS-Herschel and CANDELS-Herschel key programs. The picture provided by 10497 individual far-infrared detections is supplemented by the stacking analysis of a mass-complete sample of 62361 star-forming galaxies from the CANDELS-HST H band-selected catalogs and from two deep ground-based Ks band-selected catalogs in the GOODS-North and the COSMOS-wide fields, in order to obtain one of the most accurate and unbiased understandings to date of stellar mass growth over cosmic history. We show, for the first time, that stacking also provides a powerful tool to determine the dispersion of a physical correlation, and describe our method called "scatter stacking", which may be easily generalized to other experiments. We demonstrate that galaxies of all masses from z=4 to 0 follow a universal scaling law, the so-called main sequence of star-forming galaxies. We find a universal close-to-linear slope of the logSFR-logM* relation, with evidence for a flattening of the main sequence at high masses (log(M*/Msun) > 10.5) that becomes less prominent with increasing redshift and almost vanishes by z~2. This flattening may be due to the parallel stellar growth of quiescent bulges in star-forming galaxies. Within the main sequence, we measure a non-varying SFR dispersion of 0.3 dex. The specific SFR (sSFR=SFR/M*) of star-forming galaxies is found to continuously increase from z=0 to 4. Finally, we discuss the implications of our findings for the cosmic SFR history and show that more than 2/3 of present-day stars must have formed in a regime dominated by the main-sequence mode. As a consequence, we conclude that, although omnipresent in the distant Universe, galaxy mergers had little impact in shaping the global star formation history over the last 12.5 Gyr.
We find that candidate progenitors have faster ISM gas velocities and higher equivalent widths of interstellar absorption lines, implying a larger velocity spread among absorbing clouds. Candidates deviate from the relationship between equivalent widths of Lyman-alpha and interstellar absorption lines in that their Lyman-alpha emission remains strong despite high interstellar absorption, possibly indicating that the neutral HI fraction is patchy, such that Lyman-alpha photons can escape. We detect stronger CIV P-Cygni features (emission and absorption) and HeII emission in candidates, indicative of larger populations of metal-rich Wolf-Rayet stars compared to non-candidates. The faster bulk motions, broader spread of gas velocity, and Lyman-alpha properties of candidates are consistent with their ISM being subject to more energetic feedback than non-candidates. Together with their larger metallicity (implying more evolved star-formation activity), this leads us to propose, if speculatively, that they are likely to quench sooner than non-candidates, supporting the validity of the selection criteria used to identify them as progenitors of z~2 passive galaxies. We propose that massive, compact galaxies undergo more rapid growth of stellar mass content, perhaps because the gas accretion mechanisms are different, and quench sooner than normally-sized LBGs at these early epochs. • ### Rapid Decline of Lyman-alpha Emission Toward the Reionization Era(1405.4869) Sept. 24, 2014 astro-ph.CO, astro-ph.GA The observed deficit of strongly Lyman-alpha emitting galaxies at z>6.5 is attributed to increasing neutral hydrogen in the intergalactic medium (IGM), to evolving galaxy properties, or to both. To investigate this, we have performed very deep near-IR spectroscopy of z>7 galaxies using MOSFIRE on the Keck-I Telescope. We measure the Lyman-alpha fraction at z~8 (combined photometric redshift peak at z=7.7) using two methods. First, we derived $N_{\mathrm{Ly}\alpha}/N_{\mathrm{tot}}$ directly, using extensive simulations to correct for incompleteness. Second, we used a Bayesian formalism (introduced by Treu et al. 2012) that compares the z>7 galaxy spectra to models of the Lyman-alpha equivalent width ($W_{\mathrm{Ly}\alpha}$) distribution at z~6. We explored two simple evolutionary scenarios: smooth evolution, where Lyman-alpha is attenuated in all galaxies by a constant factor (perhaps owing to processes from galaxy evolution or a slowly increasing IGM opacity), and patchy evolution, where Lyman-alpha is blocked in some fraction of galaxies (perhaps due to the IGM being opaque along only some fraction of sightlines). The Bayesian formalism places stronger constraints compared with the direct method. Combining our data with that in the literature, we find that at z~8 the Lyman-alpha fraction has dropped by a factor >3 (84% confidence interval) in both the smooth and patchy scenarios compared to the z~6 values. Furthermore, we find tentative evidence that the data favor the patchy scenario over the smooth one (with "positive" Bayesian evidence), extending trends observed at z~7 to higher redshift. If this decrease is a result of reionization as predicted by theory, then our data imply a volume-averaged neutral hydrogen fraction in the IGM of >0.3, suggesting that the reionization of the universe is in progress at z~8. • ### A Study of Massive and Evolved Galaxies at High Redshift(1408.3684) Aug. 22, 2014 astro-ph.GA We use data taken as part of HST/WFC3 observations of the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) to identify massive and evolved galaxies at 3<z<4.5. This is performed using the strength of the Balmer break feature at rest-frame 3648A, which is a diagnostic of the age of the stellar population in galaxies. Using the WFC3 H-band selected catalog for the CANDELS GOODS-S field and deep multi-waveband photometry from optical (HST) to mid-infrared (Spitzer) wavelengths, we identify a population of old and evolved post-starburst galaxies based on the strength of their Balmer breaks (Balmer Break Galaxies, BBGs). The galaxies are also selected to be bright at rest-frame near-IR wavelengths and hence massive. We identify a total of 16 BBGs. Fitting the spectral energy distributions (SEDs) of the BBGs shows that the candidate galaxies have average estimated ages of ~800 Myr and average stellar masses of ~5x10^10 M_sun, consistent with being old and massive systems. Two of our BBG candidates are also identified by criteria that are sensitive to star-forming galaxies (LBG selection). We find a number density of ~3.2x10^-5 Mpc^-3 for the BBGs, corresponding to a mass density of ~2.0x10^6 M_sun/Mpc^3 in the redshift range covered by the survey. Given their old ages and passive evolution, it is argued that some of these objects formed the bulk of their mass only a few hundred million years after the Big Bang. • ### The mass evolution of the first galaxies: stellar mass functions and star formation rates at $4 < z < 7$ in the CANDELS GOODS-South field(1408.2527) Aug. 11, 2014 astro-ph.GA We measure new estimates for the galaxy stellar mass function and star formation rates for samples of galaxies at $z \sim 4,~5,~6~\&~7$ using data in the CANDELS GOODS South field. The deep near-infrared observations allow us to construct the stellar mass function at $z \geq 6$ directly for the first time. We estimate stellar masses for our sample by fitting the observed spectral energy distributions with synthetic stellar populations, including nebular line and continuum emission. The observed UV luminosity functions for the samples are consistent with previous observations; however, we find that the observed $M_{UV}$-$M_{*}$ relation has a shallow slope, more consistent with a constant mass-to-light ratio, and a normalisation which evolves with redshift. Our stellar mass functions have steep low-mass slopes ($\alpha \approx -1.9$), steeper than previously observed at these redshifts and closer to that of the UV luminosity function. Integrating our new mass functions, we find that the observed stellar mass density evolves from $\log_{10} \rho_{*} = 6.64^{+0.58}_{-0.89}$ at $z \sim 7$ to $7.36\pm0.06$ $\text{M}_{\odot} \text{Mpc}^{-3}$ at $z \sim 4$. Finally, combining the measured UV continuum slopes ($\beta$) with their rest-frame UV luminosities, we calculate dust-corrected star-formation rates (SFRs) for our sample. We find that the specific star-formation rate at fixed stellar mass increases with redshift, whilst the global SFR density falls rapidly over this period. Our new SFR density estimates are higher than previously observed at this redshift. • ### Probing Outflows in z=1-2 Galaxies through FeII/FeII* Multiplets(1407.0149) July 9, 2014 astro-ph.GA We report on a study of the 2300-2600\AA FeII/FeII* multiplets in the rest-UV spectra of star-forming galaxies at 1.0<z<2.6 as probes of galactic-scale outflows.
We extracted a mass-limited sample of 97 galaxies at z~1.0-2.6 from ultra-deep spectra obtained during the GMASS spectroscopic survey in the GOODS South field with the VLT and FORS2. We obtain robust measures of the rest equivalent width of the FeII absorption lines down to a limit of W_r>1.5 \AA and of the FeII* emission lines to W_r>0.5 \AA. Whenever we can measure the systemic redshift of the galaxies from the [OII] emission line, we find that both the FeII and MgII absorption lines are blueshifted, indicating that both species trace gaseous outflows. We also find, however, that the FeII gas has a generally lower outflow velocity relative to that of MgII. We investigate the variation of FeII line profiles as a function of the radiative transfer properties of the lines, and find that transitions with higher oscillator strengths are more blueshifted in terms of both line centroids and line wings. We discuss the possibility that FeII lines are suppressed by stellar absorptions. The lower velocities of the FeII lines relative to the MgII doublet, as well as the absence of spatially extended FeII* emission in 2D stacked spectra, suggest that most clouds responsible for the FeII absorption lie close (3-4 kpc) to the disks of galaxies. We show that the FeII/FeII* multiplets offer unique probes of the kinematic structure of galactic outflows. • ### Steadily Increasing Star Formation Rates in Galaxies Observed at 3 <~ z <~ 5 in the CANDELS/GOODS-S Field(1403.6198) March 25, 2014 astro-ph.CO, astro-ph.GA We investigate the star formation histories (SFHs) of high-redshift (3 <~ z <~ 5) star-forming galaxies selected based on their rest-frame ultraviolet (UV) colors in the CANDELS/GOODS-S field. By comparing the results from the spectral-energy-distribution-fitting analysis under two different assumptions about the SFHs (exponentially declining SFHs as well as increasing ones), we conclude that the SFHs of high-redshift star-forming galaxies increase with time rather than exponentially decline. We also examine the correlations between the star formation rates (SFRs) and the stellar masses. When the galaxies are fit with rising SFRs, we find that the trend seen in the data qualitatively matches the expectations from a semi-analytic model of galaxy formation. The mean specific SFR is shown to increase with redshift, also in agreement with the theoretical prediction. From the derived tight correlation between stellar masses and SFRs, we derive the mean SFH of star-forming galaxies in the redshift range of 3 <~ z <~ 5, which shows a steep power-law (with power alpha = 5.85) increase with time. We also investigate the formation timescales and the mean stellar population ages of these star-forming galaxies. Our analysis reveals that UV-selected star-forming galaxies have a broad range of formation redshifts. The derived stellar masses and stellar population ages show a positive correlation, in the sense that more massive galaxies are on average older, but with significant scatter. This large scatter implies that a galaxy's mass is not the only factor affecting the growth or star formation of high-redshift galaxies.
https://money.stackexchange.com/questions/28927/where-to-find-turnover-average-amount-of-time-investors-mutual-funds-held-st
# Where to find turnover / average amount of time investors & mutual funds held stocks they purchased?

In the book The Intelligent Investor, Jason Zweig mentions the annual turnover rate for stocks on the NYSE, that is, the average amount of time investors and mutual funds held stocks that they purchased. I think the turnover rate for the market as a whole, or for individual stocks, may be indicative of the degree of price change an investor should expect over time. Personally, I would avoid investing in stocks with a high turnover rate. Knowing the average turnover rate of the market may give investors a baseline with which to compare that of specific companies.

Where can I find this information for the market today? I would like to compare the current turnover rate to that of different past time periods.

• I am curious how this data would help anyone make a decision regarding their personal finances. If, say, the number steadily went from a year to a month over the period, what would you do now? – JTP - Apologise to Monica Mar 10 '14 at 22:13
• @Jackmc1047 When comments are added to your question or answer asking for more information, it's better to edit your post to incorporate the new information, rather than reply in a comment. A comment may be buried/overlooked. Since you are the original poster (OP) of this question, you can simply edit to clarify. – Chris W. Rea Mar 13 '14 at 0:12

The turnover rate itself is defined as

$$\text{Turnover} = \frac{\text{Number of Shares Traded}}{\text{Number of Outstanding Shares}}$$
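To make the formula concrete, here is a worked example (the numbers are illustrative, not from the original thread): if about 2,000 million shares of a stock trade during a year while 1,000 million shares are outstanding, then

$$\text{Turnover} = \frac{2000}{1000} = 2.0,$$

which corresponds to an average holding period of roughly $1/2.0 = 0.5$ years, i.e. about six months; a market-wide turnover of 1.0 would correspond to an average holding period of one year.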
https://www.physicsforums.com/threads/rational-numbers.416925/
# Rational Numbers

Are whole numbers rational numbers? If so, will any whole number divided by any other whole number result in a rational number?

Office_Shredder

A rational number, by definition, is an integer divided by another integer (that's not equal to zero), so any whole number divided by any other non-zero whole number is indeed a rational number. As for the integers themselves, for example $$3=\frac{3}{1}$$ we see that 3 is in fact an integer divided by another integer, so it is a rational number as well. The same argument can be used for any whole number.
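As an illustrative sketch (my addition, not from the thread), Python's exact-arithmetic fractions module makes the same point mechanically: any integer over a non-zero integer is a valid rational, and a zero denominator is the one excluded case.

```python
from fractions import Fraction

# A whole number is a rational number with denominator 1.
print(Fraction(3, 1))    # 3
# Any whole number divided by a non-zero whole number is rational.
print(Fraction(10, 4))   # 5/2 (automatically reduced)
# Division by zero is the one excluded case.
try:
    Fraction(1, 0)
except ZeroDivisionError as err:
    print("undefined:", err)
```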
http://nowpublishers.com/Home/ForAuthors/LatexStyles
## LaTeX Style Files

Below are the links to download our LaTeX style files for our Foundations and Trends® and Research Journals. Consult the instructions, download the zipped file for the journal you are writing for below, and unpack into your normal LaTeX installation. Note that there are extra instructions for creating a bibliography file at the bottom of this page.

##### Foundations and Trends® Journals:

With special thanks to Neal Parikh (Stanford University), who designed and created the class file for our Foundations and Trends® journals, and Ulrike Fischer, who designed and created the class file for our research journals.

### Bibliography file

It is important that the bibliographic information is entered in the proper, structured format. We recommend using a separate bibliography file (also called “bibliographic database”), instead of entering the references directly in the LaTeX source file. Not only does this make it easier for you to maintain the references, it also ensures that they are in a structure that is suitable for our conversion processes. Please enter each bibliographic item according to the following structure, in the same order and using the same lettercase.

General syntax. This is the general syntax for entries in the bibliographic database:

    @entry_type{key,
      field_name={field text},
      ..
      field_name={field text},
    }

Example article:

    @article{vetracabitari97,
      author={J. Ventura-Travesset and G. Caire and E. Biglieri and G. Taricco},
      title={Impact of diversity reception on fading channels with coded modulation},
      journal={IEEE Trans. on Communications},
      year={1997},
      volume={45},
      number={5},
      pages={563-572},
    }

In the above example, the first word, prefixed with @, describes the 'entry_type' (in this case, article). The entry_type is followed by the reference information for that entry, enclosed in curly braces { }. After the first opening brace { we find the key by which the reference is used in the LaTeX document, as referenced by the \cite command. This key may be any combination of letters, numerals and hyphens or dots, except commas. Please note that this key is also case-sensitive: if "vetracabitari97" is used in the bibliography file, and "Vetracabitari97" in the \cite command, problems will arise during conversion and DOI look-up processes.
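For completeness, here is a minimal usage sketch (my own illustration, not part of the publisher's instructions; the file name refs.bib and the plain bibliography style are assumptions) showing how the example entry would be cited from a LaTeX document:

```latex
% Assumes the example entry above is saved in refs.bib next to this file.
\documentclass{article}
\begin{document}
Coded modulation over fading channels is analyzed
in~\cite{vetracabitari97}.

\bibliographystyle{plain} % any installed style would do
\bibliography{refs}       % refs.bib, given without the extension
\end{document}
```

Compiling with latex, then bibtex, then latex twice resolves the citation.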
https://www.mathwizurd.com/linalg/2018/11/12/rank
Mathwizurd.com is created by David Witten, a mathematics and computer science student at Vanderbilt University. For more information, see the "About" page.

Definition

The rank of a matrix is the number of non-zero rows in its RREF.

$$\begin{bmatrix}1 & 2 & 5 & 10 \\ 3 & 4 & 8 & 10 \\ 3 & 4 & 8 & 10\end{bmatrix} \to \begin{bmatrix}1 & 2 & 5 & 10 \\ 0 & -2 & -7 & -20 \\ 0 & 0 & 0 & 0\end{bmatrix} \to \begin{bmatrix}\boxed{1} & 0 & -2 & -10 \\ 0 & \boxed{1} & 3.5 & 10 \\ 0 & 0 & 0 & 0\end{bmatrix}$$

In this example, the rank of the matrix is 2. Since the rank equals the number of non-zero rows of the RREF, it also equals the number of pivots, and the number of pivots can exceed neither the number of rows nor the number of columns:

$$\text{Rank} \le \min(m,n) \quad (m = \text{rows},\ n = \text{columns})$$

Injective

Injective means a function is one-to-one. That is to say,

$$f \text{ is injective iff } \forall x,y:\ f(x) = f(y) \to x = y$$

Theorem: T is injective iff Tx = 0 has only the trivial solution.

First, assume $T$ is injective. $T(0) = 0$ by linearity, so if $T(x) = T(0)$, injectivity forces $x = 0$. Hence $T(x) = 0$ has only the trivial solution.

For the converse, we prove the contrapositive: instead of proving ($Tx = 0$ trivial) $\to$ ($T$ injective), we prove ($T$ not injective) $\to$ ($Tx = 0$ non-trivial). Assume $T$ is not injective, so there exist $u \neq v$ such that $Tu = Tv = b$. Then

$$Tu - Tv = b - b = 0 \quad\to\quad T(u - v) = 0,$$

and since $u - v \neq 0$, the equation $Tx = 0$ has a non-trivial solution.

Why is this important? Because we can now use another important theorem.

Theorem: Ax = 0 having only the trivial solution means the columns of A are linearly independent.

Let's dissect what "$Ax = 0$ is trivial" means:

$$A\begin{bmatrix}x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n\end{bmatrix} = \vec{0}_m \quad\Longleftrightarrow\quad x_1\vec{a_1} + x_2\vec{a_2} + \cdots + x_n\vec{a_n} = \vec{0}_m$$

Saying that this sum equals $\vec{0}$ only when $x_1 = \cdots = x_n = 0$ is exactly saying that $\vec{a_1}, \ldots, \vec{a_n}$ are linearly independent.

Theorem: If the columns of A are linearly independent, then the number of pivots equals n.

Row reduction is a linear transformation, because it can be expressed as multiplication by an invertible matrix $M$. Writing $B = MA$ for the reduced matrix,

$$Ax = 0 \to MAx = M \cdot 0 \to Bx = 0.$$

Observe that the set of solutions $x$ does not change, so the columns of $B$ are also linearly independent. If some column of $B$ lacked a pivot, the corresponding variable would be free and $Bx = 0$ would have a non-trivial solution, which is impossible. So each of the $n$ columns contains a pivot 1, each on a different row, giving $n$ pivots and therefore $n$ non-zero rows. So, $\boxed{\text{rank} = n}$.

Surjective

A linear transformation $T$ is surjective when $\forall b \in \mathbb{R}^m,\ \exists x \text{ s.t. } Tx = b$. Alternatively, you can say the columns of $T$ span $\mathbb{R}^m$.

Theorem: A is surjective means that there are m pivots.

If $A$ is surjective, then there exists a solution of $Ax = b$ for every $b$. We can write this as an augmented matrix:

$$\begin{bmatrix} A_1 & b_1 \\ A_2 & b_2 \\ A_3 & b_3 \\ \vdots \\ A_m & b_m\end{bmatrix}$$

Assume $A$ does not have $m$ pivots; then row reduction leaves a row whose left side is all 0:

$$\begin{bmatrix} A_1 & b_1 \\ A_2 & b_2 \\ A_3 & b_3 \\ \vdots \\ \vec{0} & b_m - (c_1 b_1 + \cdots + c_{m-1} b_{m-1})\end{bmatrix}$$

In order for this system to be consistent, $b_m - (c_1 b_1 + \cdots + c_{m-1} b_{m-1}) = 0$. However, we specified that this works for any $b_1, \ldots, b_m$. Therefore, there is a contradiction, and we conclude that $A$ has $m$ pivots.

Row Space

A row space is the span of the rows of a matrix. The row spaces of two row-equivalent matrices are the same. Why? Because row reduction only adds linear combinations of rows to other rows. The non-zero rows of the row-reduced matrix form a basis of the row space, so their number (the rank) is the dimension of the row space.

Column Space

The column space is the span of the columns of A; in other words, it is the image of the matrix A. I will prove that the dimension of the column space equals the dimension of the row space, which equals the rank.

Null Space

The null space is the set of all x such that Ax = 0. I will later prove that Rank + Dim(Null Space) = n (the number of columns of A).
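As a quick numerical cross-check of the example above (my own sketch; NumPy is not part of the original page), `np.linalg.matrix_rank` agrees with the pivot count from the RREF:

```python
import numpy as np

# The example matrix from the RREF computation above.
A = np.array([[1, 2, 5, 10],
              [3, 4, 8, 10],
              [3, 4, 8, 10]])

# NumPy computes the rank numerically (via SVD), but for this matrix it
# matches the number of pivots found by row reduction: 2.
print(np.linalg.matrix_rank(A))  # -> 2
```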
https://cs.stackexchange.com/questions/57510/reduction-of-3-sat-to-vertex-cover
# Reduction of 3-SAT to Vertex Cover?

Can someone explain to me, in the simplest possible way, how to reduce 3-SAT to Vertex Cover? I am following the explanation here (scroll to the bottom of page 4). I understand the basic setup of having two "gadgets": the 2-node variable gadgets and the 3-node clause gadgets. I also understand the formula $k = \text{variables} + 2\cdot\text{clauses}$ as the minimum number of nodes required to cover all the edges. What I don't understand is how this setup proves that if there exists a $k$-covering, then the boolean expression in CNF is satisfiable. Examples with expressions that are satisfiable and not satisfiable would be helpful. Also, once the 3-SAT problem is converted to a $k$-covering instance, does it provide a means to identify which value (true or false) should be assigned to each variable so as to satisfy the boolean expression?

• I suggest you keep reading the document. The correctness proof is on page 5. – Yuval Filmus Sep 3 '17 at 18:34
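As a concrete illustration of the gadgets (my own sketch of the standard construction, not code from the linked notes): variable gadgets are single edges, clause gadgets are triangles, and cross edges tie each triangle corner to the node of its literal. A cover of size $k = \text{variables} + 2\cdot\text{clauses}$ must take exactly one endpoint of each variable edge (that choice is the truth assignment) and exactly two corners of each triangle, so the cross edge of the skipped corner forces its literal node, i.e. a true literal, into the cover.

```python
# Sketch of the standard 3-SAT -> Vertex Cover construction described above.
# Literals are ints: +i for x_i, -i for NOT x_i; a formula is a list of 3-tuples.

def sat_to_vc(clauses):
    edges = set()
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    # Variable gadget: one edge (x_i, NOT x_i); a minimum cover picks exactly
    # one endpoint, which encodes the truth assignment.
    for v in variables:
        edges.add((("lit", v, True), ("lit", v, False)))
    for i, clause in enumerate(clauses):
        # Clause gadget: a triangle; any cover must pick at least two corners.
        corners = [("cl", i, j) for j in range(3)]
        edges.update([(corners[0], corners[1]),
                      (corners[1], corners[2]),
                      (corners[0], corners[2])])
        # Cross edges: the corner left out of the cover forces the node of its
        # literal into the cover, so that literal must be set to true.
        for corner, lit in zip(corners, clause):
            edges.add((corner, ("lit", abs(lit), lit > 0)))
    k = len(variables) + 2 * len(clauses)
    return edges, k

# (x1 v x2 v x3) AND (NOT x1 v x2 v NOT x3): n = 3 variables, m = 2 clauses.
edges, k = sat_to_vc([(1, 2, 3), (-1, 2, -3)])
print(len(edges), k)  # 15 edges, k = 3 + 2*2 = 7
```

Reading the assignment back out answers the last part of the question: in any cover of size $k$, the chosen endpoints of the variable edges are the literals to set to true, and every clause is satisfied because the literal node attached to its skipped corner is in the cover.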